<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://airwiki.deib.polimi.it/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=DavideMigliore</id>
		<title>AIRWiki - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="https://airwiki.deib.polimi.it/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=DavideMigliore"/>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php/Special:Contributions/DavideMigliore"/>
		<updated>2026-04-04T09:13:04Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.25.6</generator>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=User:DavideMigliore&amp;diff=13632</id>
		<title>User:DavideMigliore</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=User:DavideMigliore&amp;diff=13632"/>
				<updated>2011-09-26T09:33:46Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{PhD&lt;br /&gt;
|firstname=Davide Antonio&lt;br /&gt;
|lastname=Migliore&lt;br /&gt;
|email=d.migliore@evidence.eu.com&lt;br /&gt;
|advisor=MatteoMatteucci&lt;br /&gt;
|resarea=Computer Vision and Image Analysis; Robotics&lt;br /&gt;
|photo=DavideMigliore davide_migliore.jpg&lt;br /&gt;
|status=inactive&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== Who am I? ==&lt;br /&gt;
Davide Migliore was born in Milan, Italy, on 02/02/1981. He received the Laurea degree in Computer Science Engineering in 2005 from the Politecnico di Milano with a final mark of 100/100 cum laude. From January 2006 to December 2008 he was a PhD student at the Department of Electronics and Information of the Politecnico di Milano, with a scholarship sponsored by the Italian Institute of Technology (IIT).&lt;br /&gt;
&lt;br /&gt;
He was a postdoc at IDSIA (Lugano, Switzerland), working on the EU project IMCLEVER.&lt;br /&gt;
&lt;br /&gt;
He is currently a project manager at Evidence Srl.&lt;br /&gt;
&lt;br /&gt;
His research topics include: navigation systems for mobile robots, Simultaneous Localization And Mapping (SLAM), robotics for impaired people, algorithms for video surveillance and environment monitoring, object recognition, and uncertain geometry.&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
[http://www.evidence.eu.com]&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=User:DavideMigliore&amp;diff=13631</id>
		<title>User:DavideMigliore</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=User:DavideMigliore&amp;diff=13631"/>
				<updated>2011-09-26T09:32:57Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: /* Who I am? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{PhD&lt;br /&gt;
|firstname=Davide Antonio&lt;br /&gt;
|lastname=Migliore&lt;br /&gt;
|email=davide@idsia.ch&lt;br /&gt;
|advisor=MatteoMatteucci&lt;br /&gt;
|resarea=Computer Vision and Image Analysis; Robotics&lt;br /&gt;
|photo=DavideMigliore davide_migliore.jpg&lt;br /&gt;
|status=inactive&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== Who am I? ==&lt;br /&gt;
Davide Migliore was born in Milan, Italy, on 02/02/1981. He received the Laurea degree in Computer Science Engineering in 2005 from the Politecnico di Milano with a final mark of 100/100 cum laude. From January 2006 to December 2008 he was a PhD student at the Department of Electronics and Information of the Politecnico di Milano, with a scholarship sponsored by the Italian Institute of Technology (IIT).&lt;br /&gt;
&lt;br /&gt;
He was a postdoc at IDSIA (Lugano, Switzerland), working on the EU project IMCLEVER.&lt;br /&gt;
&lt;br /&gt;
He is currently a project manager at Evidence Srl.&lt;br /&gt;
&lt;br /&gt;
His research topics include: navigation systems for mobile robots, Simultaneous Localization And Mapping (SLAM), robotics for impaired people, algorithms for video surveillance and environment monitoring, object recognition, and uncertain geometry.&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
[http://www.dei.polimi.it/personale/dettaglio.php?id_persona=438&amp;amp;id_sezione=&amp;amp;lettera=M&amp;amp;idlang=ita DEI homepage]&lt;br /&gt;
&lt;br /&gt;
[http://www.idsia.ch/~migliore IDSIA homepage]&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=User:DavideMigliore&amp;diff=9430</id>
		<title>User:DavideMigliore</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=User:DavideMigliore&amp;diff=9430"/>
				<updated>2009-11-23T12:44:07Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: /* Links */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Researcher&lt;br /&gt;
|firstname=Davide Antonio&lt;br /&gt;
|lastname=Migliore&lt;br /&gt;
|email=davide@idsia.ch&lt;br /&gt;
|advisor=MatteoMatteucci&lt;br /&gt;
|resarea=Computer Vision and Image Analysis; Robotics&lt;br /&gt;
|photo=DavideMigliore davide_migliore.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== Who am I? ==&lt;br /&gt;
Davide Migliore was born in Milan, Italy, on 02/02/1981. He received the Laurea degree in Computer Science Engineering in 2005 from the Politecnico di Milano with a final mark of 100/100 cum laude. From January 2006 to December 2008 he was a PhD student at the Department of Electronics and Information of the Politecnico di Milano, with a scholarship sponsored by the Italian Institute of Technology (IIT).&lt;br /&gt;
&lt;br /&gt;
Currently he is a Postdoc at IDSIA (Lugano, Switzerland).&lt;br /&gt;
&lt;br /&gt;
His research topics include: navigation systems for mobile robots, Simultaneous Localization And Mapping (SLAM), robotics for impaired people, algorithms for video surveillance and environment monitoring, object recognition, and uncertain geometry.&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
[http://www.dei.polimi.it/personale/dettaglio.php?id_persona=438&amp;amp;id_sezione=&amp;amp;lettera=M&amp;amp;idlang=ita DEI homepage]&lt;br /&gt;
&lt;br /&gt;
[http://www.idsia.ch/~migliore IDSIA homepage]&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=User:DavideMigliore&amp;diff=9429</id>
		<title>User:DavideMigliore</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=User:DavideMigliore&amp;diff=9429"/>
				<updated>2009-11-23T12:43:47Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Researcher&lt;br /&gt;
|firstname=Davide Antonio&lt;br /&gt;
|lastname=Migliore&lt;br /&gt;
|email=davide@idsia.ch&lt;br /&gt;
|advisor=MatteoMatteucci&lt;br /&gt;
|resarea=Computer Vision and Image Analysis; Robotics&lt;br /&gt;
|photo=DavideMigliore davide_migliore.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== Who am I? ==&lt;br /&gt;
Davide Migliore was born in Milan, Italy, on 02/02/1981. He received the Laurea degree in Computer Science Engineering in 2005 from the Politecnico di Milano with a final mark of 100/100 cum laude. From January 2006 to December 2008 he was a PhD student at the Department of Electronics and Information of the Politecnico di Milano, with a scholarship sponsored by the Italian Institute of Technology (IIT).&lt;br /&gt;
&lt;br /&gt;
Currently he is a Postdoc at IDSIA (Lugano, Switzerland).&lt;br /&gt;
&lt;br /&gt;
His research topics include: navigation systems for mobile robots, Simultaneous Localization And Mapping (SLAM), robotics for impaired people, algorithms for video surveillance and environment monitoring, object recognition, and uncertain geometry.&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
[http://www.dei.polimi.it/personale/dettaglio.php?id_persona=438&amp;amp;id_sezione=&amp;amp;lettera=M&amp;amp;idlang=ita DEI homepage]&lt;br /&gt;
[http://www.idsia.ch/~migliore IDSIA homepage]&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=User:DavideMigliore&amp;diff=9428</id>
		<title>User:DavideMigliore</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=User:DavideMigliore&amp;diff=9428"/>
				<updated>2009-11-23T12:43:13Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: /* Who I am? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Researcher&lt;br /&gt;
|firstname=Davide Antonio&lt;br /&gt;
|lastname=Migliore&lt;br /&gt;
|email=migliore@elet.polimi.it&lt;br /&gt;
|advisor=MatteoMatteucci&lt;br /&gt;
|resarea=Computer Vision and Image Analysis; Robotics&lt;br /&gt;
|photo=DavideMigliore davide_migliore.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== Who am I? ==&lt;br /&gt;
Davide Migliore was born in Milan, Italy, on 02/02/1981. He received the Laurea degree in Computer Science Engineering in 2005 from the Politecnico di Milano with a final mark of 100/100 cum laude. From January 2006 to December 2008 he was a PhD student at the Department of Electronics and Information of the Politecnico di Milano, with a scholarship sponsored by the Italian Institute of Technology (IIT).&lt;br /&gt;
&lt;br /&gt;
Currently he is a Postdoc at IDSIA (Lugano, Switzerland).&lt;br /&gt;
&lt;br /&gt;
His research topics include: navigation systems for mobile robots, Simultaneous Localization And Mapping (SLAM), robotics for impaired people, algorithms for video surveillance and environment monitoring, object recognition, and uncertain geometry.&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
[http://www.dei.polimi.it/personale/dettaglio.php?id_persona=438&amp;amp;id_sezione=&amp;amp;lettera=M&amp;amp;idlang=ita DEI homepage]&lt;br /&gt;
[http://www.idsia.ch/~migliore IDSIA homepage]&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=User:DavideMigliore&amp;diff=9427</id>
		<title>User:DavideMigliore</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=User:DavideMigliore&amp;diff=9427"/>
				<updated>2009-11-23T12:41:57Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: /* Links */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Researcher&lt;br /&gt;
|firstname=Davide Antonio&lt;br /&gt;
|lastname=Migliore&lt;br /&gt;
|email=migliore@elet.polimi.it&lt;br /&gt;
|advisor=MatteoMatteucci&lt;br /&gt;
|resarea=Computer Vision and Image Analysis; Robotics&lt;br /&gt;
|photo=DavideMigliore davide_migliore.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== Who am I? ==&lt;br /&gt;
Davide Migliore was born in Milan, Italy, on 02/02/1981. He received the Laurea degree in Computer Science Engineering in 2005 from the Politecnico di Milano with a final mark of 100/100 cum laude. Since January 2006 he has been a PhD student at the Department of Electronics and Information of the Politecnico di Milano, with a scholarship sponsored by the Italian Institute of Technology (IIT).&lt;br /&gt;
His research topics include: navigation systems for mobile robots, Simultaneous Localization And Mapping (SLAM), robotics for impaired people, algorithms for video surveillance and environment monitoring, object recognition, and uncertain geometry.&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
[http://www.dei.polimi.it/personale/dettaglio.php?id_persona=438&amp;amp;id_sezione=&amp;amp;lettera=M&amp;amp;idlang=ita DEI homepage]&lt;br /&gt;
[http://www.idsia.ch/~migliore IDSIA homepage]&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Rawseeds&amp;diff=6267</id>
		<title>Rawseeds</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Rawseeds&amp;diff=6267"/>
				<updated>2009-05-12T03:34:28Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: /* Other Politecnico di Milano people */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Image:Rawseeds small.jpg|right|350px]]&lt;br /&gt;
&lt;br /&gt;
== '''Part 1: project profile''' ==&lt;br /&gt;
&lt;br /&gt;
=== Project name ===&lt;br /&gt;
RAWSEEDS, or ''Robotics Advancement through Web-publishing of Sensorial and Elaborated Extensive Data Sets''.&lt;br /&gt;
&lt;br /&gt;
=== Project short description ===&lt;br /&gt;
Project '''RAWSEEDS''' aims to gather and publish on the web:&lt;br /&gt;
*high-quality '''data sets''', obtained by exploring indoor and outdoor environments with  suitably equipped mobile robots;&lt;br /&gt;
*problems (called '''Benchmark Problems''' or BPs) defined on the above data sets, each including a description of the methodology to be used when evaluating solutions to it;&lt;br /&gt;
*algorithms (called '''Benchmark Solutions''' or BSs) that solve the BPs, along with their output on the associated problems and the results of evaluating that output against the criteria defined in the BP.&lt;br /&gt;
&lt;br /&gt;
RAWSEEDS' BPs and BSs mainly focus on the problems of ''localization'', ''mapping'' and SLAM (Simultaneous Localization And Mapping) in robotics. As a whole, they are called the ''Rawseeds Benchmarking Toolkit'', because they can be used to assess and compare algorithms.&lt;br /&gt;
&lt;br /&gt;
=== Dates ===&lt;br /&gt;
Start date: 2006/01/11&lt;br /&gt;
&lt;br /&gt;
End date: 2009/04/30&lt;br /&gt;
&lt;br /&gt;
=== Internet site(s) ===&lt;br /&gt;
You can learn more (and download everything that RAWSEEDS has produced until now) from the official website of the project, here: http://www.rawseeds.org.&lt;br /&gt;
Through the website you can also ''upload your own Benchmark Solutions''.&lt;br /&gt;
&lt;br /&gt;
=== People involved ===&lt;br /&gt;
==== Project leader ====&lt;br /&gt;
Matteo Matteucci - [[User:MatteoMatteucci]]&lt;br /&gt;
&lt;br /&gt;
==== Other Politecnico di Milano people ====&lt;br /&gt;
&lt;br /&gt;
Giulio Fontana - [[User:GiulioFontana]]&lt;br /&gt;
&lt;br /&gt;
Davide Migliore - [[User:DavideMigliore]]&lt;br /&gt;
&lt;br /&gt;
==== Students ====&lt;br /&gt;
&lt;br /&gt;
No students are currently involved.&lt;br /&gt;
&lt;br /&gt;
==== External personnel: ====&lt;br /&gt;
&lt;br /&gt;
Davide Rizzi ([[User:DavideRizzi]]) is actively working on the project.&lt;br /&gt;
&lt;br /&gt;
Moreover, three foreign partners are collaborating on the RAWSEEDS project. They are:&lt;br /&gt;
&lt;br /&gt;
[http://www.informatik.uni-freiburg.de/welcome-en/view?set_language=en Albert-Ludwigs-Universität Freiburg, Germany] (Institut für Informatik)&lt;br /&gt;
&lt;br /&gt;
[http://diis.unizar.es/ Universidad de Zaragoza, Spain] (Depto. Informática e Ingeniería de Sistemas)&lt;br /&gt;
&lt;br /&gt;
Università degli Studi di Milano-Bicocca, Italy (Dip. di Informatica, Sistemistica e Comunicazione), [http://www.ira.disco.unimib.it/ lab. I.R.A.], with these people:&lt;br /&gt;
* prof. [http://www.disco.unimib.it/sorrenti Domenico G. Sorrenti]&lt;br /&gt;
* dr. [http://www.disco.unimib.it/link/page.jsp?id=164077610 Daniele Marzorati]&lt;br /&gt;
* mr. Axel Furlan&lt;br /&gt;
&lt;br /&gt;
=== Laboratory work and risk analysis ===&lt;br /&gt;
&lt;br /&gt;
Laboratory work for this project will be mainly performed at AIRLab/Lambrate. It will include significant amounts of mechanical work and little electrical and electronic activity. Potentially risky activities are the following:&lt;br /&gt;
* Use of mechanical tools. Standard safety measures described in [http://airlab.elet.polimi.it/index.php/airlab/content/download/461/4110/file/documento_valutazione_rischi_AIRLab.pdf Safety norms] will be followed.&lt;br /&gt;
* Use of soldering iron. Standard safety measures described in [http://airlab.elet.polimi.it/index.php/airlab/content/download/461/4110/file/documento_valutazione_rischi_AIRLab.pdf Safety norms] will be followed.&lt;br /&gt;
* Transportation of heavy loads (e.g. robots).  Standard safety measures described in [http://airlab.elet.polimi.it/index.php/airlab/content/download/461/4110/file/documento_valutazione_rischi_AIRLab.pdf Safety norms] will be followed.&lt;br /&gt;
* Robot testing.  Standard safety measures described in [http://airlab.elet.polimi.it/index.php/airlab/content/download/461/4110/file/documento_valutazione_rischi_AIRLab.pdf Safety norms] will be followed.&lt;br /&gt;
* Use of a modified (human-guided) golf cart. We will use the cart only in open-air environments.&lt;br /&gt;
&lt;br /&gt;
== '''Part 2: project description''' ==&lt;br /&gt;
Please refer to http://rawseeds.org for details, results, and everything else.&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Rawseeds&amp;diff=6266</id>
		<title>Rawseeds</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Rawseeds&amp;diff=6266"/>
				<updated>2009-05-12T03:34:07Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: /* Other Politecnico di Milano people */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Image:Rawseeds small.jpg|right|350px]]&lt;br /&gt;
&lt;br /&gt;
== '''Part 1: project profile''' ==&lt;br /&gt;
&lt;br /&gt;
=== Project name ===&lt;br /&gt;
RAWSEEDS, or ''Robotics Advancement through Web-publishing of Sensorial and Elaborated Extensive Data Sets''.&lt;br /&gt;
&lt;br /&gt;
=== Project short description ===&lt;br /&gt;
Project '''RAWSEEDS''' aims to gather and publish on the web:&lt;br /&gt;
*high-quality '''data sets''', obtained by exploring indoor and outdoor environments with  suitably equipped mobile robots;&lt;br /&gt;
*problems (called '''Benchmark Problems''' or BPs) defined on the above data sets, each including a description of the methodology to be used when evaluating solutions to it;&lt;br /&gt;
*algorithms (called '''Benchmark Solutions''' or BSs) that solve the BPs, along with their output on the associated problems and the results of evaluating that output against the criteria defined in the BP.&lt;br /&gt;
&lt;br /&gt;
RAWSEEDS' BPs and BSs mainly focus on the problems of ''localization'', ''mapping'' and SLAM (Simultaneous Localization And Mapping) in robotics. As a whole, they are called the ''Rawseeds Benchmarking Toolkit'', because they can be used to assess and compare algorithms.&lt;br /&gt;
&lt;br /&gt;
=== Dates ===&lt;br /&gt;
Start date: 2006/01/11&lt;br /&gt;
&lt;br /&gt;
End date: 2009/04/30&lt;br /&gt;
&lt;br /&gt;
=== Internet site(s) ===&lt;br /&gt;
You can learn more (and download everything that RAWSEEDS has produced until now) from the official website of the project, here: http://www.rawseeds.org.&lt;br /&gt;
Through the website you can also ''upload your own Benchmark Solutions''.&lt;br /&gt;
&lt;br /&gt;
=== People involved ===&lt;br /&gt;
==== Project leader ====&lt;br /&gt;
Matteo Matteucci - [[User:MatteoMatteucci]]&lt;br /&gt;
&lt;br /&gt;
==== Other Politecnico di Milano people ====&lt;br /&gt;
&lt;br /&gt;
Giulio Fontana - [[User:GiulioFontana]]&lt;br /&gt;
Davide Migliore - [[User:DavideMigliore]]&lt;br /&gt;
&lt;br /&gt;
==== Students ====&lt;br /&gt;
&lt;br /&gt;
No students are currently involved.&lt;br /&gt;
&lt;br /&gt;
==== External personnel: ====&lt;br /&gt;
&lt;br /&gt;
Davide Rizzi ([[User:DavideRizzi]]) is actively working on the project.&lt;br /&gt;
&lt;br /&gt;
Moreover, three foreign partners are collaborating on the RAWSEEDS project. They are:&lt;br /&gt;
&lt;br /&gt;
[http://www.informatik.uni-freiburg.de/welcome-en/view?set_language=en Albert-Ludwigs-Universität Freiburg, Germany] (Institut für Informatik)&lt;br /&gt;
&lt;br /&gt;
[http://diis.unizar.es/ Universidad de Zaragoza, Spain] (Depto. Informática e Ingeniería de Sistemas)&lt;br /&gt;
&lt;br /&gt;
Università degli Studi di Milano-Bicocca, Italy (Dip. di Informatica, Sistemistica e Comunicazione), [http://www.ira.disco.unimib.it/ lab. I.R.A.], with these people:&lt;br /&gt;
* prof. [http://www.disco.unimib.it/sorrenti Domenico G. Sorrenti]&lt;br /&gt;
* dr. [http://www.disco.unimib.it/link/page.jsp?id=164077610 Daniele Marzorati]&lt;br /&gt;
* mr. Axel Furlan&lt;br /&gt;
&lt;br /&gt;
=== Laboratory work and risk analysis ===&lt;br /&gt;
&lt;br /&gt;
Laboratory work for this project will be mainly performed at AIRLab/Lambrate. It will include significant amounts of mechanical work and little electrical and electronic activity. Potentially risky activities are the following:&lt;br /&gt;
* Use of mechanical tools. Standard safety measures described in [http://airlab.elet.polimi.it/index.php/airlab/content/download/461/4110/file/documento_valutazione_rischi_AIRLab.pdf Safety norms] will be followed.&lt;br /&gt;
* Use of soldering iron. Standard safety measures described in [http://airlab.elet.polimi.it/index.php/airlab/content/download/461/4110/file/documento_valutazione_rischi_AIRLab.pdf Safety norms] will be followed.&lt;br /&gt;
* Transportation of heavy loads (e.g. robots).  Standard safety measures described in [http://airlab.elet.polimi.it/index.php/airlab/content/download/461/4110/file/documento_valutazione_rischi_AIRLab.pdf Safety norms] will be followed.&lt;br /&gt;
* Robot testing.  Standard safety measures described in [http://airlab.elet.polimi.it/index.php/airlab/content/download/461/4110/file/documento_valutazione_rischi_AIRLab.pdf Safety norms] will be followed.&lt;br /&gt;
* Use of a modified (human-guided) golf cart. We will use the cart only in open-air environments.&lt;br /&gt;
&lt;br /&gt;
== '''Part 2: project description''' ==&lt;br /&gt;
Please refer to http://rawseeds.org for details, results, and everything else.&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Cameras,_lenses_and_mirrors&amp;diff=6012</id>
		<title>Cameras, lenses and mirrors</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Cameras,_lenses_and_mirrors&amp;diff=6012"/>
				<updated>2009-04-21T11:05:29Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: /* Lenses */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==IMPORTANT NOTES==&lt;br /&gt;
'''Never touch the sensor element (CCD or CMOS) of a camera with anything!''' It can very easily be scratched.&lt;br /&gt;
&lt;br /&gt;
'''Never touch the glass elements of a lens with your hands!''' The oil from human skin is harmful.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Cameras and frame grabbers==&lt;br /&gt;
===Cameras===&lt;br /&gt;
In the AIRLab you can find different kinds of cameras. These are the main groups:&lt;br /&gt;
*'''Analogue cameras'''. Video output is given as an electrical signal, which needs analogue-to-digital conversion to be processed by a computer; this is done by a specific card called ''frame grabber'' or ''video capture card'' (the latter tend to be the lowest-performance items; see [[Cameras, lenses and mirrors#Frame grabbers]] for details). Analogue video is outdated for computer vision and robotics applications, due to its cost, low performance and complexity; nowadays digital camera systems (such as all the ones listed below) are always preferred.&lt;br /&gt;
*'''USB cameras'''. Usually very cheap, they are suitable for low-performance applications (i.e. those where a low frame rate suffices and low image quality is acceptable). Their main advantage (along with cost) is that every modern computer has USB ports. The fact that the USB standard includes 5V DC power supply lines helps simplify camera design and use.&lt;br /&gt;
*'''FireWire cameras'''. The FireWire (or IEEE 1394) bus is generally used for low-end industrial cameras, i.e. devices with technical characteristics well above those of typical USB cameras, though still low-performance by machine vision standards. Industrial cameras usually give the user much wider control over the acquisition parameters than consumer cameras, and are therefore usually preferred in robotics; their downside is the higher cost. There are different versions of the IEEE 1394 link (see http://en.wikipedia.org/wiki/Firewire for details), with different bitrates, starting from the 400Mbit/s FireWire 400. They are all generally considered superior to USB 2.0, even though the theoretical bandwidth of FireWire 400 is lower. FireWire ports can include power supply lines, but some interfaces (in particular those on portable computers) omit them. Although the use of FireWire interfaces has expanded in recent years, they are not yet a standard feature of motherboards.&lt;br /&gt;
*'''GigE Vision cameras'''. GigE Vision (or Gigabit Ethernet Vision) is a rather new connection standard for machine vision, based upon the established Ethernet protocol in its Gigabit (i.e. 1000Mbps) version. It is very interesting, as complex multiple-camera systems can be easily built using existing (Gigabit) Ethernet hardware, such as cables and switches. Vision data is acquired simply through a generic Ethernet port, commonly found on motherboards or easily added. However, 100Mbps (or ''fast Ethernet'') ports are not guaranteed to work and can sustain only modest video streams; on the other hand, 1000Mbps ports are now standard on motherboards, so this will not be a problem anymore in a few years. It seems that GigE Vision is becoming the most common interface for low- to medium-performance industrial cameras.&lt;br /&gt;
*'''CameraLink cameras'''. CameraLink is a high-speed interface expressly developed for high-performance machine vision applications. It is a point-to-point link, i.e. a CameraLink connection joins a single camera to a digital acquisition card (''frame grabber''). Its use is limited to applications where extreme frame rates ''and'' resolutions are needed, because CameraLink gear is very expensive.&lt;br /&gt;
&lt;br /&gt;
The following is a list of the cameras available in the AIRLab. (To be precise, it is a list of the cameras that are modern enough to be useful.) For each of them the main specifications (and a link to the full specifications) are given. Details on the different types of lens mount are given below in [[Cameras, lenses and mirrors#Lenses]]. The 'how many?' field tells whether multiple identical items are available. Finally, the 'where?' field tells you in which of the AIRLab sites (listed in [[The Labs]]) you can find an item, and the 'project' field specifies which project (if any) is using it.&lt;br /&gt;
&lt;br /&gt;
Ah, one last thing. People like to actually ''find'' things when they look for them, so '''don't forget to update the table when you move something away from its current location'''. If you don't know where you are taking it, just put your name in the table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
!resolution&lt;br /&gt;
!B/W, color&lt;br /&gt;
!max. frame rate&lt;br /&gt;
!sensor size&lt;br /&gt;
!interface&lt;br /&gt;
!maker&lt;br /&gt;
!model&lt;br /&gt;
!lens mount&lt;br /&gt;
!how many?&lt;br /&gt;
!where?&lt;br /&gt;
!project&lt;br /&gt;
!link to full specifications and/or manuals&lt;br /&gt;
|-&lt;br /&gt;
|1628x1236&lt;br /&gt;
|B/W&lt;br /&gt;
|24fps&lt;br /&gt;
|1/1.8&amp;quot;&lt;br /&gt;
|CameraLink&lt;br /&gt;
|Hitachi&lt;br /&gt;
|KP-F200CL&lt;br /&gt;
|C-mount&lt;br /&gt;
|1&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|[[media:KP-F200-Op_Manual.pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|752x480&lt;br /&gt;
|color&lt;br /&gt;
|70fps&lt;br /&gt;
|1/3&amp;quot;&lt;br /&gt;
|GigE&lt;br /&gt;
|Prosilica&lt;br /&gt;
|GC750C&lt;br /&gt;
|C-mount&lt;br /&gt;
|3&lt;br /&gt;
|Lambrate (3/3)&lt;br /&gt;
|RAWSEEDS (3/3)&lt;br /&gt;
|http://www.prosilica.com/products/gc_series.html&lt;br /&gt;
|-&lt;br /&gt;
|659x493&lt;br /&gt;
|color&lt;br /&gt;
|90fps&lt;br /&gt;
|1/3&amp;quot;&lt;br /&gt;
|GigE&lt;br /&gt;
|Prosilica&lt;br /&gt;
|GC650C&lt;br /&gt;
|C-mount&lt;br /&gt;
|1&lt;br /&gt;
|Lambrate&lt;br /&gt;
|RAWSEEDS&lt;br /&gt;
|http://www.prosilica.com/products/gc_series.html&lt;br /&gt;
|-&lt;br /&gt;
|1024x768&lt;br /&gt;
|color&lt;br /&gt;
|30fps&lt;br /&gt;
|1/3&amp;quot;&lt;br /&gt;
|GigE&lt;br /&gt;
|Prosilica&lt;br /&gt;
|GC1020C&lt;br /&gt;
|C-mount&lt;br /&gt;
|2&lt;br /&gt;
|Lambrate (2/2)&lt;br /&gt;
|RAWSEEDS (2/2)&lt;br /&gt;
|http://www.prosilica.com/products/gc_series.html&lt;br /&gt;
|-&lt;br /&gt;
|CCIR (625 lines)&lt;br /&gt;
|B/W&lt;br /&gt;
|CCIR (50fps, interlaced)&lt;br /&gt;
|2/3&amp;quot;&lt;br /&gt;
|analogue&lt;br /&gt;
|Sony&lt;br /&gt;
|XC-ST70CE&lt;br /&gt;
|C-mount&lt;br /&gt;
|2&lt;br /&gt;
|DEI (2/2)&lt;br /&gt;
|&lt;br /&gt;
|[[media:XCST70E_manual.pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|659x494&lt;br /&gt;
|color&lt;br /&gt;
|30fps&lt;br /&gt;
|1/4&amp;quot;&lt;br /&gt;
|FireWire 400&lt;br /&gt;
|Unibrain&lt;br /&gt;
|Fire-i 400 industrial&lt;br /&gt;
|C-mount&lt;br /&gt;
|3&lt;br /&gt;
|Lambrate (3/3)&lt;br /&gt;
|RAWSEEDS (3/3)&lt;br /&gt;
|http://www.unibrain.com/Products/VisionImg/Fire_i_400_Industrial.htm&lt;br /&gt;
|-&lt;br /&gt;
|659x494&lt;br /&gt;
|color&lt;br /&gt;
|30fps&lt;br /&gt;
|1/4&amp;quot;&lt;br /&gt;
|FireWire 400&lt;br /&gt;
|Unibrain&lt;br /&gt;
|Fire-i board camera&lt;br /&gt;
|proprietary&lt;br /&gt;
|8&lt;br /&gt;
|Lambrate (3/8), Bovisa (2/8), [[User:PaoloCalloni]] (1/8), [[User:DavideMigliore]] (1/8), [[User:CristianoAlessandro]] (1/8)&lt;br /&gt;
|RAWSEEDS (2/8), MRT (?/8)&lt;br /&gt;
|http://www.unibrain.com/Products/VisionImg/Fire_i_BC.htm&lt;br /&gt;
|-&lt;br /&gt;
|640x480&lt;br /&gt;
|color&lt;br /&gt;
|30fps&lt;br /&gt;
|1/4&amp;quot;&lt;br /&gt;
|FireWire 400&lt;br /&gt;
|Unibrain&lt;br /&gt;
|Fire-i digital camera&lt;br /&gt;
|fixed optics (4.3mm, f2.0)&lt;br /&gt;
|4&lt;br /&gt;
|Univ. Mi-Bicocca (4/4)&lt;br /&gt;
|&lt;br /&gt;
|http://www.unibrain.com/Products/VisionImg/Fire_i_DC.htm&lt;br /&gt;
|-&lt;br /&gt;
|640x480 dual sensor, 9cm baseline&lt;br /&gt;
|color&lt;br /&gt;
|30fps&lt;br /&gt;
|1/3&amp;quot;&lt;br /&gt;
|FireWire 400&lt;br /&gt;
|Videre Design&lt;br /&gt;
|STOC stereo-on-a-chip stereo camera&lt;br /&gt;
|C-mount, fitted with two 3.5mm, f1.6, 1/2&amp;quot; lenses&lt;br /&gt;
|1&lt;br /&gt;
|Lambrate =&amp;gt; li lin office =&amp;gt; Domenicogsorrenti 13.01.09 =&amp;gt; giulio fontana 23.01.09&lt;br /&gt;
|&lt;br /&gt;
|http://www.videredesign.com/vision/stoc.htm&lt;br /&gt;
|-&lt;br /&gt;
|640x480&lt;br /&gt;
|color&lt;br /&gt;
|60fps&lt;br /&gt;
|1/3&amp;quot;&lt;br /&gt;
|FireWire 400&lt;br /&gt;
|Videre Design&lt;br /&gt;
|DCSG (associated with STOC)&lt;br /&gt;
|C-mount, fitted with one 3.5mm, f1.6, 1/2&amp;quot; lens&lt;br /&gt;
|1&lt;br /&gt;
|Lambrate&lt;br /&gt;
|&lt;br /&gt;
|http://www.videredesign.com/vision/dcsg.htm&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Frame grabbers===&lt;br /&gt;
As mentioned above, a '''frame grabber''' is an electronic board that connects to one or more cameras and converts their signals into a data stream that can be processed by a computer. Frame grabbers are usually designed as expansion boards to be fitted into the computer case. They are necessary for ''analogue cameras'' (as they include the analogue-to-digital converters) and for CameraLink digital cameras (in this case the frame grabber is essentially a high-speed dedicated digital interface). Other kinds of digital cameras don't need a frame grabber: this is one of the main advantages of digital cameras over analogue ones in machine vision applications, where the processing is almost always performed by computers.&lt;br /&gt;
In the AIRLab several models of frame grabber are available:&lt;br /&gt;
*a digital frame grabber from Euresys, model Expert 2, having two CameraLink inputs (http://www.euresys.com/Products/grablink/GrablinkSeries.asp). ''Notes: needs a PCI-X slot; one of the inputs is not working due to a fault.''&lt;br /&gt;
*two multichannel analogue frame grabbers from Matrox, model Meteor II/Multi-Channel, having three analogue inputs that can be combined into a single three-channel RGB analogue input (http://www.matrox.com/imaging/support/old_products/home.cfm). ''Note: one item is permanently mounted on the MO.RO.1 robot: see [[The MO.RO. family]] for details.''&lt;br /&gt;
*two single-channel analogue frame grabbers from Matrox, models Meteor and Meteor Pro (http://www.matrox.com/imaging/support/old_products/home.cfm).&lt;br /&gt;
All the frame grabbers (except the one on the MO.RO.1) are currently in AIRLab/DEI. If you move one of them, please '''write it down here'''... and do it NOW!&lt;br /&gt;
&lt;br /&gt;
==Lenses==&lt;br /&gt;
Industrial cameras usually have interchangeable lenses. This allows for the choice of the lens that is most suitable for the considered application. There are two main standards for industrial camera lenses: '''C-mount''' and '''CS-mount'''. Both are screw-type mounts. CS-mount is simply a modified C-mount where the distance between the back of the lens and the sensor element (CCD or CMOS) is shorter: therefore a C-mount lens can be mounted on a CS-mount camera if an ''adapter ring'' (i.e. a distancing cylinder with suitable threads) is placed between them. It is impossible, though, to use a CS-mount lens on a C-mount camera: if you try you will almost certainly break the sensor, scratch the lens, or both. Just because a lens fits a camera, it doesn't mean it can actually be mounted on it!&lt;br /&gt;
&lt;br /&gt;
At the AIRLab we also use lenses specifically designed for Unibrain's ''board cameras'': they are very simple, with no iris, and very small. Their mounting system is an M12x0.5 metric screw thread.&lt;br /&gt;
&lt;br /&gt;
Be aware that sensor dimension (i.e. its diagonal, measured in fractions of an inch) is ''not'' the same for all cameras. Therefore one of the key specifications for a lens is the maximum sensor dimension supported. If you use a given lens with too big a sensor, the edges of the image will be black as they lie outside the circle of the projected image. Also beware of the strange convention used for sensor diagonals, i.e. a fraction in the form A/B&amp;quot; where A and B are integer ''or non-integer'' numbers. For instance a 1/2&amp;quot; sensor is smaller than a 1/1.8&amp;quot; one (1/2 = 0.5, while 1/1.8 is about 0.56).&lt;br /&gt;
The variability of sensor dimensions has another side effect: the same lens has a different angle of view if you change the sensor size. Therefore the same lens can behave as a wide-angle with a large sensor and as a telephoto with a small sensor.&lt;br /&gt;
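As a rough illustration of this effect, the diagonal angle of view of an ordinary rectilinear lens can be estimated with the thin-lens relation AOV = 2 * arctan(d / (2 * f)), where d is the sensor diagonal and f the focal length. The sketch below is illustrative only: the millimetre diagonals assumed for the nominal 1/2-inch and 2/3-inch formats are typical values, not exact figures for any specific sensor in the tables on this page.

```python
import math

def angle_of_view_deg(focal_mm: float, sensor_dim_mm: float) -> float:
    """Angle of view of a rectilinear lens across a sensor dimension
    (thin-lens approximation): AOV = 2 * arctan(d / (2 * f))."""
    return math.degrees(2 * math.atan(sensor_dim_mm / (2 * focal_mm)))

# Assumed typical diagonals for the nominal formats (not exact for any
# specific sensor): roughly 8 mm for 1/2-inch, roughly 11 mm for 2/3-inch.
print(angle_of_view_deg(6.0, 8.0))   # 6 mm lens on a 1/2-inch sensor: ~67 degrees
print(angle_of_view_deg(6.0, 11.0))  # same lens on a 2/3-inch sensor: ~85 degrees
```

Whatever exact diagonals you plug in, the ranking is the same: for a fixed focal length, a larger sensor always yields a wider angle of view, which is the behaviour described above.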
&lt;br /&gt;
A useful guide to lenses (in Italian or English) can be found at http://www.rapitron.it/guidaob.htm.&lt;br /&gt;
&lt;br /&gt;
The following is a list of the lenses currently available in the AIRLab. For each of them the main specifications (and a link to the maker's or vendor's page for full specifications) are given. A '?' means an unknown parameter: if you know its value or find it out experimentally when using the lens (e.g. the maximum sensor size), please ''update the table'' before the information is lost again! Lenses having 'M12x0.5' in the 'mount type' column are only usable with Unibrain's Fire-i board cameras. A 'YES' in the 'Mpixel' column indicates a so-called ''Megapixel lens'', i.e. a high quality, low-distortion lens designed for high-resolution industrial cameras (typically having large sensors); please note that some of these are specifically designed for B/W (i.e. black and white) cameras. The 'how many?' field tells whether multiple, identical items are available. Finally, the 'where?' field tells you in which of the AIRLab sites (listed in [[The Labs]]) you can find an item, and the 'project' field is used to specify which project (if any) is using it. &lt;br /&gt;
&lt;br /&gt;
Ah, one last thing. People like to actually ''find'' things when they look for them, so '''don't forget to update the table when you move something away from its current location'''. If you don't know where you are taking it, just put your name in the table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
!focal length&lt;br /&gt;
!max. aperture&lt;br /&gt;
!max. sensor size&lt;br /&gt;
!mount type&lt;br /&gt;
!maker&lt;br /&gt;
!model&lt;br /&gt;
!Mpixel&lt;br /&gt;
!how many?&lt;br /&gt;
!where?&lt;br /&gt;
!project&lt;br /&gt;
!link to full specifications&lt;br /&gt;
|-&lt;br /&gt;
|3.5mm&lt;br /&gt;
|f1.4&lt;br /&gt;
|?&lt;br /&gt;
|C-mount&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|1&lt;br /&gt;
|Lambrate&lt;br /&gt;
|LURCH&lt;br /&gt;
|?&lt;br /&gt;
|-&lt;br /&gt;
|4.0mm&lt;br /&gt;
|f2.0&lt;br /&gt;
|1/2&amp;quot;&lt;br /&gt;
|C-mount&lt;br /&gt;
|Microtron&lt;br /&gt;
|FV0420&lt;br /&gt;
|YES (B/W only)&lt;br /&gt;
|2&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|http://www.rapitron.it/obmegpxman1.htm&lt;br /&gt;
|-&lt;br /&gt;
|4.5mm&lt;br /&gt;
|f1.4&lt;br /&gt;
|1/2&amp;quot;&lt;br /&gt;
|C-mount&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|1&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|?&lt;br /&gt;
|-&lt;br /&gt;
|4.8mm&lt;br /&gt;
|f1.8&lt;br /&gt;
|2/3&amp;quot;&lt;br /&gt;
|C-mount&lt;br /&gt;
|Computar&lt;br /&gt;
|M0518&lt;br /&gt;
|NO&lt;br /&gt;
|1&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|http://www.computar.com/cctvprod/computar/mono/048.html&lt;br /&gt;
|-&lt;br /&gt;
|6mm&lt;br /&gt;
|f1.4&lt;br /&gt;
|?&lt;br /&gt;
|C-mount&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|1&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|?&lt;br /&gt;
|-&lt;br /&gt;
|6mm&lt;br /&gt;
|f1.4&lt;br /&gt;
|1/2&amp;quot;&lt;br /&gt;
|C-mount&lt;br /&gt;
|Goyo&lt;br /&gt;
|GMHR26014MCN&lt;br /&gt;
|YES&lt;br /&gt;
|4&lt;br /&gt;
|DEI&lt;br /&gt;
|RAWSEEDS (4/4)&lt;br /&gt;
|http://www.goyooptical.com/products/industrial/hrmegapixel.html&lt;br /&gt;
|-&lt;br /&gt;
|8mm&lt;br /&gt;
|f1.4&lt;br /&gt;
|?&lt;br /&gt;
|C-mount&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|1&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|?&lt;br /&gt;
|-&lt;br /&gt;
|8mm&lt;br /&gt;
|f1.4&lt;br /&gt;
|2/3&amp;quot;&lt;br /&gt;
|C-mount&lt;br /&gt;
|Goyo&lt;br /&gt;
|GMHR38014MCN&lt;br /&gt;
|YES&lt;br /&gt;
|2&lt;br /&gt;
|DEI&lt;br /&gt;
|RAWSEEDS (2/2)&lt;br /&gt;
|http://www.goyooptical.com/products/industrial/hrmegapixel.html&lt;br /&gt;
|-&lt;br /&gt;
|8.5mm&lt;br /&gt;
|f1.3&lt;br /&gt;
|2/3&amp;quot;&lt;br /&gt;
|C-mount&lt;br /&gt;
|Computar&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|2&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|(old model)&lt;br /&gt;
|-&lt;br /&gt;
|12mm&lt;br /&gt;
|f1.8&lt;br /&gt;
|2/3&amp;quot;&lt;br /&gt;
|C-mount&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|2&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|12mm&lt;br /&gt;
|f1.4&lt;br /&gt;
|2/3&amp;quot;&lt;br /&gt;
|C-mount&lt;br /&gt;
|Goyo&lt;br /&gt;
|GMHR31214MCN&lt;br /&gt;
|YES&lt;br /&gt;
|1&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|http://www.goyooptical.com/products/industrial/hrmegapixel.html&lt;br /&gt;
|-&lt;br /&gt;
|15mm&lt;br /&gt;
|f2.0&lt;br /&gt;
|2/3&amp;quot;&lt;br /&gt;
|C-mount&lt;br /&gt;
|Microtron&lt;br /&gt;
|FV1520&lt;br /&gt;
|YES&lt;br /&gt;
|1&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|http://www.rapitron.it/obmegpxman1.htm&lt;br /&gt;
|-&lt;br /&gt;
|6-15mm&lt;br /&gt;
|f1.4&lt;br /&gt;
|?&lt;br /&gt;
|C-mount&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|1&lt;br /&gt;
|Lambrate&lt;br /&gt;
|&lt;br /&gt;
|?&lt;br /&gt;
|-&lt;br /&gt;
|12.5-75mm&lt;br /&gt;
|f1.8&lt;br /&gt;
|?&lt;br /&gt;
|C-mount&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|1&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|?&lt;br /&gt;
|-&lt;br /&gt;
|2.1mm&lt;br /&gt;
|f2.0&lt;br /&gt;
|1/4&amp;quot;&lt;br /&gt;
|M12x0.5&lt;br /&gt;
|Unibrain&lt;br /&gt;
|2042&lt;br /&gt;
|NO&lt;br /&gt;
|6&lt;br /&gt;
|Bovisa (1/6), Lambrate (4/6), Davide Migliore (1/6)&lt;br /&gt;
|&lt;br /&gt;
|http://www.unibrain.com/Products/VisionImg/Fire_i_BC.htm&lt;br /&gt;
|-&lt;br /&gt;
|4.3mm, no IR filter&lt;br /&gt;
|f2.0&lt;br /&gt;
|1/4&amp;quot;&lt;br /&gt;
|M12x0.5&lt;br /&gt;
|Unibrain&lt;br /&gt;
|2046&lt;br /&gt;
|NO&lt;br /&gt;
|1&lt;br /&gt;
|Lambrate (1/1)&lt;br /&gt;
|&lt;br /&gt;
|http://www.unibrain.com/Products/VisionImg/Fire_i_BC.htm&lt;br /&gt;
|-&lt;br /&gt;
|4.3mm&lt;br /&gt;
|f2.0&lt;br /&gt;
|1/4&amp;quot;&lt;br /&gt;
|M12x0.5&lt;br /&gt;
|Unibrain&lt;br /&gt;
|2043&lt;br /&gt;
|NO&lt;br /&gt;
|3&lt;br /&gt;
|Bovisa (1/3), Lambrate (2/3)&lt;br /&gt;
|&lt;br /&gt;
|http://www.unibrain.com/Products/VisionImg/Fire_i_BC.htm&lt;br /&gt;
|-&lt;br /&gt;
|8mm&lt;br /&gt;
|f2.0&lt;br /&gt;
|1/4&amp;quot;&lt;br /&gt;
|M12x0.5&lt;br /&gt;
|Unibrain&lt;br /&gt;
|2044&lt;br /&gt;
|NO&lt;br /&gt;
|1&lt;br /&gt;
|Lambrate (1/1)&lt;br /&gt;
|&lt;br /&gt;
|http://www.unibrain.com/Products/VisionImg/Fire_i_BC.htm&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Mirrors==&lt;br /&gt;
Much work has been done and is being done at the AIRLab on the topic of '''omnidirectional (machine) vision''' (sometimes referred to as ''omnivision''). Omnidirectional vision systems use special hardware to overcome the limitations of conventional vision systems in terms of field of view. The approach to this problem that we generally adopt is the use of conventional cameras in association with convex '''mirrors''', i.e. capturing with a camera the image reflected by a suitably-shaped mirror. The possibility of designing mirrors with specific geometric properties gives us a very useful means of controlling the geometric behaviour of the whole camera+mirror system.&lt;br /&gt;
&lt;br /&gt;
TODO for someone who knows better ;-) : mirror list&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Bureaucracy&amp;diff=4721</id>
		<title>Bureaucracy</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Bureaucracy&amp;diff=4721"/>
				<updated>2008-11-06T17:38:08Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''This page contains the things you need to know and do to be allowed to work in the AIRLab. It is especially targeted to students.''&lt;br /&gt;
&lt;br /&gt;
== HOW TO become a registered user of AIRWiki ==&lt;br /&gt;
To become one of the [[Registered users]], you must request a user account for the AIRWiki. To do that, you can ask your Advisor or co-Advisor.&lt;br /&gt;
If you need more information send an email to either migliore (at) elet (dot) polimi (dot) it or eynard (at) elet (dot) polimi (dot) it.&lt;br /&gt;
&amp;lt;!-- an email containing your name, the name of your Advisor and the name of your project to either migliore (at) elet (dot) polimi (dot) it or eynard (at) elet (dot) polimi (dot) it.--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
If you are a student beginning her/his work within the AIRLab, please note that you ''must'' be a registered user before you can even enter the Lab. You must also be aware that anything you put into the private layer of AIRWiki will be '''published on the internet and visible to the whole world'''. Always keep in mind the [[Registered users#Warnings|warnings]]!&lt;br /&gt;
&lt;br /&gt;
== HOW TO get the authorization to access the Lab ==&lt;br /&gt;
''Note: you cannot access the AIRLab without being authorized, and you can't let anyone who is not authorized into the AIRLab.''&lt;br /&gt;
&lt;br /&gt;
In order to obtain access to any AIRLab site (see [[The Labs]]), you need to follow these steps. From the description it may seem like a lot of work, but it isn't: just read the following instructions ''before'' starting (well, this should be a general rule...).&lt;br /&gt;
&lt;br /&gt;
* First of all, it is ''mandatory'' that you carefully '''read the [[Safety norms]] and the [[AIRLab rules]]''' for AIRLab Users. These documents are written in Italian: if you aren't able to read them, ask the Advisor responsible for your Project to translate the parts concerning your work for you.&lt;br /&gt;
&lt;br /&gt;
* You must then '''get a registered user account''' (if you haven't got one yet). See [[Bureaucracy#HOW TO become a registered user of AIRWiki|here]] for details.&lt;br /&gt;
&lt;br /&gt;
* As soon as you are one of AIRWiki's [[Registered users]], you must '''fill in your user page with your personal data'''. To do that, log in to the AIRWiki (just use the link at the top right of the [[Main Page]]), then click on the &amp;quot;people&amp;quot; link in the &amp;quot;navigation&amp;quot; tab on the left to go to the [[Special:Listusers]] page. There you will find a list of user pages: look for your own (yes, there it is!). Click on the link and fill in the page. The data you are required to enter are: first name, surname, &amp;quot;numero di matricola&amp;quot;, name of your Advisor, name of other Teacher(s) you work with, link to the AIRWiki pages of the project(s) you are working on (see the following for this), and your photo. You can copy the layout of other user pages (some of them include a fine table for all the data) if you prefer: just go to the 'edit' tab of those pages and copy all that you need into the 'edit' tab of your page (I said COPY, not CUT: be careful not to alter other people's pages).&lt;br /&gt;
&lt;br /&gt;
* Subsequently, you must '''set up an AIRWiki page for the Project''' you are about to start working on (if someone else didn't already do that: check for that on the [[Projects]] page). Don't worry, it's very easy: just follow the instructions you find [[Projects#HOWTO add a new project to the AIRWiki|here]]. When you have done that, remember to go back to your user page and put there a link to the page of your Project.&lt;br /&gt;
&lt;br /&gt;
* Then, you have to '''fill in the [http://airlab.elet.polimi.it/index.php/airlab/content/download/615/5306/file/Modulo%20registrazione%20accessi%202007%20con%20allegato%20071024.pdf Access Registration Form]''', specifying which AIRLab sites (see [[The Labs]]) you need to enter. Sign it and '''have your Advisor sign it''' too. Note that by signing the form you declare that you have read:&lt;br /&gt;
** the [[Safety norms]] of the AIRLab;&lt;br /&gt;
** the document &amp;quot;Procedure generali di emergenza&amp;quot; of the Department, which is part of the form itself.&lt;br /&gt;
&lt;br /&gt;
* Once the form is signed by you and your Advisor, it has to be verified and '''signed by professor Bonarini''', head of the AIRLab ([[User:AndreaBonarini]]). This step has been made mandatory to check that people actually fill in their AIRWiki pages, as students tended to &amp;quot;forget&amp;quot; that :-( . Send him an e-mail with &amp;quot;access to AIRLab&amp;quot; as subject, to ask him when you can go to his office for the signature (surprise appearances are not very welcome).&lt;br /&gt;
&lt;br /&gt;
* Now the Access Registration Form has to be '''signed by the head of the Informatics Section''' of the DEI. Just leave the form to the Secretaries on the first floor of the Department of Electronics and Information, and come back in a day or two (they will tell you exactly when).&lt;br /&gt;
&lt;br /&gt;
* Finally, '''leave the Access Registration Form and your student's ID card to Mrs. Ivanov''' (DEI, 3rd floor). In this way you will be registered as an authorized AIRLab user, and your ID card will get the ability to open the door of the AIRLab sites you requested. This step usually requires about a week.&lt;br /&gt;
&lt;br /&gt;
'''* * * * * You are now allowed to access the AIRLab!! * * * * *'''&lt;br /&gt;
&lt;br /&gt;
== HOW TO connect your laptop to the Internet ==&lt;br /&gt;
&lt;br /&gt;
If you own a laptop computer, you can request an authorization to connect it to the (wired) LAN of the Department of Electronics and Information (DEI). This allows connecting to all the online resources of the DEI and to the internet. You will need to fill in this [http://airlab.elet.polimi.it/index.php/airlab/content/download/1416/10885/file/presarete.pdf form], have it signed by the Teacher responsible for your Project, and return it to the network administrators' office (DEI, 1st floor, Sardi or Busnelli). Note that you will have to specify the MAC address of your network interface card: instructions about how to get it are given on the form.&lt;br /&gt;
&lt;br /&gt;
NOTE: if you often work in AIRLab/Lambrate, you can also get an account to use the Linux PCs available there; this will give you a home directory (accessible from any of those PCs) and internet access. This is completely separate from access to DEI's LAN: you can request such an account by writing an e-mail to prof. Matteucci &amp;lt;matteucci (at) elet (dot) polimi (dot) it&amp;gt;.&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Master_Level_Course_Projects&amp;diff=4498</id>
		<title>Master Level Course Projects</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Master_Level_Course_Projects&amp;diff=4498"/>
				<updated>2008-10-17T10:33:11Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: /* Computer Vision and Image Analysis */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you can find a list of project proposals for the courses of &amp;quot;Laboratorio di Intelligenza Artificiale e Robotica&amp;quot; (5 CFU for each student) and &amp;quot;Soft Computing&amp;quot; (1 CFU for each student)&lt;br /&gt;
&lt;br /&gt;
==== Affective Computing ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective VideoGames&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive video game (car game, shoot 'em up, strategy game, ...) able to adapt its behaviour in order to maximize your enjoyment. The game will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system will be able to adjust some parameters (e.g. difficulty of car game circuits, opponents' strength, ...) in order to keep your engagement constant: &amp;quot;In your flow zone!&amp;quot;. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the game (it is possible to start from an available open source game)&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the playing experience.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the game reacts to the user's emotional state by changing its behaviour.  &lt;br /&gt;
&lt;br /&gt;
These projects allow you to experiment with biological-data acquisition tools and videogame design. &lt;br /&gt;
&lt;br /&gt;
Each project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective recognition in multimedia contexts&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive multimedia application (advertisement, e-learning, recommendation system) able to capture your emotional state (interest, excitement, anger, joy) while you watch images, listen to sounds, etc. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to give feedback on the quality of multimedia content (e.g. goodness of the advertisement, enjoyment of the movie, ...). &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the multimedia application.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the multimedia experience.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the multimedia application will provide contents according to your enjoyment.  &lt;br /&gt;
&lt;br /&gt;
These projects allow you to experiment with biological-data acquisition tools and multimedia application design. &lt;br /&gt;
&lt;br /&gt;
The project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=MultimediaAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective robotics&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop a rehabilitation robotic game able to capture your emotional state (interest, excitement, anger, joy, stress) while you interact with the robot. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to adapt the therapy (executed by the game) according to the patient's needs. We believe the quality of the therapy is related to the subject's emotional state. The long term goal is to keep the user in a specific emotional state in order to maximize the therapy's efficacy. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the robotic game on the available robot.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the interaction with the robot.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the therapy will be adapted to the patient's needs.  &lt;br /&gt;
&lt;br /&gt;
These projects allow you to experiment with biological-data acquisition tools, robots and videogame design. &lt;br /&gt;
&lt;br /&gt;
The project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=SimoAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Driving companions&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an application that is able to capture your emotional state (stress, attention level, ...) while driving standard cars. The application will measure the driver's stress level by analyzing his biological signals, which mirror his physiological state, and could be used to give feedback to the driver in dangerous situations.&lt;br /&gt;
Project phases: &lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors while driving in different conditions (city, highway, country, ...).&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the car will give audio/visual feedback to the user, letting him know his physiological state.&lt;br /&gt;
&lt;br /&gt;
These projects allow you to experiment with biological-data acquisition tools, robots and videogame design. &lt;br /&gt;
&lt;br /&gt;
The project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=CarAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Emotion from interaction&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to detect emotional states, such as stress or boredom, from the interaction with the computer via mouse and keyboard ([http://airwiki.elet.polimi.it/mediawiki/index.php/Emotion_from_Interaction Emotion from Interaction]). A library getting data from these devices has already been developed. Data have to be acquired in different situations and analyzed by neural networks or other classification tools already implemented.&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Agents, Multiagent Systems, Agencies ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== BioSignal Analysis ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Human-computer interaction via voice recognition system&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=We want to develop a system to allow voice interaction between the user and the wheelchair.&lt;br /&gt;
This project consists in developing one of the solutions proposed in the literature and extending the LURCH software to include this kind of interface. &lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
* Sphinx project [http://cmusphinx.org/]&lt;br /&gt;
&lt;br /&gt;
|start= Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=2.5-10&lt;br /&gt;
|image=LURCH_wheelchair.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
===== Brain-Computer Interface =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Development of an existing genetic algorithm for ERP-based BCIs&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=Different [http://en.wikipedia.org/wiki/Event-related_potential event-related potentials] (ERPs) are used in [[Brain-Computer Interface|BCIs]], e.g., [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300] and error potentials.&lt;br /&gt;
A [http://en.wikipedia.org/wiki/Genetic_algorithm genetic algorithm] (GA) for ERP feature extraction has been developed at the Airlab.  The GA has been proven to work, but there are different ways that can be explored to further develop this algorithm and expand its application field.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, Matlab&lt;br /&gt;
:Good programming skill required&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
:B. Dal Seno, M. Matteucci, L. Mainardi. ''A Genetic Algorithm for Automatic Feature Extraction in P300 Detection'' [http://ieeexplore.ieee.org/search/srchabstract.jsp?arnumber=4634243&amp;amp;isnumber=4633757&amp;amp;punumber=4625775]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=Ga-scheme.png}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Driving an autonomous wheelchair with a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=This project pulls together different Airlab projects with the aim to drive an autonomous wheelchair ([[LURCH - The autonomous wheelchair|LURCH]]) with a [[Brain-Computer Interface|BCI]], through the development of key software modules.  Depending on the effort the student is willing to put into it, the project can grow to a full experimental thesis.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, C, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
:Linux&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: R. Blatt et al. ''Brain Control of a Smart Wheelchair'' [http://www.booksonline.iospress.com/Content/View.aspx?piid=9401]&lt;br /&gt;
|start=November 2008&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=LURCH_wheelchair.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Online automatic tuning of the number of repetitions in a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=In a [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300]-based [[Brain-Computer_Interface|BCI]], (visual) stimuli are presented to the user, and the intention of the user is recognized when a P300 potential is recognized in response of the desired stimulus.  In order to improve accuracy, many stimulation rounds are usually performed before making a decision.  The exact number of repetitions depends on the user and the goodness of the classifier, but it is usually fixed a-priori.  The aim of this project is to adapt the number of repetitions to changing conditions, so as to achieve the maximum accuracy with the minimum time.&lt;br /&gt;
Depending on the effort the student is willing to put into it, the project can grow to a full experimental thesis.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: E. Donchin, K.M. Spencer, R. Wijesinghe. ''The Mental Prosthesis: Assessing the Speed of a P300-Based Brain-Computer Interface'' [http://www.cs.cmu.edu/~tanja/BCI/P300Speed_2000.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=B_p300_speller.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reproduction of an algorithm for the recognition of error potentials&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=Error potentials (ErrPs) are [http://en.wikipedia.org/wiki/Event-related_potential event-related potentials] present in the EEG (electroencephalogram) when a subject makes a mistake or when the machine a subject is interacting with works in an unexpected way.  They could be used in the [[Brain-Computer Interface|BCI]] field to improve the performance of a BCI by automatically detecting classification errors.&lt;br /&gt;
The project aims at reproducing algorithms for ErrP detection from the literature.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
:P.W. Ferrez, J. Millán. ''You Are Wrong! Automatic Detection of Interaction Errors from Brain Waves'' [ftp://ftp.idiap.ch/pub/reports/2005/ferrez_2005_ijcai.pdf]&lt;br /&gt;
:G. Schalk et al. ''EEG-based communication: presence of an error potential'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;issn=13882457&amp;amp;volume=111&amp;amp;issue=12&amp;amp;firstpage=2138&amp;amp;form=html]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-15&lt;br /&gt;
|image=Bci_arch.png}}&lt;br /&gt;
&lt;br /&gt;
==== Computer Vision and Image Analysis ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Environment Monitoring&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The goal of this project is to develop a video surveillance system to track vehicles or people in 3D. &lt;br /&gt;
The idea is to use one or more calibrated cameras to estimate the positions and trajectories of the moving objects in the scene. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library&lt;br /&gt;
* Linux o.s.&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
* Probabilistic robotics/IMAD&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by extending the algorithm to generic outdoor environments.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=10-15&lt;br /&gt;
|image=Danch4.png &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Visual Merchandising&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The goal of this project is to develop algorithms to count the number of products on supermarket shelves.&lt;br /&gt;
The idea is to use a calibrated camera to recognize the shelves, estimate the scale, and improve the image quality. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
|start= As soon as possible&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=VisualM.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Analysis of patch recognition algorithms&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=Extracting distinctive features from images is very important in computer vision applications.&lt;br /&gt;
Features can be used in algorithms for tasks like matching different views of an object or scene (e.g. for stereo vision) and object recognition.&lt;br /&gt;
The aim of this work is to integrate existing solutions proposed in the literature into an existing framework.&lt;br /&gt;
&lt;br /&gt;
Skills&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
*Oxford website [http://www.robots.ox.ac.uk/~vgg/research/affine/index.html]&lt;br /&gt;
*Hess website [http://web.engr.oregonstate.edu/~hess/index.html]&lt;br /&gt;
*Feature FAST [http://mi.eng.cam.ac.uk/~er258/work/fast.html]&lt;br /&gt;
&lt;br /&gt;
|start= Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=Object.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Catadioptric MonoSLAM &lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The goal of this work is to investigate SLAM solutions based on a catadioptric camera, integrating solutions presented in the literature into an existing framework.&lt;br /&gt;
Improvements could be the basis for a thesis.&lt;br /&gt;
&lt;br /&gt;
Skills&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
*Visual SLAM by Single Catadioptric Stereo [http://cv2.kaist.ac.kr/VisualSLAMBySingleCameraCatadioptricStereo.pdf]&lt;br /&gt;
*Catadioptric reconstruction [http://citeseer.ist.psu.edu/cache/papers/cs/23657/http:zSzzSzwww.cis.upenn.eduzSz~cgeyerzSzsfm_tr.pdf/geyer01structure.pdf]&lt;br /&gt;
&lt;br /&gt;
|start= Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=Photo.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Trinocular Vision System (SUGR)&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=A Trinocular Vision System is a device composed of three cameras that allows measuring 3D data (in this case segments) directly from images.&lt;br /&gt;
The aim of this project (or short thesis) is to implement a trinocular algorithm based on SUGR, a library for Uncertain Projective Geometry.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Skills&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|start= As soon as possible&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=Trinoex.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=GIFT and features extraction and description&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The idea is to improve and optimize the solution proposed by Campari et al., who estimate invariant descriptors using geodesic feature descriptors based on color information.&lt;br /&gt;
&lt;br /&gt;
Skills&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|start= Anytime&lt;br /&gt;
|number=1-3&lt;br /&gt;
|cfu=10-20&lt;br /&gt;
|image=Palla_GIFT.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Multimedia Indexing Framework&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The goal of this project is to develop a framework for multimedia indexing.&lt;br /&gt;
The idea is to create an image database indexer that allows queries using images or strings.&lt;br /&gt;
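As an illustration of how an image query could be answered (a toy sketch; the histogram quantization and distance below are our own assumptions, not part of the project), images in such an indexer are often compared through global descriptors such as color histograms:&lt;br /&gt;

```python
def color_histogram(pixels, bins=4):
    """Quantized RGB histogram of an image given as (r, g, b) tuples,
    each channel in 0..255 (a toy stand-in for real image loading)."""
    hist = [0] * (bins ** 3)
    step = 256 // bins
    for r, g, b in pixels:
        idx = (r // step) * bins * bins + (g // step) * bins + b // step
        hist[idx] += 1
    total = float(len(pixels))
    return [h / total for h in hist]

def histogram_distance(h1, h2):
    """L1 distance between two normalized histograms (0 = identical)."""
    return sum(abs(a - b) for a, b in zip(h1, h2))
```

A real system would rank database images by this distance to the query image's histogram; the bin count and metric here are illustrative choices.&lt;br /&gt;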
&lt;br /&gt;
Skills&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
*CBIR system definition [http://en.wikipedia.org/wiki/CBIR]&lt;br /&gt;
*Image database [http://www.cs.washington.edu/research/imagedatabase/]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|start= Anytime&lt;br /&gt;
|number=1-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=CIR.gif&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== E-Science ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Machine Learning ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reinforcement Learning Competition&lt;br /&gt;
|tutor=Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=This project has the goal of participating in (and possibly winning ;)) the 2009 Reinforcement Learning competition. To get an idea of what participating in such a competition means, you can have a look at the website of the [http://rl-competition.org/content/view/51/79/ 2008 RL competition].&lt;br /&gt;
The problems that will be proposed are still unknown. As soon as the domains are published, the work will start by analyzing their main characteristics; then we will identify which RL algorithms are most suited to solving such problems. After an implementation phase, the project will require a long experimental period to tune the parameters of the learning algorithms in order to improve performance as much as possible.&lt;br /&gt;
|start=January, 2009&lt;br /&gt;
|number=2-4&lt;br /&gt;
|cfu=10-20&lt;br /&gt;
|image=keepaway.gif}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Learning API for TORCS&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, where competitors were asked to develop a controller using their preferred machine learning techniques. The goal of this project is to extend the existing C++ API (available [http://cig.dei.polimi.it/ here]) to simplify the development of controllers using a learning framework.&lt;br /&gt;
Such an extension can be partially developed by porting an existing Java API for TORCS that already provides many functionalities for machine learning approaches.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=TORCS.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= EyeBot&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it), Alessandro Giusti (giusti-AT-elet-DOT-polimi-DOT-it), and Pierluigi Taddei (taddei-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, where competitors were asked to develop a controller using their preferred machine learning techniques. So far, the controllers developed for TORCS used as input only information extracted directly from the state of the game. The goal of this project is to extend the existing controller API (see [http://cig.dei.polimi.it/ here]) to use visual information (e.g. screenshots of the game) as input to the controllers. A successful project will include both the development of the API and some basic image preprocessing to extract information from the images.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 20&lt;br /&gt;
|image=TORCS2.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= SmarTrack&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The generation of customized game content for each player is an attractive direction for improving the game experience in next-generation computer games. In this scenario, Machine Learning could play an important role in automatically providing such customized game content.&lt;br /&gt;
The goal of this project is to apply machine learning techniques to the generation of customized tracks in&lt;br /&gt;
[http://torcs.sourceforge.net/ TORCS], a state-of-the-art open source racing simulator. The project includes different activities: the automatic generation of tracks, the selection of relevant features to characterize a track, and the analysis of an interest measure.  &lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 20&lt;br /&gt;
|image=TORCS3.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= TORCS competition&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, where competitors were asked to develop a controller using their preferred machine learning techniques.&lt;br /&gt;
The goal of this project is to apply any machine learning technique to develop a successful controller following the competition rules (available [http://cig.dei.polimi.it/?page_id=67 here]).&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 &lt;br /&gt;
|cfu=5&lt;br /&gt;
|image=TORCS.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Ontologies and Semantic Web ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== Philosophy of Artificial Intelligence ====--&amp;gt;&lt;br /&gt;
==== Robotics ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robot games&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive game with robots using commercial devices such as the Wii Remote (see the [http://airwiki.elet.polimi.it/mediawiki/index.php/Robogames Robogames page]).  &lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Design and implementation of the game on one of the available robots&lt;br /&gt;
* Design of the game and a new suitable robot&lt;br /&gt;
* Implementation/setting of a suitable robot&lt;br /&gt;
* Evaluation of the game with users (in collaboration with [http://www.elet.polimi.it/people/garzotto Franca Garzotto])&lt;br /&gt;
&lt;br /&gt;
These projects make it possible to experiment with real mobile robots and real interaction devices.&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by producing a new game and robot.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-12.5&lt;br /&gt;
|image=Robowii_robot.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robocup: soccer robots&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it), Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to finalize the team of robots that will participate in the RoboCup world championship in Graz next summer (see the [http://www.robocup.org RoboCup page] and the [http://robocup.elet.polimi.it MRT Team page])  &lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Implementation of mechanical and electronic parts of the robots for ball management and kicking&lt;br /&gt;
* Design of robot behaviors (fuzzy systems)&lt;br /&gt;
* Coordination of robots&lt;br /&gt;
* New sensors&lt;br /&gt;
&lt;br /&gt;
These projects make it possible to experiment with real mobile robots. Participating in the championships is a unique experience (2000 people, with 800 robots playing all sorts of games...)&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by facing different problems in depth.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-12.5&lt;br /&gt;
|image=RIeRO.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Calibration of IMU-camera system&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=This work addresses the problem of calibrating a system composed of an XSens &lt;br /&gt;
Inertial Measurement Unit and a Fire-i camera. The project will focus on &lt;br /&gt;
estimating both the unknown rotation between the two devices and the &lt;br /&gt;
extrinsic/intrinsic parameters of the camera. This algorithm makes it possible to use the &lt;br /&gt;
system for SLAM or robotics applications, like a wearable device for autonomous &lt;br /&gt;
navigation or augmented reality. &lt;br /&gt;
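As a minimal illustration of the rotation part of the problem (a sketch under the assumption that a common direction, e.g. gravity, is observed by both devices; a single pair leaves the rotation about that direction undetermined, so real calibrations combine many poses):&lt;br /&gt;

```python
import math

def rotation_aligning(a, b):
    """Return a 3x3 rotation matrix mapping direction a onto direction b
    (Rodrigues formula). Here a could be gravity measured by the IMU
    accelerometer and b the vertical direction seen in the camera frame;
    both are illustrative inputs, not the project's actual method."""
    def unit(v):
        n = math.sqrt(sum(x * x for x in v))
        return [x / n for x in v]
    a, b = unit(a), unit(b)
    # v = a x b (rotation axis, unnormalized), c = a . b (cosine of angle)
    v = [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]
    c = sum(x * y for x, y in zip(a, b))
    k = 1.0 / (1.0 + c)  # assumes a and b are not opposite
    # R = I + [v]_x + k * [v]_x^2
    vx = [[0.0, -v[2], v[1]], [v[2], 0.0, -v[0]], [-v[1], v[0], 0.0]]
    return [[(1.0 if i == j else 0.0) + vx[i][j] +
             k * sum(vx[i][m] * vx[m][j] for m in range(3))
             for j in range(3)] for i in range(3)]
```

The InerVis toolbox linked below solves the full problem, including the intrinsics; this snippet only shows the single-pair geometric core.&lt;br /&gt;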
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab/C++&lt;br /&gt;
&lt;br /&gt;
;Links&lt;br /&gt;
:Matlab Toolbox for mutual calibration [http://www.deec.uc.pt/~jlobo/InerVis_WebIndex/InerVis_Toolbox.html]&lt;br /&gt;
:List of publications [http://www.deec.uc.pt/~jlobo/InerVis_WebIndex/InerVis_Pubs.php]&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=Imu_cam_big_sphere.gif}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=MonoSLAM system implementation&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The aim of this proposal is to investigate the different monocular-camera SLAM solutions proposed in the literature.&lt;br /&gt;
After an in-depth bibliographic review, the work will focus on implementing one of these algorithms within an existing framework and, only for the thesis option, on investigating possible improvements. &lt;br /&gt;
&lt;br /&gt;
The algorithms of interest are based on [http://www-personal.acfr.usyd.edu.au/tbailey/software/slam_simulations.htm]:&lt;br /&gt;
*Extended Kalman Filter [http://www.doc.ic.ac.uk/~ajd/publications.html]&lt;br /&gt;
*Unscented Kalman Filter [http://www.cs.unc.edu/~welch/kalman/media/pdf/Julier1997_SPIE_KF.pdf]&lt;br /&gt;
*FastSLAM [http://robots.stanford.edu/papers.html]&lt;br /&gt;
*GraphSLAM [http://mi.eng.cam.ac.uk/~ee231/]&lt;br /&gt;
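As a toy illustration of the predict/update machinery behind the first option (a one-dimensional sketch with invented noise values, not MonoSLAM itself, which carries a full camera pose and feature map in its state):&lt;br /&gt;

```python
def kf_step(x, p, u, z, q=0.05, r=0.2):
    """One predict/update cycle of a 1-D Kalman filter.

    x, p : state estimate and its variance
    u    : control input (predicted motion), z : measurement
    q, r : process / measurement noise variances (illustrative values).
    """
    # Predict: propagate the state and inflate its uncertainty.
    x_pred = x + u
    p_pred = p + q
    # Update: blend the prediction with the measurement.
    k = p_pred / (p_pred + r)      # Kalman gain
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new
```

The EKF variant used in MonoSLAM applies the same two steps with linearized motion and projection models in place of the scalar sums above.&lt;br /&gt;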
&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab/C++&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=KC_jc_third.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Soft Computing ====--&amp;gt;&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Master_Level_Course_Projects&amp;diff=4497</id>
		<title>Master Level Course Projects</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Master_Level_Course_Projects&amp;diff=4497"/>
				<updated>2008-10-17T10:32:36Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: /* Computer Vision and Image Analysis */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you can find a list of project proposals for the courses of &amp;quot;Laboratorio di Intelligenza Artificiale e Robotica&amp;quot; (5 CFU for each student) and &amp;quot;Soft Computing&amp;quot; (1 CFU for each student)&lt;br /&gt;
&lt;br /&gt;
==== Affective Computing ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective VideoGames&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive video game (car game, shoot-'em-up, strategy game ...) able to adapt its behaviour in order to maximize your enjoyment. The game will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system will be able to adjust some parameters (e.g. difficulty of car game circuits, opponents' strength ...) in order to keep your engagement constant: &amp;quot;In your flow zone!&amp;quot;. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the game (it is possible to start from an available open source game)&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the playing experience.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition &lt;br /&gt;
* Closed loop control: the game reacts to the user emotional state changing its behaviour.  &lt;br /&gt;
&lt;br /&gt;
These projects make it possible to experiment with biological-data acquisition tools and videogame design. &lt;br /&gt;
&lt;br /&gt;
Each project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective recognition in multimedia contexts&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive multimedia application (advertisement, e-learning, recommendation system) able to capture your emotional state (interest, excitement, anger, joy) while watching images, sounds etc. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to give feedback on the quality of multimedia content (e.g. goodness of the advertisement, enjoyment of the movie ...) &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the multimedia application.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the multimedia experience.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition &lt;br /&gt;
* Closed loop control: the multimedia application will provide contents according to your enjoyment.  &lt;br /&gt;
&lt;br /&gt;
These projects make it possible to experiment with biological-data acquisition tools and multimedia application design. &lt;br /&gt;
&lt;br /&gt;
The project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=MultimediaAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective robotics&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop a rehabilitation robotic game able to capture your emotional state (interest, excitement, anger, joy, stress) while interacting with the robot. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to adapt the therapy (executed by the game) according to the patient's needs. We believe the quality of the therapy is related to the subject's emotional state. The long term goal is to keep the user in a specific emotional state in order to maximize the therapy's efficacy. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the robotic game on the available robot.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the interaction with the robot.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition &lt;br /&gt;
* Closed loop control: the therapy will be adapted to the patient's needs.  &lt;br /&gt;
&lt;br /&gt;
These projects make it possible to experiment with biological-data acquisition tools, robots and videogame design. &lt;br /&gt;
&lt;br /&gt;
The project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=SimoAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Driving companions&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an application able to capture your emotional state (stress, attention level ...) while driving standard cars. The application will measure the driver's stress level by analyzing his biological signals, which mirror his physiological state, and could be used to give feedback to the driver in dangerous situations.&lt;br /&gt;
Project phases: &lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors while driving in different conditions (city, highway, country ...)&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition &lt;br /&gt;
* Closed loop control: the car will give audio/visual feedback to the user, letting him know his physiological state&lt;br /&gt;
&lt;br /&gt;
These projects make it possible to experiment with biological-data acquisition tools, robots and videogame design. &lt;br /&gt;
&lt;br /&gt;
The project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=CarAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Emotion from interaction&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to detect emotional states, such as stress or boredom, from the interaction with the computer via mouse and keyboard ([http://airwiki.elet.polimi.it/mediawiki/index.php/Emotion_from_Interaction Emotion from Interaction]). A library that acquires data from these devices has already been developed. Data have to be acquired in different situations and analyzed with neural networks or other classification tools already implemented.&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Agents, Multiagent Systems, Agencies ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== BioSignal Analysis ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Human-computer interaction via voice recognition system&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=We want to develop a system to allow voice interaction between the user and the wheelchair.&lt;br /&gt;
This project consists in developing one of the solutions proposed in the literature and extending the LURCH software to include this kind of interface. &lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
* Sphinx project [http://cmusphinx.org/]&lt;br /&gt;
&lt;br /&gt;
|start= Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=2.5-10&lt;br /&gt;
|image=LURCH_wheelchair.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
===== Brain-Computer Interface =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Development of an existing genetic algorithm for ERP-based BCIs&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=Different [http://en.wikipedia.org/wiki/Event-related_potential event-related potentials] (ERPs) are used in [[Brain-Computer Interface|BCIs]], e.g., [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300] and error potentials.&lt;br /&gt;
A [http://en.wikipedia.org/wiki/Genetic_algorithm genetic algorithm] (GA) for ERP feature extraction has been developed at the AIRLab.  The GA has been shown to work, but there are different directions that can be explored to further develop this algorithm and expand its application field.&lt;br /&gt;
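For illustration only, a feature-extraction GA of this kind can be sketched as bit-mask evolution; the fitness below is a toy stand-in (it rewards a fixed target mask), where the AIRLab algorithm instead scores a mask by classifier performance on ERP training data. All names and parameters here are our own assumptions:&lt;br /&gt;

```python
import random

def toy_ga(n_features=8, pop_size=20, generations=30, seed=1):
    """Tiny genetic algorithm evolving a feature-subset bit mask."""
    rng = random.Random(seed)
    # Toy target: prefer the first half of the features.
    target = [1] * (n_features // 2) + [0] * (n_features - n_features // 2)

    def fitness(mask):
        return sum(1 for m, t in zip(mask, target) if m == t)

    pop = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # keep the best half
        children = []
        while len(children) + len(elite) != pop_size:
            p1, p2 = rng.sample(elite, 2)
            cut = rng.randrange(1, n_features)
            child = p1[:cut] + p2[cut:]       # one-point crossover
            if rng.random() > 0.7:            # mutate with probability 0.3
                i = rng.randrange(n_features)
                child[i] = 1 - child[i]
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)
```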
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, Matlab&lt;br /&gt;
:Good programming skills required&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
:B. Dal Seno, M. Matteucci, L. Mainardi. ''A Genetic Algorithm for Automatic Feature Extraction in P300 Detection'' [http://ieeexplore.ieee.org/search/srchabstract.jsp?arnumber=4634243&amp;amp;isnumber=4633757&amp;amp;punumber=4625775]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=Ga-scheme.png}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Driving an autonomous wheelchair with a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=This project pulls together different Airlab projects with the aim to drive an autonomous wheelchair ([[LURCH - The autonomous wheelchair|LURCH]]) with a [[Brain-Computer Interface|BCI]], through the development of key software modules.  Depending on the effort the student is willing to put into it, the project can grow to a full experimental thesis.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, C, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
:Linux&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: R. Blatt et al. ''Brain Control of a Smart Wheelchair'' [http://www.booksonline.iospress.com/Content/View.aspx?piid=9401]&lt;br /&gt;
|start=November 2008&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=LURCH_wheelchair.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Online automatic tuning of the number of repetitions in a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=In a [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300]-based [[Brain-Computer_Interface|BCI]], (visual) stimuli are presented to the user, and the user's intention is recognized when a P300 potential is detected in response to the desired stimulus.  To improve accuracy, many stimulation rounds are usually performed before making a decision.  The exact number of repetitions depends on the user and on the quality of the classifier, but it is usually fixed a priori.  The aim of this project is to adapt the number of repetitions to changing conditions, so as to achieve the maximum accuracy in the minimum time.&lt;br /&gt;
Depending on the effort the student is willing to put into it, the project can grow to a full experimental thesis.&lt;br /&gt;
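As an illustration of the idea (a minimal sketch with made-up scores and thresholds, not the project's prescribed method), the decision could be made sequentially, stopping as soon as one stimulus clearly outscores the others:&lt;br /&gt;

```python
def adaptive_repetitions(score_rounds, margin=3.0, min_rounds=2, max_rounds=15):
    """Sequential stopping over stimulation rounds.

    score_rounds : list of per-round classifier scores, one score per
                   stimulus (higher = more P300-like; toy values).
    Returns (chosen_stimulus_index, rounds_used).
    """
    n_stimuli = len(score_rounds[0])
    totals = [0.0] * n_stimuli
    for r, scores in enumerate(score_rounds, start=1):
        for i, s in enumerate(scores):
            totals[i] += s
        ranked = sorted(range(n_stimuli), key=totals.__getitem__, reverse=True)
        best, second = ranked[0], ranked[1]
        gap = totals[best] - totals[second]
        # Stop as soon as the evidence gap is decisive (but not too early).
        if r >= min_rounds and gap >= margin:
            break
        if r >= max_rounds:
            break
    return best, r
```

Here `margin`, `min_rounds` and `max_rounds` are illustrative tuning knobs; in the project they would be adapted on-line to the user and classifier.&lt;br /&gt;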
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: E. Donchin, K.M. Spencer, R. Wijesinghe. ''The Mental Prosthesis: Assessing the Speed of a P300-Based Brain-Computer Interface'' [http://www.cs.cmu.edu/~tanja/BCI/P300Speed_2000.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=B_p300_speller.jpg}}&lt;br /&gt;
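The adaptive-stopping idea described above can be sketched as a toy decision rule: accumulate per-symbol classifier scores across stimulation rounds and stop as soon as the softmax posterior of the leading symbol exceeds a confidence threshold. This is a minimal Python illustration; the score model, symbol count, and threshold are invented for the example and are not part of BCI2000.&lt;br /&gt;

```python
import math

def posterior(score_sums):
    # Softmax over accumulated per-symbol scores (toy evidence model).
    exps = [math.exp(s) for s in score_sums]
    total = sum(exps)
    return [e / total for e in exps]

def adaptive_repetitions(round_scores, threshold=0.9, max_rounds=10):
    """Return (chosen symbol index, stimulation rounds used).

    round_scores holds one list of per-symbol classifier scores per
    round; the scores themselves are a hypothetical classifier output.
    """
    sums = [0.0] * len(round_scores[0])
    used = 0
    for scores in round_scores[:max_rounds]:
        used += 1
        sums = [a + b for a, b in zip(sums, scores)]
        post = posterior(sums)
        best = max(range(len(post)), key=post.__getitem__)
        if post[best] >= threshold:  # confident enough: stop early
            return best, used
    # Not confident within the budget: return the best guess anyway.
    return max(range(len(sums)), key=sums.__getitem__), used

# Three rounds of scores that consistently favour symbol 1.
symbol, used = adaptive_repetitions(
    [[0.1, 0.8, 0.0], [0.2, 0.9, 0.1], [0.0, 1.0, 0.2]])
```

With well-separated scores the rule stops after a few rounds; with noisy scores it falls back to the maximum allowed number of repetitions.&lt;br /&gt;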
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reproduction of an algorithm for the recognition of error potentials&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=Error potentials (ErrPs) are [http://en.wikipedia.org/wiki/Event-related_potential event-related potentials] present in the EEG (electroencephalogram) when a subject makes a mistake or when the machine a subject is interacting with behaves in an unexpected way.  They could be used in the [[Brain-Computer Interface|BCI]] field to improve the performance of a BCI by automatically detecting classification errors.&lt;br /&gt;
The project aims at reproducing algorithms for ErrP detection from the literature.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
:P.W. Ferrez, J. Millán. ''You Are Wrong! Automatic Detection of Interaction Errors from Brain Waves'' [ftp://ftp.idiap.ch/pub/reports/2005/ferrez_2005_ijcai.pdf]&lt;br /&gt;
:G. Schalk et al. ''EEG-based communication: presence of an error potential'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;issn=13882457&amp;amp;volume=111&amp;amp;issue=12&amp;amp;firstpage=2138&amp;amp;form=html]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-15&lt;br /&gt;
|image=Bci_arch.png}}&lt;br /&gt;
&lt;br /&gt;
==== Computer Vision and Image Analysis ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Environment Monitoring&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The goal of this project is to develop a video surveillance system to track vehicles or people in 3D. &lt;br /&gt;
The idea is to use one or more calibrated cameras to estimate the positions and trajectories of the moving objects in the scene. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library&lt;br /&gt;
* Linux o.s.&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
* Probabilistic robotics/IMAD&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by extending the algorithm to generic outdoor environments.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=10-15&lt;br /&gt;
|image=Danch4.png &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Visual Merchandising&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The goal of this project is to develop algorithms to count the number of products on the shelves of a market.&lt;br /&gt;
The idea is to use a calibrated camera to recognize the shelves, estimate the scale and improve the image quality. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
|start= As soon as possible&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=VisualM.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Analysis of patch recognition algorithms&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=Extracting distinctive features from images is very important in computer vision applications.&lt;br /&gt;
Such features can be used in algorithms for tasks like matching different views of an object or scene (e.g. for stereo vision) and object recognition.&lt;br /&gt;
The aim of this work is to integrate the solutions proposed in the literature into an existing framework.&lt;br /&gt;
&lt;br /&gt;
Skills&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
*Oxford website [http://www.robots.ox.ac.uk/~vgg/research/affine/index.html]&lt;br /&gt;
*Hess website [http://web.engr.oregonstate.edu/~hess/index.html]&lt;br /&gt;
*Feature FAST [http://mi.eng.cam.ac.uk/~er258/work/fast.html]&lt;br /&gt;
&lt;br /&gt;
|start= Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=Object.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Catadioptric MonoSLAM &lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The goal of this work is to investigate SLAM solutions based on a catadioptric camera, integrating solutions presented in the literature into an existing framework.&lt;br /&gt;
Improvements could be the basis for a thesis.&lt;br /&gt;
&lt;br /&gt;
Skills&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
*Visual SLAM by Single Catadioptric Stereo [http://cv2.kaist.ac.kr/VisualSLAMBySingleCameraCatadioptricStereo.pdf]&lt;br /&gt;
*Catadioptric reconstruction [http://citeseer.ist.psu.edu/cache/papers/cs/23657/http:zSzzSzwww.cis.upenn.eduzSz~cgeyerzSzsfm_tr.pdf/geyer01structure.pdf]&lt;br /&gt;
&lt;br /&gt;
|start= Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=Photo.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Trinocular Vision System (SUGR)&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=A trinocular vision system is a device composed of three cameras that makes it possible to measure 3D data (in this case, segments) directly from images.&lt;br /&gt;
The aim of this project/short thesis is to implement a trinocular algorithm based on SUGR, a library for Uncertain Projective Geometry.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Skills&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|start= As soon as possible&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=Trinoex.jpg&lt;br /&gt;
}}&lt;br /&gt;
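As a rough illustration of the measurement principle (each camera constrains the observed point to a ray, and intersecting rays from calibrated cameras recovers its position), here is a toy 2-D two-ray intersection in Python; the camera positions and viewing directions are invented for the example, and SUGR itself is not used.&lt;br /&gt;

```python
def intersect_rays(p1, d1, p2, d2):
    """Intersect two 2-D rays p + t*d (a toy stand-in for triangulation).

    Solves p1 + t1*d1 = p2 + t2*d2 for t1 with Cramer's rule.
    """
    det = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    if det == 0.0:  # exact check is enough for this toy example
        raise ValueError("rays are parallel")
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) - (-d2[0]) * ry) / det
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Two "cameras", one at the origin and one at (4, 0), both observing
# the same point along their (invented) viewing directions.
pt = intersect_rays((0.0, 0.0), (1.0, 1.0), (4.0, 0.0), (-1.0, 1.0))
```

A real trinocular system does the same job in 3-D with three rays and models the uncertainty of each measurement, which is what SUGR provides.&lt;br /&gt;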
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=GIFT and features extraction and description&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The idea is to improve and optimize the solution proposed by Campari et al., who propose estimating invariant descriptors using geodesic feature descriptors based on color information.&lt;br /&gt;
&lt;br /&gt;
Skills&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|start= Anytime&lt;br /&gt;
|number=1-3&lt;br /&gt;
|cfu=10-20&lt;br /&gt;
|image=Palla_GIFT.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Multimedia Indexing Framework&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The goal of this project is to develop a framework for multimedia indexing.&lt;br /&gt;
The idea is to create an image database indexer that supports queries by image or by string.&lt;br /&gt;
&lt;br /&gt;
Skills&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
*CBIR system definition [http://en.wikipedia.org/wiki/CBIR]&lt;br /&gt;
*Image database [http://www.cs.washington.edu/research/imagedatabase/]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|start= Anytime&lt;br /&gt;
|number=1-3&lt;br /&gt;
|cfu=10-20&lt;br /&gt;
|image=CIR.gif&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== E-Science ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Machine Learning ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reinforcement Learning Competition&lt;br /&gt;
|tutor=Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=This project has the goal of participating in (and possibly winning ;)) the 2009 Reinforcement Learning Competition. To get an idea of what participating in such a competition means, you can have a look at the website of the [http://rl-competition.org/content/view/51/79/ 2008 RL competition].&lt;br /&gt;
The problems that will be proposed are still unknown. As soon as the domains are published, the work will start by analyzing their main characteristics; we will then identify which RL algorithms are best suited to solving them. After an implementation phase, the project will require a long experimental period to tune the parameters of the learning algorithms in order to improve performance as much as possible.&lt;br /&gt;
|start=January, 2009&lt;br /&gt;
|number=2-4&lt;br /&gt;
|cfu=10-20&lt;br /&gt;
|image=keepaway.gif}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Learning API for TORCS&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS in which competitors were asked to develop a controller using their preferred machine learning techniques. The goal of this project is to extend the existing C++ API (available [http://cig.dei.polimi.it/ here]) to simplify the development of controllers within a learning framework.&lt;br /&gt;
Such an extension can be partially developed by porting an existing Java API for TORCS that already provides many functionalities for machine learning approaches.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=TORCS.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= EyeBot&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it), Alessandro Giusti (giusti-AT-elet-DOT-polimi-DOT-it), and Pierluigi Taddei (taddei-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS in which competitors were asked to develop a controller using their preferred machine learning techniques. So far, the controllers developed for TORCS have used as input only information extracted directly from the state of the game. The goal of this project is to extend the existing controller API (see [http://cig.dei.polimi.it/ here]) to use visual information (e.g. screenshots of the game) as input to the controllers. A successful project will include both the development of the API and some basic image preprocessing to extract information from the images.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 20&lt;br /&gt;
|image=TORCS2.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= SmarTrack&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The generation of customized game content for each player is an attractive direction for improving the game experience in next-generation computer games. In this scenario, machine learning could play an important role in automatically providing such customized game content.&lt;br /&gt;
The goal of this project is to apply machine learning techniques for the generation of customized tracks in&lt;br /&gt;
[http://torcs.sourceforge.net/ TORCS], a state-of-the-art open source racing simulator. The project includes different activities: the automatic generation of tracks, the selection of relevant features to characterize a track, and the analysis of an interest measure.  &lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 20&lt;br /&gt;
|image=TORCS3.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= TORCS competition&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS in which competitors were asked to develop a controller using their preferred machine learning techniques.&lt;br /&gt;
The goal of this project is to apply any machine learning technique to develop a successful controller following the competition rules (available [http://cig.dei.polimi.it/?page_id=67 here]).&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 &lt;br /&gt;
|cfu=5&lt;br /&gt;
|image=TORCS.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Ontologies and Semantic Web ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== Philosophy of Artificial Intelligence ====--&amp;gt;&lt;br /&gt;
==== Robotics ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robot games&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive game with robots using commercial devices such as the Wii Remote (see the [http://airwiki.elet.polimi.it/mediawiki/index.php/Robogames Robogames page])  &lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Design and implementation of the game on one of the available robots&lt;br /&gt;
* Design of the game and a new suitable robot&lt;br /&gt;
* Implementation/setting of a suitable robot&lt;br /&gt;
* Evaluation of the game with users (in collaboration with [http://www.elet.polimi.it/people/garzotto Franca Garzotto])&lt;br /&gt;
&lt;br /&gt;
These projects make it possible to experiment with real mobile robots and real interaction devices.&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by producing a new game and robot.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-12.5&lt;br /&gt;
|image=Robowii_robot.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robocup: soccer robots&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it), Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to finalize the team of robots that will participate in the RoboCup world championship in Graz next summer (see the [http://www.robocup.org RoboCup page] and the [http://robocup.elet.polimi.it MRT Team page])  &lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Implementation of the mechanical and electronic parts of the robots for ball handling and kicking&lt;br /&gt;
* Design of robot behaviors (fuzzy systems)&lt;br /&gt;
* Coordination of robots&lt;br /&gt;
* New sensors&lt;br /&gt;
&lt;br /&gt;
These projects make it possible to experiment with real mobile robots. Participating in the championship is a unique experience (2000 people, with 800 robots playing all sorts of games...)&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by tackling different problems in depth.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-12.5&lt;br /&gt;
|image=RIeRO.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Calibration of IMU-camera system&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=This work addresses the problem of calibrating a system composed of an Xsens Inertial Measurement Unit and a Fire-i camera. The project will focus on estimating both the unknown rotation between the two devices and the extrinsic/intrinsic parameters of the camera. Such an algorithm makes it possible to use the system for SLAM or robotics applications, such as a wearable device for autonomous navigation or augmented reality. &lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab/C++&lt;br /&gt;
&lt;br /&gt;
;Links&lt;br /&gt;
:Matlab Toolbox for mutual calibration [http://www.deec.uc.pt/~jlobo/InerVis_WebIndex/InerVis_Toolbox.html]&lt;br /&gt;
:List of publications [http://www.deec.uc.pt/~jlobo/InerVis_WebIndex/InerVis_Pubs.php]&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=Imu_cam_big_sphere.gif}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=MonoSLAM system implementation&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The aim of this proposal is to investigate the different single-camera SLAM solutions proposed in the literature.&lt;br /&gt;
After an in-depth bibliographic study, the work will focus on implementing one of these algorithms within an existing framework and, for the thesis option only, on investigating possible improvements. &lt;br /&gt;
&lt;br /&gt;
The algorithms of interest are based on [http://www-personal.acfr.usyd.edu.au/tbailey/software/slam_simulations.htm]:&lt;br /&gt;
*Extended Kalman Filter [http://www.doc.ic.ac.uk/~ajd/publications.html]&lt;br /&gt;
*Unscented Kalman Filter [http://www.cs.unc.edu/~welch/kalman/media/pdf/Julier1997_SPIE_KF.pdf]&lt;br /&gt;
*FastSLAM [http://robots.stanford.edu/papers.html]&lt;br /&gt;
*GraphSLAM [http://mi.eng.cam.ac.uk/~ee231/]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab/C++&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=KC_jc_third.jpg}}&lt;br /&gt;
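As a minimal illustration of the Extended Kalman Filter machinery shared by several of the listed variants, here is a toy 1-D predict/update cycle in Python; the additive motion model, the direct measurement model, and the noise values are assumptions made for the example, not MonoSLAM itself.&lt;br /&gt;

```python
def ekf_step(x, p, u, z, q=0.01, r=0.04):
    """One predict/update cycle of a toy 1-D Kalman filter.

    x, p : current state estimate and its variance
    u    : control input (assumed additive motion model)
    z    : measurement (assumed direct observation of the state)
    q, r : process and measurement noise variances (made-up values)
    """
    # Predict: propagate the state and inflate the uncertainty.
    x_pred = x + u
    p_pred = p + q
    # Update: blend prediction and measurement via the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# Start uncertain, move by 1, observe 1.2: the estimate lands between
# the prediction and the measurement, and the variance shrinks.
x, p = ekf_step(0.0, 1.0, u=1.0, z=1.2)
```

The same predict/gain/update structure carries over to the full multi-dimensional EKF, where x becomes the camera-plus-landmark state and p its covariance matrix.&lt;br /&gt;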
&lt;br /&gt;
&amp;lt;!--==== Soft Computing ====--&amp;gt;&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=File:CIR.gif&amp;diff=4496</id>
		<title>File:CIR.gif</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=File:CIR.gif&amp;diff=4496"/>
				<updated>2008-10-17T10:32:25Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Master_Level_Course_Projects&amp;diff=4495</id>
		<title>Master Level Course Projects</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Master_Level_Course_Projects&amp;diff=4495"/>
				<updated>2008-10-17T10:28:16Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: /* Computer Vision and Image Analysis */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you can find a list of project proposals for the courses of &amp;quot;Laboratorio di Intelligenza Artificiale e Robotica&amp;quot; (5 CFU for each student) and &amp;quot;Soft Computing&amp;quot; (1 CFU for each student)&lt;br /&gt;
&lt;br /&gt;
==== Affective Computing ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective VideoGames&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive video game (car game, shoot-'em-up, strategy game, ...) able to adapt its behaviour in order to maximize your enjoyment. The game will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system will be able to adjust some parameters (e.g. the difficulty of car-game circuits, the opponents' strength, ...) in order to keep your engagement constant: &amp;quot;in your flow zone!&amp;quot;. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the game (it is possible to start from an available open source game)&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the playing experience.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition &lt;br /&gt;
* Closed-loop control: the game reacts to the user's emotional state, changing its behaviour.  &lt;br /&gt;
&lt;br /&gt;
These projects make it possible to experiment with biological-data acquisition tools and videogame design. &lt;br /&gt;
&lt;br /&gt;
Each project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s)&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective recognition in multimedia contexts&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive multimedia application (advertisement, e-learning, recommendation system) able to capture your emotional state (interest, excitement, anger, joy) while you watch images, listen to sounds, etc. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to give feedback on the quality of multimedia content (e.g. the effectiveness of an advertisement, the enjoyment of a movie, ...) &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the multimedia application.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the multimedia experience.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition &lt;br /&gt;
* Closed-loop control: the multimedia application will provide contents according to your enjoyment.  &lt;br /&gt;
&lt;br /&gt;
These projects make it possible to experiment with biological-data acquisition tools and multimedia application design. &lt;br /&gt;
&lt;br /&gt;
The project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s)&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=MultimediaAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective robotics&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop a rehabilitation robotic game able to capture your emotional state (interest, excitement, anger, joy, stress) while you interact with the robot. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to adapt the therapy (delivered through the game) to the patient's needs. We believe the quality of the therapy is related to the subject's emotional state. The long-term goal is to keep the user in a specific emotional state in order to maximize the therapy's efficacy. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the robotic game on the available robot.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the interaction with the robot.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition &lt;br /&gt;
* Closed-loop control: the therapy will be adapted to the patient's needs.  &lt;br /&gt;
&lt;br /&gt;
These projects make it possible to experiment with biological-data acquisition tools, robots and videogame design. &lt;br /&gt;
&lt;br /&gt;
The project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s)&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=SimoAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Driving companions&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an application able to capture your emotional state (stress, attention level, ...) while driving a standard car. The application will measure the driver's stress level by analyzing his biological signals, which mirror his physiological state, and could be used to give feedback to the driver in dangerous situations.&lt;br /&gt;
Project phases: &lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors while driving in different conditions (city, highway, country, ...)&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition &lt;br /&gt;
* Closed-loop control: the car will give audio/visual feedback to the driver, letting him know his physiological state&lt;br /&gt;
&lt;br /&gt;
These projects make it possible to experiment with biological-data acquisition tools, robots and videogame design. &lt;br /&gt;
&lt;br /&gt;
The project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s)&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=CarAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Emotion from interaction&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to detect emotional states, such as stress or boredom, from the interaction with the computer via mouse and keyboard ([http://airwiki.elet.polimi.it/mediawiki/index.php/Emotion_from_Interaction Emotion from Interaction]). A library that acquires data from these devices has already been developed. Data have to be acquired in different situations and analyzed with neural networks or other classification tools already implemented.&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Agents, Multiagent Systems, Agencies ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== BioSignal Analysis ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Human-computer interaction via voice recognition system&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=We want to develop a system that allows voice interaction between the user and the wheelchair.&lt;br /&gt;
This project consists in implementing one of the solutions proposed in the literature and extending the LURCH software to include this kind of interface. &lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
* Sphinx project [http://cmusphinx.org/]&lt;br /&gt;
&lt;br /&gt;
|start= Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=2.5-10&lt;br /&gt;
|image=LURCH_wheelchair.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
===== Brain-Computer Interface =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Development of an existing genetic algorithm for ERP-based BCIs&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=Different [http://en.wikipedia.org/wiki/Event-related_potential event-related potentials] (ERPs) are used in [[Brain-Computer Interface|BCIs]], e.g., [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300] and error potentials.&lt;br /&gt;
A [http://en.wikipedia.org/wiki/Genetic_algorithm genetic algorithm] (GA) for ERP feature extraction has been developed at the Airlab.  The GA has been proved to work, but there are different directions that can be explored to further develop this algorithm and expand its field of application.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, Matlab&lt;br /&gt;
:Good programming skills required&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
:B. Dal Seno, M. Matteucci, L. Mainardi. ''A Genetic Algorithm for Automatic Feature Extraction in P300 Detection'' [http://ieeexplore.ieee.org/search/srchabstract.jsp?arnumber=4634243&amp;amp;isnumber=4633757&amp;amp;punumber=4625775]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=Ga-scheme.png}}&lt;br /&gt;
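As a rough sketch of the genetic-algorithm idea behind this proposal (feature subsets encoded as bit strings and evolved toward higher classification fitness), here is a toy GA in Python; the fitness function, population size, and mutation rate are invented for the example and do not reproduce the Airlab algorithm.&lt;br /&gt;

```python
import random

random.seed(0)

TARGET = [1, 0, 1, 1, 0, 0, 1, 0]  # pretend-optimal feature mask

def fitness(mask):
    # Toy fitness: number of features agreeing with the pretend optimum.
    return sum(1 for a, b in zip(mask, TARGET) if a == b)

def mutate(mask, rate=0.1):
    # Flip each bit with probability `rate`.
    return [1 - g if random.random() > 1.0 - rate else g for g in mask]

def crossover(a, b):
    # One-point crossover between two parent masks.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=20, generations=40):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # keep the fitter half (elitism)
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

In the real setting, the fitness would be the accuracy of an ERP classifier trained on the selected features rather than agreement with a fixed mask.&lt;br /&gt;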
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Driving an autonomous wheelchair with a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=This project pulls together different Airlab projects with the aim of driving an autonomous wheelchair ([[LURCH - The autonomous wheelchair|LURCH]]) with a [[Brain-Computer Interface|BCI]], through the development of key software modules.  Depending on the effort the student is willing to put into it, the project can grow to a full experimental thesis.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, C, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
:Linux&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: R. Blatt et al. ''Brain Control of a Smart Wheelchair'' [http://www.booksonline.iospress.com/Content/View.aspx?piid=9401]&lt;br /&gt;
|start=November 2008&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=LURCH_wheelchair.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Online automatic tuning of the number of repetitions in a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=In a [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300]-based [[Brain-Computer_Interface|BCI]], (visual) stimuli are presented to the user, and the user's intention is recognized when a P300 potential is detected in response to the desired stimulus.  In order to improve accuracy, many stimulation rounds are usually performed before making a decision.  The exact number of repetitions depends on the user and on the quality of the classifier, but it is usually fixed a priori.  The aim of this project is to adapt the number of repetitions to changing conditions, so as to achieve the maximum accuracy in the minimum time.&lt;br /&gt;
Depending on the effort the student is willing to put into it, the project can grow into a full experimental thesis.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: E. Donchin, K.M. Spencer, R. Wijesinghe. ''The Mental Prosthesis: Assessing the Speed of a P300-Based Brain-Computer Interface'' [http://www.cs.cmu.edu/~tanja/BCI/P300Speed_2000.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=B_p300_speller.jpg}}&lt;br /&gt;
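The adaptive-stopping idea can be sketched as follows: per-stimulus classifier scores are accumulated over repetitions, and stimulation stops as soon as the leading stimulus outscores the runner-up by a confidence margin. The scores, margin, and round limit below are made up for illustration; this is not BCI2000 code.

```python
import operator

def choose_stimulus(score_rounds, margin=3.0, max_rounds=15):
    """Accumulate per-stimulus classifier scores round by round and stop
    early once the best stimulus leads the second best by 'margin'.
    Returns (winner_index, rounds_used)."""
    totals = [0.0] * len(score_rounds[0])
    rounds_used = 0
    for scores in score_rounds[:max_rounds]:
        for i, s in enumerate(scores):
            totals[i] += s
        rounds_used += 1
        ranked = sorted(totals, reverse=True)
        # Stop as soon as the leader's advantage reaches the margin.
        if operator.ge(ranked[0] - ranked[1], margin):
            break
    winner = totals.index(max(totals))
    return winner, rounds_used

# Example: stimulus 2 consistently scores higher, so two rounds suffice.
rounds = [[0.1, 0.2, 1.8, 0.0]] * 10
print(choose_stimulus(rounds))  # prints (2, 2)
```

With noisy scores the same loop simply keeps stimulating until the margin is reached or the round limit runs out, which is exactly the accuracy/time trade-off the project targets.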
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reproduction of an algorithm for the recognition of error potentials&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=Error potentials (ErrPs) are [http://en.wikipedia.org/wiki/Event-related_potential event-related potentials] present in the EEG (electroencephalogram) when a subject makes a mistake or when the machine a subject is interacting with works in an unexpected way.  They could be used in the [[Brain-Computer Interface|BCI]] field to improve the performance of a BCI by automatically detecting classification errors.&lt;br /&gt;
The project aims at reproducing algorithms for ErrP detection from the literature.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
:P.W. Ferrez, J. Millán. ''You Are Wrong! Automatic Detection of Interaction Errors from Brain Waves'' [ftp://ftp.idiap.ch/pub/reports/2005/ferrez_2005_ijcai.pdf]&lt;br /&gt;
:G. Schalk et al. ''EEG-based communication: presence of an error potential'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;issn=13882457&amp;amp;volume=111&amp;amp;issue=12&amp;amp;firstpage=2138&amp;amp;form=html]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-15&lt;br /&gt;
|image=Bci_arch.png}}&lt;br /&gt;
&lt;br /&gt;
==== Computer Vision and Image Analysis ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Environment Monitoring&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The goal of this project is to develop a video surveillance system to track vehicles or people in 3D. &lt;br /&gt;
The idea is to use one or more calibrated cameras to estimate the positions and trajectories of the moving objects in the scene. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library&lt;br /&gt;
* Linux o.s.&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
* Probabilistic robotics/IMAD&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by extending the algorithm to generic outdoor environments.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=10-15&lt;br /&gt;
|image=Danch4.png &lt;br /&gt;
}}&lt;br /&gt;
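One building block of such a system is recovering a 3D point from two calibrated views of it. Below is a minimal sketch of linear (DLT) triangulation; the projection matrices and the point are toy values, not those of a real camera installation.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation: given the 3x4 projection matrices P1, P2
    and the image coordinates x1, x2 of the same point in the two views,
    solve A X = 0 for the homogeneous 3D point via SVD."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]               # right singular vector of the smallest singular value
    return X[:3] / X[3]      # dehomogenize

# Toy cameras with identity intrinsics; the second camera is translated by
# one unit along x.  A point at (0, 0, 5) projects to (0, 0) and (-0.2, 0).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
print(triangulate(P1, P2, (0.0, 0.0), (-0.2, 0.0)))  # about [0, 0, 5]
```

Feeding such triangulated points to a tracker (e.g. a Kalman filter) per frame yields the 3D trajectories the project asks for.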
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Visual Merchandising&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The goal of this project is to develop algorithms to count the number of products on the shelves of a market.&lt;br /&gt;
The idea is to use a calibrated camera to recognize the shelves, estimate the scale and improve the image quality. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
|start= As soon as possible&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=VisualM.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Analysis of patch recognition algorithms&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=Extracting distinctive features from images is very important in computer vision applications.&lt;br /&gt;
Such features can be used in algorithms for tasks like matching different views of an object or scene (e.g., for stereo vision) and object recognition.&lt;br /&gt;
The aim of this work is to integrate the solutions proposed in the literature into an existing framework.&lt;br /&gt;
&lt;br /&gt;
Skills&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
*Oxford website [http://www.robots.ox.ac.uk/~vgg/research/affine/index.html]&lt;br /&gt;
*Hess website [http://web.engr.oregonstate.edu/~hess/index.html]&lt;br /&gt;
*Feature FAST [http://mi.eng.cam.ac.uk/~er258/work/fast.html]&lt;br /&gt;
&lt;br /&gt;
|start= Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=Object.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Catadioptric MonoSLAM &lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The goal of this work is to investigate SLAM solutions based on a catadioptric camera, integrating a solution presented in the literature into an existing framework.&lt;br /&gt;
Improvements could be the basis for a thesis.&lt;br /&gt;
&lt;br /&gt;
Skills&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
*Visual SLAM by Single Catadioptric Stereo [http://cv2.kaist.ac.kr/VisualSLAMBySingleCameraCatadioptricStereo.pdf]&lt;br /&gt;
*Catadioptric reconstruction [http://citeseer.ist.psu.edu/cache/papers/cs/23657/http:zSzzSzwww.cis.upenn.eduzSz~cgeyerzSzsfm_tr.pdf/geyer01structure.pdf]&lt;br /&gt;
&lt;br /&gt;
|start= Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=Photo.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Trinocular Vision System (SUGR)&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=A trinocular vision system is a device composed of three cameras that makes it possible to measure 3D data (in this case, segments) directly from images.&lt;br /&gt;
The aim of this project/short thesis is to implement a trinocular algorithm based on SUGR, a library for Uncertain Projective Geometry.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Skills&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|start= As soon as possible&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=Trinoex.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=GIFT and features extraction and description&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The idea is to improve and optimize the solution proposed by Campari et al., who estimate invariant descriptors using geodesic feature descriptors based on color information.&lt;br /&gt;
&lt;br /&gt;
Skills&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|start= Anytime&lt;br /&gt;
|number=1-3&lt;br /&gt;
|cfu=10-20&lt;br /&gt;
|image=Palla_GIFT.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== E-Science ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Machine Learning ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reinforcement Learning Competition&lt;br /&gt;
|tutor=Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=This project has the goal of participating in (and possibly winning ;)) the 2009 Reinforcement Learning competition. To get an idea of what participating in such a competition means, you can have a look at the website of the [http://rl-competition.org/content/view/51/79/ 2008 RL competition].&lt;br /&gt;
The problems that will be proposed are still unknown. As soon as the domains are published, the work will start by analyzing their main characteristics; then we will identify which RL algorithms are best suited to solving such problems. After an implementation phase, the project will require a long experimental period to tune the parameters of the learning algorithms in order to improve performance as much as possible.&lt;br /&gt;
|start=January, 2009&lt;br /&gt;
|number=2-4&lt;br /&gt;
|cfu=10-20&lt;br /&gt;
|image=keepaway.gif}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Learning API for TORCS&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS in which competitors were asked to develop a controller using their preferred machine learning techniques. The goal of this project is to extend the existing C++ API (available [http://cig.dei.polimi.it/ here]) to simplify the development of controllers within a learning framework.&lt;br /&gt;
Such an extension can be partially developed by porting an existing Java API for TORCS that already provides many functionalities for machine learning approaches.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=TORCS.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= EyeBot&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it), Alessandro Giusti (giusti-AT-elet-DOT-polimi-DOT-it), and Pierluigi Taddei (taddei-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS in which competitors were asked to develop a controller using their preferred machine learning techniques. So far, the controllers developed for TORCS have used as input only information extracted directly from the state of the game. The goal of this project is to extend the existing controller API (see [http://cig.dei.polimi.it/ here]) to use visual information (e.g., screenshots of the game) as input to the controllers. A successful project will include both the development of the API and some basic image preprocessing to extract information from the images.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 20&lt;br /&gt;
|image=TORCS2.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= SmarTrack&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The generation of customized game content for each player is an attractive direction for improving the game experience in next-generation computer games. In this scenario, machine learning could play an important role in automatically providing such customized game content.&lt;br /&gt;
The goal of this project is to apply machine learning techniques to the generation of customized tracks in&lt;br /&gt;
[http://torcs.sourceforge.net/ TORCS], a state-of-the-art open source racing simulator. The project includes different activities: the automatic generation of tracks, the selection of relevant features to characterize a track, and the analysis of an interest measure.  &lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 20&lt;br /&gt;
|image=TORCS3.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= TORCS competition&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS in which competitors were asked to develop a controller using their preferred machine learning techniques.&lt;br /&gt;
The goal of this project is to apply any machine learning technique to develop a successful controller following the competition rules (available [http://cig.dei.polimi.it/?page_id=67 here]).&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 &lt;br /&gt;
|cfu=5&lt;br /&gt;
|image=TORCS.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Ontologies and Semantic Web ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== Philosophy of Artificial Intelligence ====--&amp;gt;&lt;br /&gt;
==== Robotics ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robot games&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive game with robots using commercial devices such as the Wii Remote (see the [http://airwiki.elet.polimi.it/mediawiki/index.php/Robogames Robogames page]).&lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Design and implementation of the game on one of the available robots&lt;br /&gt;
* Design of the game and a new suitable robot&lt;br /&gt;
* Implementation/setting of a suitable robot&lt;br /&gt;
* Evaluation of the game with users (in collaboration with [http://www.elet.polimi.it/people/garzotto Franca Garzotto])&lt;br /&gt;
&lt;br /&gt;
These projects make it possible to experiment with real mobile robots and real interaction devices.&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by producing a new game and robot.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-12.5&lt;br /&gt;
|image=Robowii_robot.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robocup: soccer robots&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it), Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to finalize the team of robots that will participate in the RoboCup world championship in Graz next summer (see the [http://www.robocup.org Robocup page] and the [http://robocup.elet.polimi.it MRT Team page]).&lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Implementation of the mechanical and electronic parts of the robots for ball management and kicking&lt;br /&gt;
* Design of robot behaviors (fuzzy systems)&lt;br /&gt;
* Coordination of robots&lt;br /&gt;
* New sensors&lt;br /&gt;
&lt;br /&gt;
These projects make it possible to experiment with real mobile robots. Participation in the championship is a unique experience (2000 people, with 800 robots playing all sorts of games...)&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by facing different problems in depth.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-12.5&lt;br /&gt;
|image=RIeRO.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Calibration of IMU-camera system&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=This work addresses the problem of calibrating a system composed of an Xsens &lt;br /&gt;
Inertial Measurement Unit and a Fire-i camera. The project will focus on &lt;br /&gt;
estimating both the unknown rotation between the two devices and the &lt;br /&gt;
extrinsic/intrinsic parameters of the camera. Such an algorithm makes it possible to use the &lt;br /&gt;
system for SLAM or other robotics applications, like a wearable device for autonomous &lt;br /&gt;
navigation or augmented reality. &lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab/C++&lt;br /&gt;
&lt;br /&gt;
;Links&lt;br /&gt;
:Matlab Toolbox for mutual calibration [http://www.deec.uc.pt/~jlobo/InerVis_WebIndex/InerVis_Toolbox.html]&lt;br /&gt;
:List of publications [http://www.deec.uc.pt/~jlobo/InerVis_WebIndex/InerVis_Pubs.php]&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=Imu_cam_big_sphere.gif}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=MonoSLAM system implementation&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The aim of this proposal is to investigate the different monocular SLAM solutions proposed in the literature.&lt;br /&gt;
After an in-depth bibliographic search, the work will focus on implementing one of these algorithms within an existing framework and, only for the thesis option, on investigating possible improvements. &lt;br /&gt;
&lt;br /&gt;
The algorithms of interest are based on [http://www-personal.acfr.usyd.edu.au/tbailey/software/slam_simulations.htm]:&lt;br /&gt;
*Extended Kalman Filter [http://www.doc.ic.ac.uk/~ajd/publications.html]&lt;br /&gt;
*Unscented Kalman Filter [http://www.cs.unc.edu/~welch/kalman/media/pdf/Julier1997_SPIE_KF.pdf]&lt;br /&gt;
*FastSLAM [http://robots.stanford.edu/papers.html]&lt;br /&gt;
*GraphSLAM [http://mi.eng.cam.ac.uk/~ee231/]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab/C++&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=KC_jc_third.jpg}}&lt;br /&gt;
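As a reference for the Extended Kalman Filter option, here is a minimal predict/update cycle on a linear toy model (a 1D constant-velocity state with position-only measurements). MonoSLAM applies the same skeleton to the camera pose and landmark states, with F and H replaced by Jacobians of the motion and projection models; all numbers here are illustrative only.

```python
import numpy as np

def ekf_step(x, P, z, dt=1.0, q=0.01, r=0.1):
    """One predict/update cycle for a 1D constant-velocity state
    [position, velocity], observing position only.  For this linear model
    the EKF reduces to the plain Kalman filter (constant F and H)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])  # state-transition model
    H = np.array([[1.0, 0.0]])             # measurement model
    Q = q * np.eye(2)                      # process-noise covariance
    R = np.array([[r]])                    # measurement-noise covariance
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update
    y = z - H @ x                          # innovation
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

x, P = np.zeros(2), np.eye(2)
for z in [1.0, 2.0, 3.0]:                  # a noiseless unit-velocity track
    x, P = ekf_step(x, P, np.array([z]))
print(x)  # position estimate near 3, velocity estimate near 1
```

In the full system the state vector also stacks the 3D landmark positions, so P grows quadratically with the map size, which is the main scalability limit these EKF-based approaches face.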
&lt;br /&gt;
&amp;lt;!--==== Soft Computing ====--&amp;gt;&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Master_Level_Course_Projects&amp;diff=4494</id>
		<title>Master Level Course Projects</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Master_Level_Course_Projects&amp;diff=4494"/>
				<updated>2008-10-17T10:27:21Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: /* Computer Vision and Image Analysis */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you can find a list of project proposals for the courses of &amp;quot;Laboratorio di Intelligenza Artificiale e Robotica&amp;quot; (5 CFU for each student) and &amp;quot;Soft Computing&amp;quot; (1 CFU for each student)&lt;br /&gt;
&lt;br /&gt;
==== Affective Computing ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective VideoGames&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive video game (car game, shoot 'em up, strategy game, ...) able to adapt its behaviour in order to maximize your enjoyment. The game will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system will be able to adjust some parameters (e.g., the difficulty of car-game circuits, the opponents' strength, ...) in order to keep your engagement constant: &amp;quot;In your flow zone!&amp;quot; &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the game (it is possible to start from an available open source game)&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the playing experience.&lt;br /&gt;
* Off-line classification of the data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the game reacts to the user's emotional state by changing its behaviour.  &lt;br /&gt;
&lt;br /&gt;
These projects make it possible to experiment with biological-data acquisition tools and video game design. &lt;br /&gt;
&lt;br /&gt;
Each project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective recognition in multimedia contexts&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive multimedia application (advertisement, e-learning, recommendation system) able to capture your emotional state (interest, excitement, anger, joy) while you are watching images, listening to sounds, etc. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to give feedback on the quality of multimedia content (e.g., the effectiveness of an advertisement, the enjoyment of a movie, ...). &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the multimedia application.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the multimedia experience.&lt;br /&gt;
* Off-line classification of the data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the multimedia application will provide contents according to your enjoyment.  &lt;br /&gt;
&lt;br /&gt;
These projects make it possible to experiment with biological-data acquisition tools and multimedia application design. &lt;br /&gt;
&lt;br /&gt;
The project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=MultimediaAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective robotics&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop a rehabilitation robotic game able to capture your emotional state (interest, excitement, anger, joy, stress) while you interact with the robot. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to adapt the therapy (delivered through the game) according to the patient's needs. We believe the quality of the therapy is related to the subject's emotional state. The long-term goal is to keep the user in a specific emotional state in order to maximize the efficacy of the therapy. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the robotic game on the available robot.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the interaction with the robot.&lt;br /&gt;
* Off-line classification of the data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the therapy will be adapted to the patient's needs.  &lt;br /&gt;
&lt;br /&gt;
These projects make it possible to experiment with biological-data acquisition tools, robots, and video game design. &lt;br /&gt;
&lt;br /&gt;
The project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=SimoAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Driving companions&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an application able to capture your emotional state (stress, attention level, ...) while you drive a standard car. The application will measure the driver's stress level by analyzing his biological signals, which mirror his physiological state, and could be used to give feedback to the driver in dangerous situations.&lt;br /&gt;
Project phases: &lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors while driving in different conditions (city, highway, country roads, ...)&lt;br /&gt;
* Off-line classification of the data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the car will give audio/visual feedback to the user, letting him know his physiological state.&lt;br /&gt;
&lt;br /&gt;
These projects make it possible to experiment with biological-data acquisition tools. &lt;br /&gt;
&lt;br /&gt;
The project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=CarAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Emotion from interaction&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to detect emotional states, such as stress or boredom, from the interaction with the computer via mouse and keyboard ([http://airwiki.elet.polimi.it/mediawiki/index.php/Emotion_from_Interaction Emotion from Interaction]). A library acquiring data from these devices has already been developed. Data have to be acquired in different situations and analyzed with neural networks or other classification tools already implemented.&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Agents, Multiagent Systems, Agencies ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== BioSignal Analysis ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Human-computer interaction via voice recognition system&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=We want to develop a system that allows voice interaction between the user and the wheelchair.&lt;br /&gt;
This project consists in implementing one of the solutions proposed in the literature and extending the LURCH software to include this kind of interface. &lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
* Sphinx project [http://cmusphinx.org/]&lt;br /&gt;
&lt;br /&gt;
|start= Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=2.5-10&lt;br /&gt;
|image=LURCH_wheelchair.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
===== Brain-Computer Interface =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Development of an existing genetic algorithm for ERP-based BCIs&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=Different [http://en.wikipedia.org/wiki/Event-related_potential event-related potentials] (ERPs) are used in [[Brain-Computer Interface|BCIs]], e.g., [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300] and error potentials.&lt;br /&gt;
A [http://en.wikipedia.org/wiki/Genetic_algorithm genetic algorithm] (GA) for ERP feature extraction has been developed at the Airlab.  The GA has been shown to work, but there are several directions that can be explored to further develop the algorithm and expand its field of application.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, Matlab&lt;br /&gt;
:Good programming skills required&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
:B. Dal Seno, M. Matteucci, L. Mainardi. ''A Genetic Algorithm for Automatic Feature Extraction in P300 Detection'' [http://ieeexplore.ieee.org/search/srchabstract.jsp?arnumber=4634243&amp;amp;isnumber=4633757&amp;amp;punumber=4625775]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=Ga-scheme.png}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Driving an autonomous wheelchair with a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=This project pulls together different Airlab projects with the aim of driving an autonomous wheelchair ([[LURCH - The autonomous wheelchair|LURCH]]) with a [[Brain-Computer Interface|BCI]], through the development of key software modules.  Depending on the effort the student is willing to put into it, the project can grow into a full experimental thesis.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, C, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
:Linux&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: R. Blatt et al. ''Brain Control of a Smart Wheelchair'' [http://www.booksonline.iospress.com/Content/View.aspx?piid=9401]&lt;br /&gt;
|start=November 2008&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=LURCH_wheelchair.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Online automatic tuning of the number of repetitions in a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=In a [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300]-based [[Brain-Computer_Interface|BCI]], (visual) stimuli are presented to the user, and the user's intention is recognized when a P300 potential is detected in response to the desired stimulus.  In order to improve accuracy, many stimulation rounds are usually performed before making a decision.  The exact number of repetitions depends on the user and on the quality of the classifier, but it is usually fixed a priori.  The aim of this project is to adapt the number of repetitions to changing conditions, so as to achieve the maximum accuracy in the minimum time.&lt;br /&gt;
Depending on the effort the student is willing to put into it, the project can grow into a full experimental thesis.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: E. Donchin, K.M. Spencer, R. Wijesinghe. ''The Mental Prosthesis: Assessing the Speed of a P300-Based Brain-Computer Interface'' [http://www.cs.cmu.edu/~tanja/BCI/P300Speed_2000.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=B_p300_speller.jpg}}&lt;br /&gt;
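One simple way to realize the adaptive-repetitions idea above is sequential evidence accumulation: keep summing per-stimulus classifier scores over repetitions and stop as soon as the leading stimulus is sufficiently ahead of the runner-up. The sketch below is a toy simulation under an assumed Gaussian score model; `margin`, the noise level and all other constants are illustrative, not part of the project.

```python
import random

random.seed(1)

# Hypothetical set-up: 6 stimuli; each repetition yields one noisy classifier
# score per stimulus, higher on average for the attended (target) stimulus.
TARGET = 2
N_STIMULI = 6

def classifier_score(stimulus):
    base = 1.0 if stimulus == TARGET else 0.0
    return base + random.gauss(0, 0.8)  # noisy single-trial evidence

def adaptive_decision(margin=3.0, max_reps=15):
    """Accumulate evidence; stop early once the best stimulus leads the
    runner-up by `margin` (an assumed confidence threshold)."""
    totals = [0.0] * N_STIMULI
    for rep in range(1, max_reps + 1):
        for s in range(N_STIMULI):
            totals[s] += classifier_score(s)
        ranked = sorted(range(N_STIMULI), key=lambda s: totals[s], reverse=True)
        if totals[ranked[0]] - totals[ranked[1]] >= margin:
            return ranked[0], rep          # confident: stop early
    return ranked[0], max_reps             # fall back to the fixed maximum

choice, reps_used = adaptive_decision()
print(choice, reps_used)
```

Easy selections terminate in a few rounds while noisy ones use the full budget, which is exactly the accuracy-versus-time trade-off the project targets.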
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reproduction of an algorithm for the recognition of error potentials&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=Error potentials (ErrPs) are [http://en.wikipedia.org/wiki/Event-related_potential event-related potentials] present in the EEG (electroencephalogram) when a subject makes a mistake or when the machine a subject is interacting with behaves in an unexpected way.  They could be used in the [[Brain-Computer Interface|BCI]] field to improve the performance of a BCI by automatically detecting classification errors.&lt;br /&gt;
The project aims at reproducing algorithms for ErrP detection from the literature.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
:P.W. Ferrez, J. Millán. ''You Are Wrong! Automatic Detection of Interaction Errors from Brain Waves'' [ftp://ftp.idiap.ch/pub/reports/2005/ferrez_2005_ijcai.pdf]&lt;br /&gt;
:G. Schalk et al. ''EEG-based communication: presence of an error potential'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;issn=13882457&amp;amp;volume=111&amp;amp;issue=12&amp;amp;firstpage=2138&amp;amp;form=html]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-15&lt;br /&gt;
|image=Bci_arch.png}}&lt;br /&gt;
&lt;br /&gt;
==== Computer Vision and Image Analysis ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Environment Monitoring&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The goal of this project is to develop a video surveillance system to track vehicles or people in 3D. &lt;br /&gt;
The idea is to use one or more calibrated cameras to estimate the positions and the trajectories of the moving objects in the scene. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library&lt;br /&gt;
* Linux OS&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
* Probabilistic robotics/IMAD&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by extending the algorithm to generic outdoor environments.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=10-15&lt;br /&gt;
|image=Danch4.png &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Visual Merchandising&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The goal of this project is to develop algorithms to count the number of products on the shelves of a market.&lt;br /&gt;
The idea is to use a calibrated camera to recognize the shelves, estimate the scale and improve the image quality. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
|start= As soon as possible&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=VisualM.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Analysis of patch recognition algorithms&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=Extracting distinctive features from images is very important in computer vision applications.&lt;br /&gt;
These features can be used in algorithms for tasks like matching different views of an object or scene (e.g. for stereo vision) and object recognition.&lt;br /&gt;
The aim of this work is to integrate the existing solutions proposed in the literature into an existing framework.&lt;br /&gt;
&lt;br /&gt;
Skills&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
*Oxford website [http://www.robots.ox.ac.uk/~vgg/research/affine/index.html]&lt;br /&gt;
*Hess website [http://web.engr.oregonstate.edu/~hess/index.html]&lt;br /&gt;
*Feature FAST [http://mi.eng.cam.ac.uk/~er258/work/fast.html]&lt;br /&gt;
&lt;br /&gt;
|start= Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=Object.jpg&lt;br /&gt;
}}&lt;br /&gt;
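As background for the matching tasks mentioned above, here is a minimal illustration of patch comparison by normalized cross-correlation (NCC), one of the simplest similarity measures used alongside the detectors and descriptors linked in the references; the 1-D patches are toy data.

```python
import math

# Tiny illustration of patch matching by normalized cross-correlation (NCC).
def ncc(a, b):
    """Normalized cross-correlation of two equal-size patches (lists of floats).
    Returns a value in [-1, 1]; 1 means identical up to brightness/contrast."""
    ma = sum(a) / len(a)
    mb = sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da and db else 0.0

template = [10, 20, 30, 20, 10]
candidates = [
    [12, 22, 31, 19, 11],   # same pattern, slightly noisy
    [30, 20, 10, 20, 30],   # inverted pattern
    [5, 5, 5, 5, 6],        # nearly flat
]
# Pick the candidate most correlated with the template.
best = max(range(len(candidates)), key=lambda i: ncc(template, candidates[i]))
print(best)
```

Because NCC subtracts the mean and divides by the standard deviation, it is invariant to affine brightness changes, which is why it is a common baseline before moving to SIFT-style descriptors.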
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Catadioptric MonoSLAM &lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The goal of this work is to investigate SLAM solutions based on a catadioptric camera, integrating the solutions presented in the literature into an existing framework.&lt;br /&gt;
Improvements could be the basis for a thesis.&lt;br /&gt;
&lt;br /&gt;
Skills&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
*Visual SLAM by Single Catadioptric Stereo [http://cv2.kaist.ac.kr/VisualSLAMBySingleCameraCatadioptricStereo.pdf]&lt;br /&gt;
*Catadioptric reconstruction [http://citeseer.ist.psu.edu/cache/papers/cs/23657/http:zSzzSzwww.cis.upenn.eduzSz~cgeyerzSzsfm_tr.pdf/geyer01structure.pdf]&lt;br /&gt;
&lt;br /&gt;
|start= Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=Photo.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Trinocular Vision System (SUGR)&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=A Trinocular Vision System is a device composed of three cameras that allows measuring 3D data (in this case, segments) directly from images.&lt;br /&gt;
The aim of this project/short thesis is to implement a trinocular algorithm based on SUGR, a library for Uncertain Projective Geometry.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Skills&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|start= As soon as possible&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=Trinoex.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=GIFT and features extraction and description&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The idea is to improve and optimize the solution proposed by Campari et al. in their paper, who estimate invariant descriptors using geodesic feature descriptors based on color information.&lt;br /&gt;
&lt;br /&gt;
Skills&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|start= Anytime&lt;br /&gt;
|number=1-3&lt;br /&gt;
|cfu=10-20&lt;br /&gt;
|image=Palla_GIFT.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== E-Science ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Machine Learning ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reinforcement Learning Competition&lt;br /&gt;
|tutor=Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=This project has the goal of participating in (and possibly winning ;)) the 2009 Reinforcement Learning competition. To get an idea of what participating in such a competition means, you can have a look at the website of the [http://rl-competition.org/content/view/51/79/ 2008 RL competition].&lt;br /&gt;
The problems that will be proposed are still unknown. As soon as the domains are published, the work will start by analyzing their main characteristics; then we will identify which RL algorithms are best suited to solving such problems. After an implementation phase, the project will require a long experimental period to tune the parameters of the learning algorithms in order to improve the performance as much as possible.&lt;br /&gt;
|start=January, 2009&lt;br /&gt;
|number=2-4&lt;br /&gt;
|cfu=10-20&lt;br /&gt;
|image=keepaway.gif}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Learning API for TORCS&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, where competitors were asked to develop a controller using their preferred machine learning techniques. The goal of this project is to extend the existing C++ API (available [http://cig.dei.polimi.it/ here]) to simplify the development of controllers using a learning framework.&lt;br /&gt;
Such an extension can be partially developed by porting an existing Java API for TORCS that already provides many functionalities for machine learning approaches.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=TORCS.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= EyeBot&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it), Alessandro Giusti (giusti-AT-elet-DOT-polimi-DOT-it), and Pierluigi Taddei (taddei-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, where competitors were asked to develop a controller using their preferred machine learning techniques. So far, the controllers developed for TORCS have used as input only information extracted directly from the state of the game. The goal of this project is to extend the existing controller API (see [http://cig.dei.polimi.it/ here]) to use visual information (e.g. screenshots of the game) as input to the controllers. A successful project will include both the development of the API and some basic image preprocessing to extract information from the images.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 20&lt;br /&gt;
|image=TORCS2.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= SmarTrack&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The generation of customized game content for each player is an attractive direction for improving the game experience in next-generation computer games. In this scenario, Machine Learning could play an important role in automatically providing such customized game content.&lt;br /&gt;
The goal of this project is to apply machine learning techniques to the generation of customized tracks in&lt;br /&gt;
[http://torcs.sourceforge.net/ TORCS], a state-of-the-art open source racing simulator. The project includes different activities: the automatic generation of tracks, the selection of relevant features to characterize a track, and the analysis of an interest measure.  &lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 20&lt;br /&gt;
|image=TORCS3.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= TORCS competition&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, where competitors were asked to develop a controller using their preferred machine learning techniques.&lt;br /&gt;
The goal of this project is to apply any machine learning technique to develop a successful controller following the competition rules (available [http://cig.dei.polimi.it/?page_id=67 here]).&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 &lt;br /&gt;
|cfu=5&lt;br /&gt;
|image=TORCS.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Ontologies and Semantic Web ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== Philosophy of Artificial Intelligence ====--&amp;gt;&lt;br /&gt;
==== Robotics ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robot games&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive game with robots using commercial devices such as the Wii Remote (see the [http://airwiki.elet.polimi.it/mediawiki/index.php/Robogames Robogames page])  &lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Design and implementation of the game on one of the available robots&lt;br /&gt;
* Design of the game and a new suitable robot&lt;br /&gt;
* Implementation/setting of a suitable robot&lt;br /&gt;
* Evaluation of the game with users (in collaboration with [http://www.elet.polimi.it/people/garzotto Franca Garzotto])&lt;br /&gt;
&lt;br /&gt;
These projects make it possible to experiment with real mobile robots and real interaction devices.&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by producing a new game and robot.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-12.5&lt;br /&gt;
|image=Robowii_robot.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robocup: soccer robots&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it), Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to finalize the team of robots that will participate in the RoboCup world championship in Graz next summer (see the [http://www.robocup.org Robocup page] and the [http://robocup.elet.polimi.it MRT Team page])  &lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Implementation of mechanical and electronic parts of the robots for ball management and kicking&lt;br /&gt;
* Design of robot behaviors (fuzzy systems)&lt;br /&gt;
* Coordination of robots&lt;br /&gt;
* New sensors&lt;br /&gt;
&lt;br /&gt;
These projects make it possible to experiment with real mobile robots. Participation in the championships is a unique experience (2000 people, with 800 robots playing all sorts of games...)&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by facing different problems in depth.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-12.5&lt;br /&gt;
|image=RIeRO.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Calibration of IMU-camera system&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=This work addresses the problem of calibrating a system composed of an Xsens &lt;br /&gt;
Inertial Measurement Unit and a Fire-i camera. The project will focus on &lt;br /&gt;
estimating both the unknown rotation between the two devices and the &lt;br /&gt;
extrinsic/intrinsic parameters of the camera. This calibration allows the system &lt;br /&gt;
to be used for SLAM or robotics applications, such as a wearable device for autonomous &lt;br /&gt;
navigation or augmented reality. &lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab/C++&lt;br /&gt;
&lt;br /&gt;
;Links&lt;br /&gt;
:Matlab Toolbox for mutual calibration [http://www.deec.uc.pt/~jlobo/InerVis_WebIndex/InerVis_Toolbox.html]&lt;br /&gt;
:List of publications [http://www.deec.uc.pt/~jlobo/InerVis_WebIndex/InerVis_Pubs.php]&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=Imu_cam_big_sphere.gif}}&lt;br /&gt;
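To illustrate the rotation part of the calibration in a stripped-down form, the toy example below recovers an unknown planar rotation between two sensors from paired direction measurements via a closed-form least-squares angle. The real IMU-camera problem is 3-D and is handled by toolboxes such as the InerVis one linked above; all data here are simulated assumptions.

```python
import math
import random

random.seed(2)

# Toy 2-D analogue of the IMU-camera rotation calibration: both sensors
# observe the same directions (e.g. gravity), expressed in their own frames.
TRUE_ANGLE = 0.35  # unknown IMU-to-camera rotation (radians), to be recovered

def rotate(v, theta):
    c, s = math.cos(theta), math.sin(theta)
    return (c * v[0] - s * v[1], s * v[0] + c * v[1])

# Simulated paired unit-direction measurements with small angular noise.
imu_dirs, cam_dirs = [], []
for _ in range(50):
    phi = random.uniform(0, 2 * math.pi)
    d = (math.cos(phi), math.sin(phi))
    imu_dirs.append(d)
    cam_dirs.append(rotate(d, TRUE_ANGLE + random.gauss(0, 0.01)))

# Closed-form least-squares estimate: theta* = atan2(sum of cross products,
# sum of dot products) over all measurement pairs.
dots = sum(a[0] * b[0] + a[1] * b[1] for a, b in zip(imu_dirs, cam_dirs))
crosses = sum(a[0] * b[1] - a[1] * b[0] for a, b in zip(imu_dirs, cam_dirs))
est = math.atan2(crosses, dots)
print(round(est, 3))
```

The 3-D version of this alignment (Wahba's problem) has an analogous closed-form solution via SVD, which is what mutual-calibration toolboxes typically build on.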
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=MonoSLAM system implementation&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The aim of this proposal is to investigate the different single-camera SLAM solutions proposed in the literature.&lt;br /&gt;
After an in-depth bibliographic search, the work will focus on implementing one of these algorithms within an existing framework and, for the thesis option only, on investigating possible improvements. &lt;br /&gt;
&lt;br /&gt;
The algorithms of interest are based on [http://www-personal.acfr.usyd.edu.au/tbailey/software/slam_simulations.htm]:&lt;br /&gt;
*Extended Kalman Filter [http://www.doc.ic.ac.uk/~ajd/publications.html]&lt;br /&gt;
*Unscented Kalman Filter [http://www.cs.unc.edu/~welch/kalman/media/pdf/Julier1997_SPIE_KF.pdf]&lt;br /&gt;
*FastSLAM [http://robots.stanford.edu/papers.html]&lt;br /&gt;
*GraphSLAM [http://mi.eng.cam.ac.uk/~ee231/]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab/C++&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=KC_jc_third.jpg}}&lt;br /&gt;
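The EKF variant listed above follows the standard predict/update cycle; the toy 1-D (linear) Kalman filter below shows that cycle on simulated data. The state dimension, noise variances and motion model are illustrative stand-ins for the full camera-plus-landmark state of MonoSLAM.

```python
import random

random.seed(3)

# Minimal 1-D Kalman filter illustrating the predict/update cycle that
# EKF-based MonoSLAM runs on a much larger camera+landmark state.
Q, R = 0.05, 0.4   # process and measurement noise variances (assumed)

def predict(x, P, u):
    # Motion step: apply control u, inflate uncertainty by process noise.
    return x + u, P + Q

def update(x, P, z):
    # Measurement step: blend prediction and observation via the Kalman gain.
    K = P / (P + R)
    return x + K * (z - x), (1 - K) * P

x, P = 0.0, 1.0        # initial estimate and variance
true = 0.0             # simulated ground-truth position
errs = []
for _ in range(100):
    u = 0.1
    true += u + random.gauss(0, Q ** 0.5)   # true motion with process noise
    z = true + random.gauss(0, R ** 0.5)    # noisy position measurement
    x, P = predict(x, P, u)
    x, P = update(x, P, z)
    errs.append(abs(x - true))

print(round(sum(errs[-20:]) / 20, 3))
```

After a few steps the variance P settles to a steady state well below both the prior and the measurement noise, which is the filtering behaviour the listed SLAM variants generalize to joint camera/landmark states.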
&lt;br /&gt;
&amp;lt;!--==== Soft Computing ====--&amp;gt;&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=File:Palla_GIFT.jpg&amp;diff=4493</id>
		<title>File:Palla GIFT.jpg</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=File:Palla_GIFT.jpg&amp;diff=4493"/>
				<updated>2008-10-17T10:27:19Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Master_Level_Course_Projects&amp;diff=4492</id>
		<title>Master Level Course Projects</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Master_Level_Course_Projects&amp;diff=4492"/>
				<updated>2008-10-17T10:20:30Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you can find a list of project proposals for the courses of &amp;quot;Laboratorio di Intelligenza Artificiale e Robotica&amp;quot; (5 CFU for each student) and &amp;quot;Soft Computing&amp;quot; (1 CFU for each student)&lt;br /&gt;
&lt;br /&gt;
==== Affective Computing ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective VideoGames&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive video game (car game, shoot 'em up, strategy game...) able to adapt its behaviour in order to maximize your enjoyment. The game will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system will be able to adjust some parameters (e.g. difficulty of car game circuits, opponents' strength...) in order to keep your engagement constant: &amp;quot;In your flow zone!&amp;quot;. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the game (it is possible to start from an available open source game)&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the playing experience.&lt;br /&gt;
* Off-line classification of the data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the game reacts to the user's emotional state by changing its behaviour.  &lt;br /&gt;
&lt;br /&gt;
These projects make it possible to experiment with biological-data acquisition tools and videogame design. &lt;br /&gt;
&lt;br /&gt;
Each project consists of the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective recognition in multimedia contexts&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive multimedia application (advertisement, e-learning, recommendation system) able to capture your emotional state (interest, excitement, anger, joy) while watching images, sounds, etc. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to give feedback on the quality of multimedia content (e.g. effectiveness of an advertisement, enjoyment of a movie...). &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the multimedia application.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the multimedia experience.&lt;br /&gt;
* Off-line classification of the data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the multimedia application will provide contents according to your enjoyment.  &lt;br /&gt;
&lt;br /&gt;
These projects make it possible to experiment with biological-data acquisition tools and multimedia application design. &lt;br /&gt;
&lt;br /&gt;
The project consists of the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=MultimediaAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective robotics&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop a rehabilitation robotic game able to capture your emotional state (interest, excitement, anger, joy, stress) while interacting with the robot. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to adapt the therapy (delivered by the game) according to the patient's needs. We believe the quality of the therapy is related to the subject's emotional state. The long-term goal is to keep the user in a specific emotional state in order to maximize the efficacy of the therapy. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the robotic game on the available robot.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the interaction with the robot.&lt;br /&gt;
* Off-line classification of the data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the therapy will be adapted to the patient's needs.  &lt;br /&gt;
&lt;br /&gt;
These projects make it possible to experiment with biological-data acquisition tools, robots and videogame design. &lt;br /&gt;
&lt;br /&gt;
The project consists of the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=SimoAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Driving companions&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an application able to capture your emotional state (stress, attention level...) while driving standard cars. The application will measure the driver's stress level by analyzing his biological signals, which mirror his physiological state, and could be used to give feedback to the driver in dangerous situations.&lt;br /&gt;
Project phases: &lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors while driving in different conditions (city, highway, country...)&lt;br /&gt;
* Off-line classification of the data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the car will give audio/visual feedback to the user, letting him know his physiological state.&lt;br /&gt;
&lt;br /&gt;
These projects make it possible to experiment with biological-data acquisition tools. &lt;br /&gt;
&lt;br /&gt;
The project consists of the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=CarAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Emotion from interaction&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to detect emotional states, such as stress or boredom, from the interaction with the computer via mouse and keyboard ([http://airwiki.elet.polimi.it/mediawiki/index.php/Emotion_from_Interaction Emotion from Interaction]). A library collecting data from these devices has already been developed. Data have to be acquired in different situations and analyzed with neural networks or other classification tools already implemented.&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Agents, Multiagent Systems, Agencies ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== BioSignal Analysis ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Human-computer interaction via voice recognition system&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=We want to develop a system allowing voice interaction between the user and the wheelchair.&lt;br /&gt;
This project consists in developing one of the solutions proposed in the literature and extending the LURCH software to include this kind of interface. &lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
* Sphinx project [http://cmusphinx.org/]&lt;br /&gt;
&lt;br /&gt;
|start= Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=2.5-10&lt;br /&gt;
|image=LURCH_wheelchair.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
===== Brain-Computer Interface =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Development of an existing genetic algorithm for ERP-based BCIs&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=Different [http://en.wikipedia.org/wiki/Event-related_potential event-related potentials] (ERPs) are used in [[Brain-Computer Interface|BCIs]], e.g., [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300] and error potentials.&lt;br /&gt;
A [http://en.wikipedia.org/wiki/Genetic_algorithm genetic algorithm] (GA) for ERP feature extraction has been developed at the Airlab.  The GA has been shown to work, but there are different directions that can be explored to further develop this algorithm and expand its application field.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, Matlab&lt;br /&gt;
:Good programming skills required&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
:B. Dal Seno, M. Matteucci, L. Mainardi. ''A Genetic Algorithm for Automatic Feature Extraction in P300 Detection'' [http://ieeexplore.ieee.org/search/srchabstract.jsp?arnumber=4634243&amp;amp;isnumber=4633757&amp;amp;punumber=4625775]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=Ga-scheme.png}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Driving an autonomous wheelchair with a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=This project pulls together different Airlab projects with the aim of driving an autonomous wheelchair ([[LURCH - The autonomous wheelchair|LURCH]]) with a [[Brain-Computer Interface|BCI]], through the development of key software modules.  Depending on the effort the student is willing to put into it, the project can grow into a full experimental thesis.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, C, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
:Linux&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: R. Blatt et al. ''Brain Control of a Smart Wheelchair'' [http://www.booksonline.iospress.com/Content/View.aspx?piid=9401]&lt;br /&gt;
|start=November 2008&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=LURCH_wheelchair.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Online automatic tuning of the number of repetitions in a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=In a [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300]-based [[Brain-Computer_Interface|BCI]], (visual) stimuli are presented to the user, and the user's intention is recognized when a P300 potential is detected in response to the desired stimulus.  In order to improve accuracy, many stimulation rounds are usually performed before making a decision.  The exact number of repetitions depends on the user and on the quality of the classifier, but it is usually fixed a priori.  The aim of this project is to adapt the number of repetitions to changing conditions, so as to achieve maximum accuracy in minimum time.&lt;br /&gt;
Depending on the effort the student is willing to put into it, the project can grow to a full experimental thesis.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: E. Donchin, K.M. Spencer, R. Wijesinghe. ''The Mental Prosthesis: Assessing the Speed of a P300-Based Brain-Computer Interface'' [http://www.cs.cmu.edu/~tanja/BCI/P300Speed_2000.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=B_p300_speller.jpg}}&lt;br /&gt;
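As an illustration of the idea, here is a hedged sketch of a dynamic stopping rule (hypothetical names, scores and threshold; not BCI2000 code): per-target classifier scores are accumulated over stimulation rounds, and stimulation stops as soon as the leading candidate is sufficiently far ahead of the runner-up, instead of after a fixed number of repetitions.&lt;br /&gt;

```python
def decide(score_stream, n_targets, margin=3.0, max_reps=15):
    """Accumulate per-target classifier scores over stimulation rounds.

    score_stream yields, for each repetition, a list of n_targets scores
    (higher = more P300-like).  Stop early when the leader's cumulative
    score exceeds the runner-up's by `margin`, or after max_reps rounds.
    Returns (chosen_target, repetitions_used).
    """
    totals = [0.0] * n_targets
    for rep, scores in enumerate(score_stream, start=1):
        for t in range(n_targets):
            totals[t] += scores[t]
        ranked = sorted(totals, reverse=True)
        if ranked[0] - ranked[1] >= margin or rep == max_reps:
            return totals.index(max(totals)), rep
    return totals.index(max(totals)), rep

# Toy run: target 2 gets a consistent score advantage every round,
# so the rule stops after only a few repetitions.
rounds = [[0.1, 0.2, 1.5, 0.0]] * 15
choice, used = decide(iter(rounds), n_targets=4)
```

In practice the margin and the score model would be calibrated on each user's own training data, which is exactly the adaptation problem this project addresses.&lt;br /&gt;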
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reproduction of an algorithm for the recognition of error potentials&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=Error potentials (ErrPs) are [http://en.wikipedia.org/wiki/Event-related_potential event-related potentials] present in the EEG (electroencephalogram) when a subject makes a mistake or when the machine the subject is interacting with works in an unexpected way.  They could be used in the [[Brain-Computer Interface|BCI]] field to improve the performance of a BCI by automatically detecting classification errors.&lt;br /&gt;
The project aims at reproducing algorithms for ErrP detection from the literature.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
:P.W. Ferrez, J. Millán. ''You Are Wrong! Automatic Detection of Interaction Errors from Brain Waves'' [ftp://ftp.idiap.ch/pub/reports/2005/ferrez_2005_ijcai.pdf]&lt;br /&gt;
:G. Schalk et al. ''EEG-based communication: presence of an error potential'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;issn=13882457&amp;amp;volume=111&amp;amp;issue=12&amp;amp;firstpage=2138&amp;amp;form=html]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-15&lt;br /&gt;
|image=Bci_arch.png}}&lt;br /&gt;
&lt;br /&gt;
==== Computer Vision and Image Analysis ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Environment Monitoring&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The goal of this project is to develop a video surveillance system to track vehicles or people in 3D. &lt;br /&gt;
The idea is to use one or more calibrated cameras to estimate the positions and trajectories of the moving objects in the scene. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library&lt;br /&gt;
* Linux o.s.&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
* Probabilistic robotics/IMAD&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis extending the algorithm for a generic outdoor environment.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=10-15&lt;br /&gt;
|image=Danch4.png &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Visual Merchandising&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The goal of this project is to develop algorithms to count the number of products on the shelves of a supermarket.&lt;br /&gt;
The idea is to use a calibrated camera to recognize the shelves, estimate the scale, and improve the image quality. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
|start= As soon as possible&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=VisualM.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Analysis of patch recognition algorithms&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=Extracting distinctive features from images is very important in computer vision applications.&lt;br /&gt;
Such features can be used in algorithms for tasks like matching different views of an object or scene (e.g. for stereo vision) and object recognition.&lt;br /&gt;
The aim of this work is to integrate into an existing framework the solutions proposed in the literature.&lt;br /&gt;
&lt;br /&gt;
Skills&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
*Oxford website [http://www.robots.ox.ac.uk/~vgg/research/affine/index.html]&lt;br /&gt;
*Hess website [http://web.engr.oregonstate.edu/~hess/index.html]&lt;br /&gt;
*Feature FAST [http://mi.eng.cam.ac.uk/~er258/work/fast.html]&lt;br /&gt;
&lt;br /&gt;
|start= Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=Object.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Catadioptric MonoSLAM &lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The goal of this work is to investigate SLAM solutions based on a catadioptric camera, integrating the solutions presented in the literature into an existing framework.&lt;br /&gt;
Improvements could be the basis for a thesis.&lt;br /&gt;
&lt;br /&gt;
Skills&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
*Visual SLAM by Single Catadioptric Stereo [http://cv2.kaist.ac.kr/VisualSLAMBySingleCameraCatadioptricStereo.pdf]&lt;br /&gt;
*Catadioptric reconstruction [http://citeseer.ist.psu.edu/cache/papers/cs/23657/http:zSzzSzwww.cis.upenn.eduzSz~cgeyerzSzsfm_tr.pdf/geyer01structure.pdf]&lt;br /&gt;
&lt;br /&gt;
|start= Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=Photo.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Trinocular Vision System (SUGR)&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=A Trinocular Vision System is a device composed of three cameras that allows measuring 3D data (in this case segments) directly from images.&lt;br /&gt;
The aim of this project/thesis is to implement a trinocular algorithm based on SUGR, a library for Uncertain Projective Geometry.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Skills&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|start= As soon as possible&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=Trinoex.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== E-Science ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Machine Learning ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reinforcement Learning Competition&lt;br /&gt;
|tutor=Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=This project has the goal of participating in (and possibly winning ;)) the 2009 Reinforcement Learning competition. To get an idea of what participating in such a competition means, you can have a look at the website of the [http://rl-competition.org/content/view/51/79/ 2008 RL competition].&lt;br /&gt;
The problems that will be proposed are still unknown. As soon as the domains are published, the work will start by analyzing their main characteristics; then we will identify which RL algorithms are best suited for solving such problems. After an implementation phase, the project will require a long experimental period to tune the parameters of the learning algorithms in order to improve performance as much as possible.&lt;br /&gt;
|start=January, 2009&lt;br /&gt;
|number=2-4&lt;br /&gt;
|cfu=10-20&lt;br /&gt;
|image=keepaway.gif}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Learning API for TORCS&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, where competitors were asked to develop a controller using their preferred machine learning techniques. The goal of this project is to extend the existing C++ API (available [http://cig.dei.polimi.it/ here]) to simplify the development of controllers within a learning framework.&lt;br /&gt;
Such an extension can be partially developed by porting an existing Java API for TORCS that already provides a lot of functionality for machine learning approaches.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=TORCS.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= EyeBot&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it), Alessandro Giusti (giusti-AT-elet-DOT-polimi-DOT-it), and Pierluigi Taddei (taddei-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, where competitors were asked to develop a controller using their preferred machine learning techniques. So far, the controllers developed for TORCS have used as input only information extracted directly from the state of the game. The goal of this project is to extend the existing controller API (see [http://cig.dei.polimi.it/ here]) to use visual information (e.g. screenshots of the game) as input to the controllers. A successful project will include both the development of the API and some basic image preprocessing to extract information from the images.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 20&lt;br /&gt;
|image=TORCS2.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= SmarTrack&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The generation of customized game content for each player is an attractive direction for improving the game experience in next-generation computer games. In this scenario, machine learning could play an important role in automatically providing such customized game content.&lt;br /&gt;
The goal of this project is to apply machine learning techniques to the generation of customized tracks in&lt;br /&gt;
[http://torcs.sourceforge.net/ TORCS], a state-of-the-art open source racing simulator. The project includes different activities: the automatic generation of tracks, the selection of relevant features to characterize a track, and the analysis of an interest measure.  &lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 20&lt;br /&gt;
|image=TORCS3.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= TORCS competition&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, where competitors were asked to develop a controller using their preferred machine learning techniques.&lt;br /&gt;
The goal of this project is to apply any machine learning technique to develop a successful controller following the competition rules (available [http://cig.dei.polimi.it/?page_id=67 here]).&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 &lt;br /&gt;
|cfu=5&lt;br /&gt;
|image=TORCS.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Ontologies and Semantic Web ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== Philosophy of Artificial Intelligence ====--&amp;gt;&lt;br /&gt;
==== Robotics ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robot games&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive game with robots using commercial devices such as the Wii Remote (see the [http://airwiki.elet.polimi.it/mediawiki/index.php/Robogames Robogames page]).  &lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Design and implementation of the game on one of the available robots&lt;br /&gt;
* Design of the game and a new suitable robot&lt;br /&gt;
* Implementation/setting of a suitable robot&lt;br /&gt;
* Evaluation of the game with users (in collaboration with [http://www.elet.polimi.it/people/garzotto Franca Garzotto])&lt;br /&gt;
&lt;br /&gt;
These projects allow experimenting with real mobile robots and real interaction devices.&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by producing a new game and robot.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-12.5&lt;br /&gt;
|image=Robowii_robot.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robocup: soccer robots&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it), Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to finalize the team of robots that will participate in the RoboCup world championship in Graz next summer (see the [http://www.robocup.org Robocup page] and the [http://robocup.elet.polimi.it MRT Team page]).  &lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Implementation of mechanical and electronic parts of the robots for ball handling and kicking&lt;br /&gt;
* Design of robot behaviors (fuzzy systems)&lt;br /&gt;
* Coordination of robots&lt;br /&gt;
* New sensors&lt;br /&gt;
&lt;br /&gt;
These projects allow experimenting with real mobile robots. Participation in the championships is a unique experience (2000 people, with 800 robots playing all sorts of games...)&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by tackling the various problems in depth.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-12.5&lt;br /&gt;
|image=RIeRO.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Calibration of IMU-camera system&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=This work is about the problem of calibrating a system composed of an XSense &lt;br /&gt;
Inertial Measurement Unit and a Fire-i camera. The project will focus on &lt;br /&gt;
the problem of estimating both the unknown rotation between the two devices and the &lt;br /&gt;
extrinsic/intrinsic parameters of the camera. This algorithm makes it possible to use the &lt;br /&gt;
system for SLAM or other robotics applications, such as a wearable device for autonomous &lt;br /&gt;
navigation or augmented reality. &lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab/C++&lt;br /&gt;
&lt;br /&gt;
;Links&lt;br /&gt;
:Matlab Toolbox for mutual calibration [http://www.deec.uc.pt/~jlobo/InerVis_WebIndex/InerVis_Toolbox.html]&lt;br /&gt;
:List of publications [http://www.deec.uc.pt/~jlobo/InerVis_WebIndex/InerVis_Pubs.php]&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=Imu_cam_big_sphere.gif}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=MonoSLAM system implementation&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The aim of this proposal is to investigate the different monocular-camera SLAM solutions proposed in the literature.&lt;br /&gt;
After in-depth bibliographic research, the work will focus on implementing one of these algorithms within an existing framework and, for the thesis option only, on investigating possible improvements. &lt;br /&gt;
&lt;br /&gt;
The algorithms of interest are based on [http://www-personal.acfr.usyd.edu.au/tbailey/software/slam_simulations.htm]:&lt;br /&gt;
*Extended Kalman Filter [http://www.doc.ic.ac.uk/~ajd/publications.html]&lt;br /&gt;
*Unscented Kalman Filter [http://www.cs.unc.edu/~welch/kalman/media/pdf/Julier1997_SPIE_KF.pdf]&lt;br /&gt;
*FastSLAM [http://robots.stanford.edu/papers.html]&lt;br /&gt;
*GraphSLAM [http://mi.eng.cam.ac.uk/~ee231/]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab/C++&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=KC_jc_third.jpg}}&lt;br /&gt;
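To give a flavour of the first algorithm family, here is a minimal scalar Kalman filter predict/update cycle (toy numbers, not a SLAM implementation): EKF-based MonoSLAM runs this same cycle on a joint camera-pose/landmark state vector, with linearized motion and camera-projection models in place of the scalar ones below.&lt;br /&gt;

```python
def kf_step(x, p, u, z, q=0.01, r=0.1):
    """One predict/update cycle of a scalar Kalman filter.

    x, p : current state estimate and its variance
    u    : control input (motion model: x_new = x + u)
    z    : measurement (measurement model: z = x + noise)
    q, r : process and measurement noise variances
    """
    # Predict: propagate the state through the motion model.
    x_pred = x + u
    p_pred = p + q
    # Update: blend prediction and measurement via the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# Toy trajectory: unit moves observed with small measurement noise.
x, p = 0.0, 1.0
for z in [1.02, 2.05, 2.98]:
    x, p = kf_step(x, p, u=1.0, z=z)
```

Note how the variance p shrinks as measurements arrive; in MonoSLAM the analogous covariance matrix also encodes the correlations between the camera and the mapped landmarks.&lt;br /&gt;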
&lt;br /&gt;
&amp;lt;!--==== Soft Computing ====--&amp;gt;&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Master_Level_Course_Projects&amp;diff=4491</id>
		<title>Master Level Course Projects</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Master_Level_Course_Projects&amp;diff=4491"/>
				<updated>2008-10-17T10:16:56Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: /* Computer Vision and Image Analysis */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you can find a list of project proposals for the courses of &amp;quot;Laboratorio di Intelligenza Artificiale e Robotica&amp;quot; (5 CFU for each student) and &amp;quot;Soft Computing&amp;quot; (1 CFU for each student)&lt;br /&gt;
&lt;br /&gt;
==== Affective Computing ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective VideoGames&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive video game (car game, shoot 'em up, strategy game, ...) able to adapt its behaviour in order to maximize your enjoyment. The game will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system will be able to adjust some parameters (e.g. difficulty of car game circuits, opponents' strength, ...) in order to keep your engagement constant: &amp;quot;In your flow zone!&amp;quot;. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the game (it is possible to start from an available open source game)&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the playing experience.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the game reacts to the user's emotional state by changing its behaviour.  &lt;br /&gt;
&lt;br /&gt;
These projects allow experimenting with biological-data acquisition tools and videogame design. &lt;br /&gt;
&lt;br /&gt;
Each project consists of the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective recognition in multimedia contexts&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive multimedia application (advertisement, e-learning, recommendation system) able to capture your emotional state (interest, excitement, anger, joy) while watching images, sounds, etc. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to give feedback on the quality of multimedia content (e.g. goodness of an advertisement, enjoyment of a movie, ...). &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the multimedia application.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the multimedia experience.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the multimedia application will provide contents according to your enjoyment.  &lt;br /&gt;
&lt;br /&gt;
These projects allow experimenting with biological-data acquisition tools and multimedia application design. &lt;br /&gt;
&lt;br /&gt;
The project consists of the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=MultimediaAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective robotics&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop a rehabilitation robotic game able to capture your emotional state (interest, excitement, anger, joy, stress) while interacting with the robot. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to adapt the therapy (executed by the game) according to the patient's needs. We believe the quality of the therapy is related to the subject's emotional state. The long-term goal is to keep the user in a specific emotional state in order to maximize the efficacy of the therapy. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the robotic game on the available robot.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the interaction with the robot.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the therapy will be adapted to the patient's needs.  &lt;br /&gt;
&lt;br /&gt;
These projects allow experimenting with biological-data acquisition tools, robots and videogame design. &lt;br /&gt;
&lt;br /&gt;
The project consists of the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=SimoAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Driving companions&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an application able to capture your emotional state (stress, attention level, ...) while driving standard cars. The application will measure the driver's stress level by analyzing his biological signals, which mirror his physiological state, and could be used to give feedback to the driver in dangerous situations.&lt;br /&gt;
Project phases: &lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors while driving in different conditions (city, highway, country, ...)&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the car will give audio/visual feedback to the user, letting him know his physiological state.&lt;br /&gt;
&lt;br /&gt;
These projects allow experimenting with biological-data acquisition tools, robots and videogame design. &lt;br /&gt;
&lt;br /&gt;
The project consists of the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=CarAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Emotion from interaction&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to detect emotional states, such as stress or boredom, from the interaction with the computer via mouse and keyboard ([http://airwiki.elet.polimi.it/mediawiki/index.php/Emotion_from_Interaction Emotion from Interaction]). A library for acquiring data from these devices has already been developed. Data have to be acquired in different situations and analyzed with neural networks or other classification tools already implemented.&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Agents, Multiagent Systems, Agencies ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== BioSignal Analysis ====&lt;br /&gt;
&lt;br /&gt;
===== Brain-Computer Interface =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Development of an existing genetic algorithm for ERP-based BCIs&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=Different [http://en.wikipedia.org/wiki/Event-related_potential event-related potentials] (ERPs) are used in [[Brain-Computer Interface|BCIs]], e.g., [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300] and error potentials.&lt;br /&gt;
A [http://en.wikipedia.org/wiki/Genetic_algorithm genetic algorithm] (GA) for ERP feature extraction has been developed at the Airlab.  The GA has been shown to work, but there are different directions that can be explored to further develop the algorithm and expand its field of application.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, Matlab&lt;br /&gt;
:Good programming skills required&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
:B. Dal Seno, M. Matteucci, L. Mainardi. ''A Genetic Algorithm for Automatic Feature Extraction in P300 Detection'' [http://ieeexplore.ieee.org/search/srchabstract.jsp?arnumber=4634243&amp;amp;isnumber=4633757&amp;amp;punumber=4625775]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=Ga-scheme.png}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Driving an autonomous wheelchair with a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=This project pulls together different Airlab projects with the aim of driving an autonomous wheelchair ([[LURCH - The autonomous wheelchair|LURCH]]) with a [[Brain-Computer Interface|BCI]], through the development of key software modules.  Depending on the effort the student is willing to put into it, the project can grow into a full experimental thesis.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, C, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
:Linux&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: R. Blatt et al. ''Brain Control of a Smart Wheelchair'' [http://www.booksonline.iospress.com/Content/View.aspx?piid=9401]&lt;br /&gt;
|start=November 2008&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=LURCH_wheelchair.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Online automatic tuning of the number of repetitions in a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=In a [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300]-based [[Brain-Computer_Interface|BCI]], (visual) stimuli are presented to the user, and the user's intention is recognized when a P300 potential is detected in response to the desired stimulus.  In order to improve accuracy, many stimulation rounds are usually performed before making a decision.  The exact number of repetitions depends on the user and on the quality of the classifier, but it is usually fixed a priori.  The aim of this project is to adapt the number of repetitions to changing conditions, so as to achieve maximum accuracy in minimum time.&lt;br /&gt;
Depending on the effort the student is willing to put into it, the project can grow to a full experimental thesis.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: E. Donchin, K.M. Spencer, R. Wijesinghe. ''The Mental Prosthesis: Assessing the Speed of a P300-Based Brain-Computer Interface'' [http://www.cs.cmu.edu/~tanja/BCI/P300Speed_2000.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=B_p300_speller.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reproduction of an algorithm for the recognition of error potentials&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=Error potentials (ErrPs) are [http://en.wikipedia.org/wiki/Event-related_potential event-related potentials] present in the EEG (electroencephalogram) when a subject makes a mistake or when the machine a subject is interacting with works in an unexpected way.  They could be used in the [[Brain-Computer Interface|BCI]] field to improve the performance of a BCI by automatically detecting classification errors.&lt;br /&gt;
The project aims at reproducing algorithms for ErrP detection from the literature.&lt;br /&gt;
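In its very simplest form, an ErrP detector reduces to averaging the EEG in a window time-locked to the feedback event and thresholding it. The sketch below is only a toy illustration of that reduction; the window bounds and threshold are invented, and the cited papers use trained classifiers on much richer features (which is what the project would reproduce).

```python
def detect_errp(epoch, window=(20, 40), threshold=1.5):
    """Flag an error-related potential in one EEG epoch.

    epoch: sequence of samples time-locked to the feedback event.
    Returns True when the mean amplitude inside `window` (sample
    indices, hypothetical values) exceeds `threshold`.
    """
    lo, hi = window
    segment = epoch[lo:hi]
    return sum(segment) / len(segment) > threshold
```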
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
:P.W. Ferrez, J. Millán. ''You Are Wrong! Automatic Detection of Interaction Errors from Brain Waves'' [ftp://ftp.idiap.ch/pub/reports/2005/ferrez_2005_ijcai.pdf]&lt;br /&gt;
:G. Schalk et al. ''EEG-based communication: presence of an error potential'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;issn=13882457&amp;amp;volume=111&amp;amp;issue=12&amp;amp;firstpage=2138&amp;amp;form=html]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-15&lt;br /&gt;
|image=Bci_arch.png}}&lt;br /&gt;
&lt;br /&gt;
==== Computer Vision and Image Analysis ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Environment Monitoring&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The goal of this project is to develop a video surveillance system to track vehicles or people in 3D. &lt;br /&gt;
The idea is to use one or more calibrated cameras to estimate the positions and trajectories of the moving objects in the scene. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library&lt;br /&gt;
* Linux o.s.&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
* Probabilistic robotics/IMAD&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis extending the algorithm for a generic outdoor environment.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=10-15&lt;br /&gt;
|image=Danch4.png &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Visual Merchandising&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The goal of this project is to develop algorithms to count the number of products on the shelves of a supermarket.&lt;br /&gt;
The idea is to use a calibrated camera to recognize the shelves, estimate the scale, and improve the image quality. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
|start= As soon as possible&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=VisualM.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Analysis of patch recognition algorithms&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=Extracting distinctive features from images is very important in computer vision applications.&lt;br /&gt;
Such features can be used in algorithms for tasks like matching different views of an object or scene (e.g. for stereo vision) and object recognition.&lt;br /&gt;
The aim of this work is to integrate into an existing framework the solutions proposed in the literature.&lt;br /&gt;
&lt;br /&gt;
Skills&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
*Oxford website [http://www.robots.ox.ac.uk/~vgg/research/affine/index.html]&lt;br /&gt;
*Hess website [http://web.engr.oregonstate.edu/~hess/index.html]&lt;br /&gt;
*Feature FAST [http://mi.eng.cam.ac.uk/~er258/work/fast.html]&lt;br /&gt;
&lt;br /&gt;
|start= Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=Object.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Catadioptric MonoSLAM &lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The goal of this work is to investigate SLAM solutions based on a catadioptric camera, integrating the solutions presented in the literature into an existing framework.&lt;br /&gt;
Improvements could be the basis for a thesis.&lt;br /&gt;
&lt;br /&gt;
Skills&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
*Visual SLAM by Single Catadioptric Stereo [http://cv2.kaist.ac.kr/VisualSLAMBySingleCameraCatadioptricStereo.pdf]&lt;br /&gt;
*Catadioptric reconstruction [http://citeseer.ist.psu.edu/cache/papers/cs/23657/http:zSzzSzwww.cis.upenn.eduzSz~cgeyerzSzsfm_tr.pdf/geyer01structure.pdf]&lt;br /&gt;
&lt;br /&gt;
|start= Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=Photo.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Trinocular Vision System (SUGR)&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=A Trinocular Vision System is a device composed of three cameras that makes it possible to measure 3D data (in this case segments) directly from images.&lt;br /&gt;
The aim of this project/thesis is to implement a trinocular algorithm based on SUGR, a library for Uncertain Projective Geometry.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Skills&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|start= As soon as possible&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=Trinoex.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== E-Science ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Machine Learning ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reinforcement Learning Competition&lt;br /&gt;
|tutor=Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=This project has the goal of participating in (and possibly winning ;)) the 2009 Reinforcement Learning competition. To get an idea of what participating in such a competition means, you can have a look at the website of the [http://rl-competition.org/content/view/51/79/ 2008 RL competition].&lt;br /&gt;
The problems that will be proposed are still unknown. As soon as the domains are published, the work will start by analyzing their main characteristics; then we will identify which RL algorithms are best suited to solving such problems. After an implementation phase, the project will require a long experimental period to tune the parameters of the learning algorithms in order to improve the performance as much as possible.&lt;br /&gt;
|start=January, 2009&lt;br /&gt;
|number=2-4&lt;br /&gt;
|cfu=10-20&lt;br /&gt;
|image=keepaway.gif}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Learning API for TORCS&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, in which competitors were asked to develop a controller using their preferred machine learning techniques. The goal of this project is to extend the existing C++ API (available [http://cig.dei.polimi.it/ here]) to simplify the development of controllers using a learning framework.&lt;br /&gt;
Such an extension can be partially developed by porting an existing Java API for TORCS that already provides many functionalities for machine learning approaches.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=TORCS.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= EyeBot&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it), Alessandro Giusti (giusti-AT-elet-DOT-polimi-DOT-it), and Pierluigi Taddei (taddei-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, in which competitors were asked to develop a controller using their preferred machine learning techniques. So far, the controllers developed for TORCS have used as input only information extracted directly from the state of the game. The goal of this project is to extend the existing controller API (see [http://cig.dei.polimi.it/ here]) to use visual information (e.g. the screenshots of the game) as input to the controllers. A successful project will include both the development of the API and some basic image preprocessing to extract information from the images.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 20&lt;br /&gt;
|image=TORCS2.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= SmarTrack&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The generation of customized game content for each player is an attractive direction for improving the game experience in next-generation computer games. In this scenario, Machine Learning could play an important role in automatically providing such customized game content.&lt;br /&gt;
The goal of this project is to apply machine learning techniques to the generation of customized tracks in&lt;br /&gt;
[http://torcs.sourceforge.net/ TORCS], a state-of-the-art open source racing simulator. The project includes different activities: the automatic generation of tracks, the selection of relevant features to characterize a track, and the analysis of an interest measure.  &lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 20&lt;br /&gt;
|image=TORCS3.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= TORCS competition&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, in which competitors were asked to develop a controller using their preferred machine learning techniques.&lt;br /&gt;
The goal of this project is to apply any machine learning technique to develop a successful controller following the competition rules (available [http://cig.dei.polimi.it/?page_id=67 here]).&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 &lt;br /&gt;
|cfu=5&lt;br /&gt;
|image=TORCS.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Ontologies and Semantic Web ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== Philosophy of Artificial Intelligence ====--&amp;gt;&lt;br /&gt;
==== Robotics ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robot games&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive game with robots using commercial devices such as the Wii Remote (see the [http://airwiki.elet.polimi.it/mediawiki/index.php/Robogames Robogames page]).  &lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Design and implementation of the game on one of the available robots&lt;br /&gt;
* Design of the game and a new suitable robot&lt;br /&gt;
* Implementation/setting of a suitable robot&lt;br /&gt;
* Evaluation of the game with users (in collaboration with [http://www.elet.polimi.it/people/garzotto Franca Garzotto])&lt;br /&gt;
&lt;br /&gt;
These projects make it possible to experiment with real mobile robots and real interaction devices.&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by producing a new game and robot.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-12.5&lt;br /&gt;
|image=Robowii_robot.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robocup: soccer robots&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it), Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to finalize the team of robots that will participate in the RoboCup world championship in Graz next summer (see the [http://www.robocup.org RoboCup page] and the [http://robocup.elet.polimi.it MRT Team page]).  &lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Implementation of the mechanical and electronic parts of the robots for ball handling and kicking&lt;br /&gt;
* Design of robot behaviors (fuzzy systems)&lt;br /&gt;
* Coordination of robots&lt;br /&gt;
* New sensors&lt;br /&gt;
&lt;br /&gt;
These projects make it possible to experiment with real mobile robots. Participating in the championships is a unique experience (2000 people, with 800 robots playing all sorts of games...)&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by tackling different problems in depth.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-12.5&lt;br /&gt;
|image=RIeRO.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Calibration of IMU-camera system&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=This work is about the problem of calibrating a system composed of an XSens &lt;br /&gt;
Inertial Measurement Unit and a Fire-i camera. The project will focus on &lt;br /&gt;
the problem of estimating both the unknown rotation between the two devices and the &lt;br /&gt;
extrinsic/intrinsic parameters of the camera. This algorithm makes it possible to use the &lt;br /&gt;
system for SLAM or robotics applications, such as a wearable device for autonomous &lt;br /&gt;
navigation or augmented reality. &lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab/C++&lt;br /&gt;
&lt;br /&gt;
;Links&lt;br /&gt;
:Matlab Toolbox for mutual calibration [http://www.deec.uc.pt/~jlobo/InerVis_WebIndex/InerVis_Toolbox.html]&lt;br /&gt;
:List of publications [http://www.deec.uc.pt/~jlobo/InerVis_WebIndex/InerVis_Pubs.php]&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=Imu_cam_big_sphere.gif}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=MonoSLAM system implementation&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The aim of this proposal is to investigate the different single-camera SLAM solutions proposed in the literature.&lt;br /&gt;
After in-depth bibliographic research, the work will focus on implementing one of these algorithms within an existing framework and, for the thesis option only, on investigating possible improvements. &lt;br /&gt;
&lt;br /&gt;
The algorithms of interest are based on [http://www-personal.acfr.usyd.edu.au/tbailey/software/slam_simulations.htm]:&lt;br /&gt;
*Extended Kalman Filter [http://www.doc.ic.ac.uk/~ajd/publications.html]&lt;br /&gt;
*Unscented Kalman Filter [http://www.cs.unc.edu/~welch/kalman/media/pdf/Julier1997_SPIE_KF.pdf]&lt;br /&gt;
*FastSLAM [http://robots.stanford.edu/papers.html]&lt;br /&gt;
*GraphSLAM [http://mi.eng.cam.ac.uk/~ee231/]&lt;br /&gt;
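To fix ideas, the Kalman-filter family above reduces, in its simplest scalar form, to a predict/update pair like the one below. This is a hypothetical 1-D toy in Python with made-up noise values, not the MonoSLAM formulation itself, where the state holds the full camera pose plus all landmarks and the Jacobians are non-trivial.

```python
def ekf_step(x, P, u, z, Q=0.1, R=0.5):
    """One scalar EKF predict/update cycle.

    x, P: current state estimate and its variance
    u:    control input (here a simple displacement, model x' = x + u)
    z:    direct measurement of the state
    Q, R: process and measurement noise variances (hypothetical values)
    """
    # Predict: propagate state and variance (motion-model Jacobian is 1)
    x_pred = x + u
    P_pred = P + Q
    # Update: blend prediction and measurement via the Kalman gain
    K = P_pred / (P_pred + R)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1 - K) * P_pred
    return x_new, P_new
```

The same predict/update skeleton, with vector states and linearized measurement models, is what the EKF-based MonoSLAM papers implement.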
&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab/C++&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=KC_jc_third.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Soft Computing ====--&amp;gt;&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=File:Trinoex.jpg&amp;diff=4490</id>
		<title>File:Trinoex.jpg</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=File:Trinoex.jpg&amp;diff=4490"/>
				<updated>2008-10-17T10:16:40Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Master_Level_Course_Projects&amp;diff=4489</id>
		<title>Master Level Course Projects</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Master_Level_Course_Projects&amp;diff=4489"/>
				<updated>2008-10-17T10:09:04Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: /* Computer Vision and Image Analysis */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you can find a list of project proposals for the courses of &amp;quot;Laboratorio di Intelligenza Artificiale e Robotica&amp;quot; (5 CFU for each student) and &amp;quot;Soft Computing&amp;quot; (1 CFU for each student)&lt;br /&gt;
&lt;br /&gt;
==== Affective Computing ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective VideoGames&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive video game (car game, shoot-'em-up, strategy game ...) able to adapt its behaviour in order to maximize your enjoyment. The game will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system will be able to adjust some parameters (e.g. difficulty of car-game circuits, opponents' strength ...) in order to keep your engagement constant: &amp;quot;In your flow zone!&amp;quot;. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the game (it is possible to start from an available open source game)&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the playing experience.&lt;br /&gt;
* Off-line classification of the data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the game reacts to the user's emotional state by changing its behaviour.  &lt;br /&gt;
&lt;br /&gt;
These projects make it possible to experiment with biological-data acquisition tools and videogame design. &lt;br /&gt;
&lt;br /&gt;
Each project consists of the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective recognition in multimedia contexts&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive multimedia application (advertisement, e-learning, recommendation system) able to capture your emotional state (interest, excitement, anger, joy) while you watch images, listen to sounds etc. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to give feedback on the quality of multimedia content (e.g. the goodness of an advertisement, the enjoyment of a movie ...). &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the multimedia application.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the multimedia experience.&lt;br /&gt;
* Off-line classification of the data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the multimedia application will provide contents according to your enjoyment.  &lt;br /&gt;
&lt;br /&gt;
These projects make it possible to experiment with biological-data acquisition tools and multimedia application design. &lt;br /&gt;
&lt;br /&gt;
The project consists of the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=MultimediaAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective robotics&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop a rehabilitation robotic game able to capture your emotional state (interest, excitement, anger, joy, stress) while you interact with the robot. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to adapt the therapy (executed by the game) according to the patient's needs. We believe the quality of the therapy is related to the subject's emotional state. The long-term goal is to keep the user in a specific emotional state in order to maximize the therapy's efficacy. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the robotic game on the available robot.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the interaction with the robot.&lt;br /&gt;
* Off-line classification of the data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the therapy will be adapted to the patient's needs.  &lt;br /&gt;
&lt;br /&gt;
These projects make it possible to experiment with biological-data acquisition tools, robots and videogame design. &lt;br /&gt;
&lt;br /&gt;
The project consists of the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=SimoAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Driving companions&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an application able to capture your emotional state (stress, attention level ...) while driving standard cars. The application will measure the driver's stress level by analyzing his biological signals, which mirror his physiological state, and could be used to give feedback to the driver in dangerous situations.&lt;br /&gt;
Project phases: &lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors while driving in different conditions (city, highway, country ..)&lt;br /&gt;
* Off-line classification of the data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the car will give audio/visual feedback to the user, letting him know his physiological state.&lt;br /&gt;
&lt;br /&gt;
These projects make it possible to experiment with biological-data acquisition tools and in-car data analysis. &lt;br /&gt;
&lt;br /&gt;
The project consists of the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=CarAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Emotion from interaction&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to detect emotional states, such as stress or boredom, from the interaction with the computer via mouse and keyboard ([http://airwiki.elet.polimi.it/mediawiki/index.php/Emotion_from_Interaction Emotion from Interaction]). A library collecting data from these devices has already been developed. Data have to be acquired in different situations and analyzed with neural networks or other classification tools already implemented.&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Agents, Multiagent Systems, Agencies ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== BioSignal Analysis ====&lt;br /&gt;
&lt;br /&gt;
===== Brain-Computer Interface =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Development of an existing genetic algorithm for ERP-based BCIs&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=Different [http://en.wikipedia.org/wiki/Event-related_potential event-related potentials] (ERPs) are used in [[Brain-Computer Interface|BCIs]], e.g., [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300] and error potentials.&lt;br /&gt;
A [http://en.wikipedia.org/wiki/Genetic_algorithm genetic algorithm] (GA) for ERP feature extraction has been developed at the Airlab.  The GA has proved to work, but there are different ways that can be explored to further develop this algorithm and expand its field of application.&lt;br /&gt;
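As a toy illustration of the GA machinery involved (this is not the Airlab algorithm nor the cited paper's method: the mask length, operators and fitness function below are invented for the example; the real fitness would score how well the selected features let a classifier detect the ERP):

```python
import random

def evolve(fitness, n_features=8, pop_size=20, generations=30, seed=0):
    """Evolve binary masks selecting a subset of features.

    fitness: callable scoring a mask (higher is better).
    Returns the best mask found after `generations` of elitist
    truncation selection, one-point crossover and point mutation.
    """
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]          # keep the elite half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_features)   # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n_features)] ^= 1  # point mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)
```

For instance, with `fitness=sum` the GA quickly converges toward the all-ones mask, since elites are preserved and the best score never decreases.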
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, Matlab&lt;br /&gt;
:Good programming skills required&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
:B. Dal Seno, M. Matteucci, L. Mainardi. ''A Genetic Algorithm for Automatic Feature Extraction in P300 Detection'' [http://ieeexplore.ieee.org/search/srchabstract.jsp?arnumber=4634243&amp;amp;isnumber=4633757&amp;amp;punumber=4625775]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=Ga-scheme.png}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Driving an autonomous wheelchair with a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=This project pulls together different Airlab projects with the aim to drive an autonomous wheelchair ([[LURCH - The autonomous wheelchair|LURCH]]) with a [[Brain-Computer Interface|BCI]], through the development of key software modules.  Depending on the effort the student is willing to put into it, the project can grow to a full experimental thesis.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, C, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
:Linux&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: R. Blatt et al. ''Brain Control of a Smart Wheelchair'' [http://www.booksonline.iospress.com/Content/View.aspx?piid=9401]&lt;br /&gt;
|start=November 2008&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=LURCH_wheelchair.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Online automatic tuning of the number of repetitions in a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=In a [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300]-based [[Brain-Computer_Interface|BCI]], (visual) stimuli are presented to the user, and the user's intention is recognized when a P300 potential is detected in response to the desired stimulus.  In order to improve accuracy, many stimulation rounds are usually performed before making a decision.  The exact number of repetitions depends on the user and on the quality of the classifier, but it is usually fixed a priori.  The aim of this project is to adapt the number of repetitions to changing conditions, so as to achieve the maximum accuracy in the minimum time.&lt;br /&gt;
Depending on the effort the student is willing to put into it, the project can grow to a full experimental thesis.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: E. Donchin, K.M. Spencer, R. Wijesinghe. ''The Mental Prosthesis: Assessing the Speed of a P300-Based Brain-Computer Interface'' [http://www.cs.cmu.edu/~tanja/BCI/P300Speed_2000.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=B_p300_speller.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reproduction of an algorithm for the recognition of error potentials&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=Error potentials (ErrPs) are [http://en.wikipedia.org/wiki/Event-related_potential event-related potentials] present in the EEG (electroencephalogram) when a subject makes a mistake or when the machine a subject is interacting with works in an unexpected way.  They could be used in the [[Brain-Computer Interface|BCI]] field to improve the performance of a BCI by automatically detecting classification errors.&lt;br /&gt;
The project aims at reproducing algorithms for ErrP detection from the literature.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
:P.W. Ferrez, J. Millán. ''You Are Wrong! Automatic Detection of Interaction Errors from Brain Waves'' [ftp://ftp.idiap.ch/pub/reports/2005/ferrez_2005_ijcai.pdf]&lt;br /&gt;
:G. Schalk et al. ''EEG-based communication: presence of an error potential'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;issn=13882457&amp;amp;volume=111&amp;amp;issue=12&amp;amp;firstpage=2138&amp;amp;form=html]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-15&lt;br /&gt;
|image=Bci_arch.png}}&lt;br /&gt;
&lt;br /&gt;
==== Computer Vision and Image Analysis ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Environment Monitoring&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The goal of this project is to develop a video surveillance system to track vehicles or people in 3D. &lt;br /&gt;
The idea is to use one or more calibrated cameras to estimate the positions and trajectories of the moving objects in the scene. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library&lt;br /&gt;
* Linux o.s.&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
* Probabilistic robotics/IMAD&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by extending the algorithm to a generic outdoor environment.&lt;br /&gt;
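The 3D localization step at the core of this project can be sketched as follows. This is a minimal, illustrative example (in Python with NumPy rather than the C++/OpenCV stack listed above; all names are hypothetical) of linear (DLT) triangulation of a point seen by two calibrated cameras:&lt;br /&gt;

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 camera projection matrices.
    x1, x2: 2D pixel observations (u, v) in each view.
    Returns the 3D point in Euclidean coordinates.
    """
    # Each observation contributes two rows of the homogeneous system A X = 0.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras: identity pose and a 1-unit translation along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = np.array([0.5, 0.2, 4.0])
x1 = P1 @ np.append(point, 1.0)
x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(point, 1.0)
x2 = x2[:2] / x2[2]
print(triangulate(P1, P2, x1, x2))  # close to [0.5 0.2 4. ]
```

A real tracker would add data association and a motion filter on top of this geometric step.&lt;br /&gt;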
|start=Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=10-15&lt;br /&gt;
|image=Danch4.png &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Visual Merchandising&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The goal of this project is to develop algorithms to count the number of products on the shelves of a market.&lt;br /&gt;
The idea is to use a calibrated camera to recognize the shelves, estimate the scale and improve the image quality. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
|start= As soon as possible&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=VisualM.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Analysis of patch recognition algorithms&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=Extracting distinctive features from images is very important in computer vision applications.&lt;br /&gt;
Such features can be used in algorithms for tasks like matching different views of an object or scene (e.g., for stereo vision) and object recognition.&lt;br /&gt;
The aim of this work is to integrate into an existing framework the solutions proposed in the literature.&lt;br /&gt;
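As a hedged illustration of the matching task mentioned above, the sketch below (plain Python, hypothetical names; a real implementation would use OpenCV descriptors) matches feature descriptors between two views with nearest-neighbour search plus Lowe's ratio test:&lt;br /&gt;

```python
def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Match each descriptor in desc_a to its nearest neighbour in desc_b,
    keeping only matches that pass Lowe's ratio test."""
    def dist2(u, v):
        # Squared Euclidean distance between two descriptor vectors.
        return sum((ui - vi) ** 2 for ui, vi in zip(u, v))

    matches = []
    for i, d in enumerate(desc_a):
        # Distances to all candidates, sorted: best and second best first.
        scored = sorted((dist2(d, c), j) for j, c in enumerate(desc_b))
        best, second = scored[0], scored[1]
        # Accept only if the best match is clearly better than the runner-up.
        if second[0] * ratio ** 2 > best[0]:
            matches.append((i, best[1]))
    return matches

a = [(0.0, 0.0), (1.0, 1.0)]
b = [(0.05, 0.0), (5.0, 5.0), (1.0, 0.95)]
print(match_descriptors(a, b))  # [(0, 0), (1, 2)]
```

The ratio test discards ambiguous matches, which matters more than the distance metric itself in practice.&lt;br /&gt;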
&lt;br /&gt;
Skills&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
*Oxford website [http://www.robots.ox.ac.uk/~vgg/research/affine/index.html]&lt;br /&gt;
*Hess website [http://web.engr.oregonstate.edu/~hess/index.html]&lt;br /&gt;
*Feature FAST [http://mi.eng.cam.ac.uk/~er258/work/fast.html]&lt;br /&gt;
&lt;br /&gt;
|start= Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=Object.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Catadioptric MonoSLAM &lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The goal of this work is to investigate SLAM solutions based on a catadioptric camera, integrating the solutions presented in the literature into an existing framework.&lt;br /&gt;
Improvements could be the basis for a thesis.&lt;br /&gt;
&lt;br /&gt;
Skills&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
*Visual SLAM by Single Catadioptric Stereo [http://cv2.kaist.ac.kr/VisualSLAMBySingleCameraCatadioptricStereo.pdf]&lt;br /&gt;
*Catadioptric reconstruction [http://citeseer.ist.psu.edu/cache/papers/cs/23657/http:zSzzSzwww.cis.upenn.eduzSz~cgeyerzSzsfm_tr.pdf/geyer01structure.pdf]&lt;br /&gt;
&lt;br /&gt;
|start= Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=Photo.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== E-Science ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Machine Learning ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reinforcement Learning Competition&lt;br /&gt;
|tutor=Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=This project has the goal of participating in (and possibly winning ;)) the 2009 Reinforcement Learning Competition. To get an idea of what participating in such a competition means, you can have a look at the website of the [http://rl-competition.org/content/view/51/79/ 2008 RL competition].&lt;br /&gt;
The problems that will be proposed are still unknown. As soon as the domains are published, the work will start by analyzing their main characteristics; then we will identify which RL algorithms are best suited for solving such problems. After an implementation phase, the project will require a long experimental period to tune the parameters of the learning algorithms in order to improve performance as much as possible.&lt;br /&gt;
|start=January, 2009&lt;br /&gt;
|number=2-4&lt;br /&gt;
|cfu=10-20&lt;br /&gt;
|image=keepaway.gif}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Learning API for TORCS&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, where competitors were asked to develop a controller using their preferred machine learning techniques. The goal of this project is to extend the existing C++ API (available [http://cig.dei.polimi.it/ here]) to simplify the development of controllers within a learning framework.&lt;br /&gt;
Such an extension can be partially developed by porting an existing Java API for TORCS that already provides many functionalities for machine learning approaches.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=TORCS.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= EyeBot&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it), Alessandro Giusti (giusti-AT-elet-DOT-polimi-DOT-it), and Pierluigi Taddei (taddei-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, where competitors were asked to develop a controller using their preferred machine learning techniques. So far, the controllers developed for TORCS have used as input only information extracted directly from the state of the game. The goal of this project is to extend the existing controller API (see [http://cig.dei.polimi.it/ here]) to use visual information (e.g., screenshots of the game) as input to the controllers. A successful project will include both the development of the API and some basic image preprocessing to extract information from the images.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 20&lt;br /&gt;
|image=TORCS2.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= SmarTrack&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The generation of customized game content for each player is an attractive direction for improving the game experience in next-generation computer games. In this scenario, machine learning could play an important role in automatically providing such customized game content.&lt;br /&gt;
The goal of this project is to apply machine learning techniques to the generation of customized tracks in&lt;br /&gt;
[http://torcs.sourceforge.net/ TORCS], a state-of-the-art open source racing simulator. The project includes different activities: the automatic generation of tracks, the selection of relevant features to characterize a track, and the analysis of an interest measure.  &lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 20&lt;br /&gt;
|image=TORCS3.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= TORCS competition&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, where competitors were asked to develop a controller using their preferred machine learning techniques.&lt;br /&gt;
The goal of this project is to apply any machine learning technique to develop a successful controller following the competition rules (available [http://cig.dei.polimi.it/?page_id=67 here]).&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 &lt;br /&gt;
|cfu=5&lt;br /&gt;
|image=TORCS.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Ontologies and Semantic Web ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== Philosophy of Artificial Intelligence ====--&amp;gt;&lt;br /&gt;
==== Robotics ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robot games&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive game with robots using commercial devices such as the Wii Mote (see the [http://airwiki.elet.polimi.it/mediawiki/index.php/Robogames Robogames page]).  &lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Design and implementation of the game on one of the available robots&lt;br /&gt;
* Design of the game and a new suitable robot&lt;br /&gt;
* Implementation/setting of a suitable robot&lt;br /&gt;
* Evaluation of the game with users (in collaboration with [http://www.elet.polimi.it/people/garzotto Franca Garzotto])&lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with real mobile robots and real interaction devices.&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by producing a new game and robot.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-12.5&lt;br /&gt;
|image=Robowii_robot.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robocup: soccer robots&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it), Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to finalize the team of robots that will participate in the RoboCup world championship in Graz next summer (see the [http://www.robocup.org Robocup page] and the [http://robocup.elet.polimi.it MRT Team page]).  &lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Implementation of the mechanical and electronic parts of the robots for ball management and kicking&lt;br /&gt;
* Design of robot behaviors (fuzzy systems)&lt;br /&gt;
* Coordination of robots&lt;br /&gt;
* New sensors&lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with real mobile robots. Participation in the championships is a unique experience (2000 people, with 800 robots playing all sorts of games...)&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by facing different problems in depth.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-12.5&lt;br /&gt;
|image=RIeRO.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Calibration of IMU-camera system&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=This work addresses the problem of calibrating a system composed of an XSens &lt;br /&gt;
Inertial Measurement Unit and a Fire-i camera. The project will focus on &lt;br /&gt;
estimating both the unknown rotation between the two devices and the &lt;br /&gt;
extrinsic/intrinsic parameters of the camera. This calibration makes it possible to use the &lt;br /&gt;
system for SLAM or robotics applications, such as a wearable device for autonomous &lt;br /&gt;
navigation or augmented reality. &lt;br /&gt;
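The rotation part of the calibration can be framed as Wahba's problem: given paired direction observations in the two frames (for example, gravity sensed by the IMU and the corresponding vertical direction seen by the camera over several poses), find the best-fitting rotation. Below is a minimal NumPy sketch of that single step under this assumption (illustrative only, not the toolbox's method; all names are hypothetical):&lt;br /&gt;

```python
import numpy as np

def best_rotation(v_imu, v_cam):
    """Solve Wahba's problem via SVD (Kabsch algorithm): find the rotation R
    minimizing the sum of squared residuals between v_cam_i and R v_imu_i.

    v_imu, v_cam: (N, 3) arrays of paired unit direction vectors.
    """
    H = np.asarray(v_imu).T @ np.asarray(v_cam)   # 3x3 correlation matrix
    U, _, Vt = np.linalg.svd(H)
    # Fix the sign so the result is a proper rotation (det = +1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])
    return Vt.T @ D @ U.T

# Synthetic check: recover a known rotation about the z axis.
theta = 0.3
R_true = np.array([
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0, 0.0, 1.0],
])
dirs = np.random.default_rng(0).normal(size=(10, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
R_est = best_rotation(dirs, dirs @ R_true.T)
print(np.allclose(R_est, R_true))  # True
```

In the real system the paired directions would come from IMU gravity readings and camera vanishing-point estimates, and the intrinsic parameters would be estimated separately.&lt;br /&gt;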
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab/C++&lt;br /&gt;
&lt;br /&gt;
;Links&lt;br /&gt;
:Matlab Toolbox for mutual calibration [http://www.deec.uc.pt/~jlobo/InerVis_WebIndex/InerVis_Toolbox.html]&lt;br /&gt;
:List of publications [http://www.deec.uc.pt/~jlobo/InerVis_WebIndex/InerVis_Pubs.php]&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=Imu_cam_big_sphere.gif}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=MonoSLAM system implementation&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The aim of this proposal is to investigate the different monocular SLAM solutions proposed in the literature.&lt;br /&gt;
After an in-depth bibliographic study, the work will focus on implementing one of these algorithms within an existing framework and, for the thesis option only, on investigating possible improvements. &lt;br /&gt;
&lt;br /&gt;
The algorithms of interest are based on [http://www-personal.acfr.usyd.edu.au/tbailey/software/slam_simulations.htm]:&lt;br /&gt;
*Extended Kalman Filter [http://www.doc.ic.ac.uk/~ajd/publications.html]&lt;br /&gt;
*Unscented Kalman Filter [http://www.cs.unc.edu/~welch/kalman/media/pdf/Julier1997_SPIE_KF.pdf]&lt;br /&gt;
*FastSLAM [http://robots.stanford.edu/papers.html]&lt;br /&gt;
*GraphSLAM [http://mi.eng.cam.ac.uk/~ee231/]&lt;br /&gt;
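For orientation, the predict/update cycle underlying the EKF-based variant can be sketched as follows (a linear toy system in NumPy; the real filter linearizes the camera and motion models, which this sketch deliberately omits):&lt;br /&gt;

```python
import numpy as np

def ekf_step(x, P, z, F, H, Q, R):
    """One linear(ized) Kalman predict/update cycle.

    x, P : state mean and covariance
    z    : measurement
    F, H : motion and measurement Jacobians (constant here)
    Q, R : process and measurement noise covariances
    """
    # Predict: propagate mean and covariance through the motion model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: correct the prediction with the measurement via the Kalman gain.
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy 1D constant-position example: repeated measurements shrink uncertainty.
x = np.array([0.0]); P = np.array([[1.0]])
F = np.eye(1); H = np.eye(1)
Q = np.array([[0.01]]); R = np.array([[0.5]])
for z in [1.0, 1.1, 0.9, 1.0]:
    x, P = ekf_step(x, P, np.array([z]), F, H, Q, R)
print(float(x[0]), float(P[0, 0]))
```

In MonoSLAM the state stacks the camera pose and map features, and F and H become the Jacobians of the motion and projection models evaluated at the current estimate.&lt;br /&gt;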
&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab/C++&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=KC_jc_third.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Soft Computing ====--&amp;gt;&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=File:Photo.jpg&amp;diff=4488</id>
		<title>File:Photo.jpg</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=File:Photo.jpg&amp;diff=4488"/>
				<updated>2008-10-17T10:08:39Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Master_Level_Course_Projects&amp;diff=4487</id>
		<title>Master Level Course Projects</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Master_Level_Course_Projects&amp;diff=4487"/>
				<updated>2008-10-17T10:04:50Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: /* Robotics */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you can find a list of project proposals for the courses of &amp;quot;Laboratorio di Intelligenza Artificiale e Robotica&amp;quot; (5 CFU for each student) and &amp;quot;Soft Computing&amp;quot; (1 CFU for each student)&lt;br /&gt;
&lt;br /&gt;
==== Affective Computing ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective VideoGames&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive video game (car game, shoot-'em-up, strategy game, ...) able to adapt its behaviour in order to maximize your enjoyment. The game will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system will be able to adjust some parameters (e.g., difficulty of car-game circuits, opponents' strength, ...) in order to keep your engagement constant: &amp;quot;In your flow zone!&amp;quot;. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the game (it is possible to start from an available open source game)&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions &lt;br /&gt;
* Data acquisition using biological sensors during the playing experience&lt;br /&gt;
* Off-line classification of the data with available tools&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition &lt;br /&gt;
* Closed-loop control: the game reacts to the user's emotional state by changing its behaviour  &lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with biological-data acquisition tools and video game design. &lt;br /&gt;
&lt;br /&gt;
Each project consists of the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective recognition in multimedia contexts&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive multimedia application (advertisement, e-learning, recommendation system) able to capture your emotional state (interest, excitement, anger, joy) while you watch images, listen to sounds, etc. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to give feedback on the quality of multimedia content (e.g., goodness of an advertisement, enjoyment of a movie, ...) &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the multimedia application&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions &lt;br /&gt;
* Data acquisition using biological sensors during the multimedia experience&lt;br /&gt;
* Off-line classification of the data with available tools&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition &lt;br /&gt;
* Closed-loop control: the multimedia application will provide contents according to your enjoyment  &lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with biological-data acquisition tools and multimedia application design. &lt;br /&gt;
&lt;br /&gt;
The project consists of the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=MultimediaAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective robotics&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop a rehabilitation robotic game able to capture your emotional state (interest, excitement, anger, joy, stress) while you interact with the robot. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to adapt the therapy (executed by the game) according to the patient's needs. We believe the quality of the therapy is related to the subject's emotional state. The long-term goal is to keep the user in a specific emotional state in order to maximize the efficacy of the therapy. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the robotic game on the available robot&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions &lt;br /&gt;
* Data acquisition using biological sensors during the interaction with the robot&lt;br /&gt;
* Off-line classification of the data with available tools&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition &lt;br /&gt;
* Closed-loop control: the therapy will be adapted to the patient's needs  &lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with biological-data acquisition tools, robots and video game design. &lt;br /&gt;
&lt;br /&gt;
The project consists of the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=SimoAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Driving companions&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an application able to capture your emotional state (stress, attention level, ...) while you drive a standard car. The application will measure the driver's stress level by analyzing his biological signals, which mirror his physiological state, and could be used to give feedback to the driver in dangerous situations.&lt;br /&gt;
Project phases: &lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions &lt;br /&gt;
* Data acquisition using biological sensors while driving in different conditions (city, highway, country, ...)&lt;br /&gt;
* Off-line classification of the data with available tools&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition &lt;br /&gt;
* Closed-loop control: the car will give audio/visual feedback to the user, letting him know his physiological state&lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with biological-data acquisition tools in real driving conditions. &lt;br /&gt;
&lt;br /&gt;
The project consists of the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=CarAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Emotion from interaction&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to detect emotional states, such as stress or boredom, from the interaction with the computer via mouse and keyboard ([http://airwiki.elet.polimi.it/mediawiki/index.php/Emotion_from_Interaction Emotion from Interaction]). A library acquiring data from these devices has already been developed. Data have to be acquired in different situations and analyzed with neural networks or other classification tools already implemented.&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Agents, Multiagent Systems, Agencies ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== BioSignal Analysis ====&lt;br /&gt;
&lt;br /&gt;
===== Brain-Computer Interface =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Development of an existing genetic algorithm for ERP-based BCIs&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=Different [http://en.wikipedia.org/wiki/Event-related_potential event-related potentials] (ERPs) are used in [[Brain-Computer Interface|BCIs]], e.g., [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300] and error potentials.&lt;br /&gt;
A [http://en.wikipedia.org/wiki/Genetic_algorithm genetic algorithm] (GA) for ERP feature extraction has been developed at the Airlab.  The GA has been shown to work, but there are different directions that can be explored to further develop this algorithm and expand its application field.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, Matlab&lt;br /&gt;
:Good programming skills required&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
:B. Dal Seno, M. Matteucci, L. Mainardi. ''A Genetic Algorithm for Automatic Feature Extraction in P300 Detection'' [http://ieeexplore.ieee.org/search/srchabstract.jsp?arnumber=4634243&amp;amp;isnumber=4633757&amp;amp;punumber=4625775]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=Ga-scheme.png}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Driving an autonomous wheelchair with a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=This project pulls together different Airlab projects with the aim of driving an autonomous wheelchair ([[LURCH - The autonomous wheelchair|LURCH]]) with a [[Brain-Computer Interface|BCI]], through the development of key software modules.  Depending on the effort the student is willing to put into it, the project can grow into a full experimental thesis.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, C, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
:Linux&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: R. Blatt et al. ''Brain Control of a Smart Wheelchair'' [http://www.booksonline.iospress.com/Content/View.aspx?piid=9401]&lt;br /&gt;
|start=November 2008&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=LURCH_wheelchair.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Online automatic tuning of the number of repetitions in a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=In a [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300]-based [[Brain-Computer_Interface|BCI]], (visual) stimuli are presented to the user, and the user's intention is recognized when a P300 potential is detected in response to the desired stimulus.  In order to improve accuracy, many stimulation rounds are usually performed before making a decision.  The exact number of repetitions depends on the user and on the goodness of the classifier, but it is usually fixed a priori.  The aim of this project is to adapt the number of repetitions to changing conditions, so as to achieve the maximum accuracy in the minimum time.&lt;br /&gt;
Depending on the effort the student is willing to put into it, the project can grow into a full experimental thesis.&lt;br /&gt;
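One way to realize the adaptive stopping described above (a sketch in plain Python with made-up scores; a real system would take classifier outputs from BCI2000) is to accumulate per-symbol scores over repetitions and stop as soon as the leading candidate's margin over the runner-up exceeds a confidence threshold:&lt;br /&gt;

```python
def adaptive_repetitions(score_rounds, margin=2.0, max_reps=None):
    """Accumulate per-symbol classifier scores round by round and stop
    early once the best symbol leads the runner-up by a fixed margin.

    score_rounds: iterable of lists, one score per candidate symbol per round.
    Returns (chosen symbol index, rounds used).
    """
    totals = None
    used = 0
    for round_scores in score_rounds:
        if totals is None:
            totals = [0.0] * len(round_scores)
        for i, s in enumerate(round_scores):
            totals[i] += s
        used += 1
        ranked = sorted(totals, reverse=True)
        # Confident enough: the leader beats the runner-up by the margin.
        if ranked[0] - ranked[1] >= margin:
            break
        if max_reps is not None and used >= max_reps:
            break
    return totals.index(max(totals)), used

# Symbol 2 accumulates evidence quickly, so the decision is reached
# before all 5 stimulation rounds are consumed.
rounds = [[0.1, 0.2, 1.0], [0.0, 0.3, 1.2], [0.2, 0.1, 0.9],
          [0.1, 0.2, 1.1], [0.0, 0.1, 1.0]]
print(adaptive_repetitions(rounds, margin=2.0))  # (2, 3)
```

The margin threshold trades accuracy for speed and would have to be tuned per user.&lt;br /&gt;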
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: E. Donchin, K.M. Spencer, R. Wijesinghe. ''The Mental Prosthesis: Assessing the Speed of a P300-Based Brain-Computer Interface'' [http://www.cs.cmu.edu/~tanja/BCI/P300Speed_2000.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=B_p300_speller.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reproduction of an algorithm for the recognition of error potentials&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=Error potentials (ErrPs) are [http://en.wikipedia.org/wiki/Event-related_potential event-related potentials] present in the EEG (electroencephalogram) when a subject makes a mistake or when the machine a subject is interacting with works in an unexpected way.  They could be used in the [[Brain-Computer Interface|BCI]] field to improve the performance of a BCI by automatically detecting classification errors.&lt;br /&gt;
The project aims at reproducing algorithms for ErrP detection from the literature.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
:P.W. Ferrez, J. Millán. ''You Are Wrong! Automatic Detection of Interaction Errors from Brain Waves'' [ftp://ftp.idiap.ch/pub/reports/2005/ferrez_2005_ijcai.pdf]&lt;br /&gt;
:G. Schalk et al. ''EEG-based communication: presence of an error potential'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;issn=13882457&amp;amp;volume=111&amp;amp;issue=12&amp;amp;firstpage=2138&amp;amp;form=html]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-15&lt;br /&gt;
|image=Bci_arch.png}}&lt;br /&gt;
&lt;br /&gt;
==== Computer Vision and Image Analysis ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Environment Monitoring&lt;br /&gt;
|tutor=Matteo Matteucci ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), Davide Migliore ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The goal of this project is to develop a video surveillance system to track vehicles or people in 3D. &lt;br /&gt;
The idea is to use one or more calibrated cameras to estimate the positions and trajectories of the moving objects in the scene. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library&lt;br /&gt;
* Linux o.s.&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
* Probabilistic robotics/IMAD&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by extending the algorithm to a generic outdoor environment.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=10-15&lt;br /&gt;
|image=Danch4.png &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Visual Merchandising&lt;br /&gt;
|tutor=Matteo Matteucci ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), Davide Migliore ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The goal of this project is to develop algorithms to count the number of products on the shelves of a market.&lt;br /&gt;
The idea is to use a calibrated camera to recognize the shelves, estimate the scale and improve the image quality. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
|start= As soon as possible&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=VisualM.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Analysis of patch recognition algorithms&lt;br /&gt;
|tutor=Matteo Matteucci ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), Davide Migliore ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=Extracting distinctive features from images is very important in computer vision applications.&lt;br /&gt;
Such features can be used in algorithms for tasks like matching different views of an object or scene (e.g., for stereo vision) and object recognition.&lt;br /&gt;
The aim of this work is to integrate into an existing framework the solutions proposed in the literature.&lt;br /&gt;
&lt;br /&gt;
Skills&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
*Oxford website [http://www.robots.ox.ac.uk/~vgg/research/affine/index.html]&lt;br /&gt;
*Hess website [http://web.engr.oregonstate.edu/~hess/index.html]&lt;br /&gt;
*Feature FAST [http://mi.eng.cam.ac.uk/~er258/work/fast.html]&lt;br /&gt;
&lt;br /&gt;
|start= Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=Object.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== E-Science ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Machine Learning ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reinforcement Learning Competition&lt;br /&gt;
|tutor=Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=This project has the goal of participating in (and possibly winning ;)) the 2009 Reinforcement Learning competition. To get an idea of what participating in such a competition means, you can have a look at the website of the [http://rl-competition.org/content/view/51/79/ 2008 RL competition].&lt;br /&gt;
The problems that will be proposed are still unknown. As soon as the domains are published, the work will start by analyzing their main characteristics; then we will identify which RL algorithms are best suited to solving such problems. After an implementation phase, the project will require a long experimental period to tune the parameters of the learning algorithms in order to improve performance as much as possible.&lt;br /&gt;
|start=January, 2009&lt;br /&gt;
|number=2-4&lt;br /&gt;
|cfu=10-20&lt;br /&gt;
|image=keepaway.gif}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Learning API for TORCS&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, in which competitors were asked to develop a controller using their preferred machine learning techniques. The goal of this project is to extend the existing C++ API (available [http://cig.dei.polimi.it/ here]) to simplify the development of controllers using a learning framework.&lt;br /&gt;
Such an extension can be partially developed by porting an existing Java API for TORCS that already provides many functionalities for machine learning approaches.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=TORCS.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= EyeBot&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it), Alessandro Giusti (giusti-AT-elet-DOT-polimi-DOT-it), and Pierluigi Taddei (taddei-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, in which competitors were asked to develop a controller using their preferred machine learning techniques. So far, the controllers developed for TORCS have used as input only information extracted directly from the state of the game. The goal of this project is to extend the existing controller API (see [http://cig.dei.polimi.it/ here]) to use visual information (e.g. the screenshots of the game) as input to the controllers. A successful project will include both the development of the API and some basic image preprocessing to extract information from the images.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 20&lt;br /&gt;
|image=TORCS2.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= SmarTrack&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The generation of customized game content for each player is an attractive direction for improving the game experience in next-generation computer games. In this scenario, machine learning could play an important role in automatically providing such customized game content.&lt;br /&gt;
The goal of this project is to apply machine learning techniques to the generation of customized tracks in&lt;br /&gt;
[http://torcs.sourceforge.net/ TORCS], a state-of-the-art open source racing simulator. The project includes different activities: the automatic generation of tracks, the selection of relevant features to characterize a track, and the analysis of an interest measure.  &lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 20&lt;br /&gt;
|image=TORCS3.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= TORCS competition&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, in which competitors were asked to develop a controller using their preferred machine learning techniques.&lt;br /&gt;
The goal of this project is to apply any machine learning technique to develop a successful controller following the competition rules (available [http://cig.dei.polimi.it/?page_id=67 here]).&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 &lt;br /&gt;
|cfu=5&lt;br /&gt;
|image=TORCS.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Ontologies and Semantic Web ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== Philosophy of Artificial Intelligence ====--&amp;gt;&lt;br /&gt;
==== Robotics ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robot games&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive game with robots using commercial devices such as the Wii Mote (see the [http://airwiki.elet.polimi.it/mediawiki/index.php/Robogames Robogames page]).  &lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Design and implementation of the game on one of the available robots&lt;br /&gt;
* Design of the game and a new suitable robot&lt;br /&gt;
* Implementation/setting of a suitable robot&lt;br /&gt;
* Evaluation of the game with users (in collaboration with [http://www.elet.polimi.it/people/garzotto Franca Garzotto])&lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with real mobile robots and real interaction devices.&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by producing a new game and robot.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-12.5&lt;br /&gt;
|image=Robowii_robot.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robocup: soccer robots&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it), Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to finalize the team of robots that will participate in the RoboCup world championship in Graz next summer (see the [http://www.robocup.org RoboCup page] and the [http://robocup.elet.polimi.it MRT Team page]).  &lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Implementation of the mechanical and electronic parts of the robots for ball handling and kicking&lt;br /&gt;
* Design of robot behaviors (fuzzy systems)&lt;br /&gt;
* Coordination of robots&lt;br /&gt;
* New sensors&lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with real mobile robots. Participation in the championships is a unique experience (2000 people, with 800 robots playing all sorts of games...).&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by facing different problems in depth.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-12.5&lt;br /&gt;
|image=RIeRO.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Calibration of IMU-camera system&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=This work addresses the problem of calibrating a system composed of an Xsens &lt;br /&gt;
Inertial Measurement Unit and a Fire-i camera. The project will focus on &lt;br /&gt;
estimating both the unknown rotation between the two devices and the &lt;br /&gt;
extrinsic/intrinsic parameters of the camera. This algorithm makes it possible to use the &lt;br /&gt;
system for SLAM or robotics applications, such as a wearable device for autonomous &lt;br /&gt;
navigation or augmented reality. &lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab/C++&lt;br /&gt;
&lt;br /&gt;
;Links&lt;br /&gt;
:Matlab Toolbox for mutual calibration [http://www.deec.uc.pt/~jlobo/InerVis_WebIndex/InerVis_Toolbox.html]&lt;br /&gt;
:List of publications [http://www.deec.uc.pt/~jlobo/InerVis_WebIndex/InerVis_Pubs.php]&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=Imu_cam_big_sphere.gif}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=MonoSLAM system implementation&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:DavideMigliore|Davide Migliore]] ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The aim of this proposal is to investigate the different monocular-camera SLAM solutions proposed in the literature.&lt;br /&gt;
After in-depth bibliographic research, the work will focus on implementing one of these algorithms within an existing framework and, only for the thesis option, on investigating possible improvements. &lt;br /&gt;
&lt;br /&gt;
The algorithms of interest are based on [http://www-personal.acfr.usyd.edu.au/tbailey/software/slam_simulations.htm]:&lt;br /&gt;
*Extended Kalman Filter [http://www.doc.ic.ac.uk/~ajd/publications.html]&lt;br /&gt;
*Unscented Kalman Filter [http://www.cs.unc.edu/~welch/kalman/media/pdf/Julier1997_SPIE_KF.pdf]&lt;br /&gt;
*FastSLAM [http://robots.stanford.edu/papers.html]&lt;br /&gt;
*GraphSLAM [http://mi.eng.cam.ac.uk/~ee231/]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab/C++&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=KC_jc_third.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Soft Computing ====--&amp;gt;&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Master_Level_Course_Projects&amp;diff=4486</id>
		<title>Master Level Course Projects</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Master_Level_Course_Projects&amp;diff=4486"/>
				<updated>2008-10-17T10:03:49Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: /* Computer Vision and Image Analysis */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you can find a list of project proposals for the courses of &amp;quot;Laboratorio di Intelligenza Artificiale e Robotica&amp;quot; (5 CFU for each student) and &amp;quot;Soft Computing&amp;quot; (1 CFU for each student)&lt;br /&gt;
&lt;br /&gt;
==== Affective Computing ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective VideoGames&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive video game (car game, shoot 'em up, strategy game, ...) able to adapt its behaviour in order to maximize your enjoyment. The game will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system will be able to adjust some parameters (e.g. the difficulty of car game circuits, opponents' strength, ...) in order to keep your engagement constant: &amp;quot;In your flow zone!&amp;quot;. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the game (it is possible to start from an available open source game)&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the playing experience.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the game reacts to the user's emotional state by changing its behaviour.  &lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with biological-data acquisition tools and videogame design. &lt;br /&gt;
&lt;br /&gt;
Each project consists of the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective recognition in multimedia contexts&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive multimedia application (advertisement, e-learning, recommendation system) able to capture your emotional state (interest, excitement, anger, joy) while watching images, listening to sounds, etc. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to give feedback on the quality of multimedia content (e.g. the goodness of an advertisement, the enjoyment of a movie, ...). &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the multimedia application.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the multimedia experience.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the multimedia application will provide content according to your enjoyment.  &lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with biological-data acquisition tools and multimedia application design. &lt;br /&gt;
&lt;br /&gt;
The project consists of the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=MultimediaAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective robotics&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop a rehabilitation robotic game able to capture your emotional state (interest, excitement, anger, joy, stress) while interacting with the robot. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to adapt the therapy (executed by the game) according to the patient's needs. We believe the quality of the therapy is related to the subject's emotional state. The long-term goal is to keep the user in a specific emotional state in order to maximize the therapy's efficacy. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the robotic game on the available robot.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the interaction with the robot.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the therapy will be adapted to the patient's needs.  &lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with biological-data acquisition tools, robots and videogame design. &lt;br /&gt;
&lt;br /&gt;
The project consists of the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=SimoAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Driving companions&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an application able to capture your emotional state (stress, attention level, ...) while driving standard cars. The application will measure the driver's stress level by analyzing his biological signals, which mirror his physiological state, and could be used to give feedback to the driver in dangerous situations.&lt;br /&gt;
Project phases: &lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors while driving in different conditions (city, highway, country, ...)&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the car will give audio/visual feedback to the driver, letting him know his physiological state.&lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with biological-data acquisition tools and in-car data collection. &lt;br /&gt;
&lt;br /&gt;
The project consists of the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=CarAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Emotion from interaction&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to detect emotional states, such as stress or boredom, from the interaction with the computer via mouse and keyboard ([http://airwiki.elet.polimi.it/mediawiki/index.php/Emotion_from_Interaction Emotion from Interaction]). A library acquiring data from these devices has already been developed. Data have to be acquired in different situations and analyzed with neural networks or other classification tools already implemented.&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Agents, Multiagent Systems, Agencies ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== BioSignal Analysis ====&lt;br /&gt;
&lt;br /&gt;
===== Brain-Computer Interface =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Development of an existing genetic algorithm for ERP-based BCIs&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=Different [http://en.wikipedia.org/wiki/Event-related_potential event-related potentials] (ERPs) are used in [[Brain-Computer Interface|BCIs]], e.g., [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300] and error potentials.&lt;br /&gt;
A [http://en.wikipedia.org/wiki/Genetic_algorithm genetic algorithm] (GA) for ERP feature extraction has been developed at the Airlab.  The GA has been proven to work, but there are different directions that can be explored to further develop this algorithm and expand its field of application.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, Matlab&lt;br /&gt;
:Good programming skills required&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
:B. Dal Seno, M. Matteucci, L. Mainardi. ''A Genetic Algorithm for Automatic Feature Extraction in P300 Detection'' [http://ieeexplore.ieee.org/search/srchabstract.jsp?arnumber=4634243&amp;amp;isnumber=4633757&amp;amp;punumber=4625775]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=Ga-scheme.png}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Driving an autonomous wheelchair with a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=This project pulls together different Airlab projects with the aim to drive an autonomous wheelchair ([[LURCH - The autonomous wheelchair|LURCH]]) with a [[Brain-Computer Interface|BCI]], through the development of key software modules.  Depending on the effort the student is willing to put into it, the project can grow to a full experimental thesis.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, C, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
:Linux&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: R. Blatt et al. ''Brain Control of a Smart Wheelchair'' [http://www.booksonline.iospress.com/Content/View.aspx?piid=9401]&lt;br /&gt;
|start=November 2008&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=LURCH_wheelchair.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Online automatic tuning of the number of repetitions in a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=In a [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300]-based [[Brain-Computer_Interface|BCI]], (visual) stimuli are presented to the user, and the intention of the user is recognized when a P300 potential is detected in response to the desired stimulus.  In order to improve accuracy, many stimulation rounds are usually performed before making a decision.  The exact number of repetitions depends on the user and the goodness of the classifier, but it is usually fixed a priori.  The aim of this project is to adapt the number of repetitions to changing conditions, so as to achieve the maximum accuracy in the minimum time.&lt;br /&gt;
Depending on the effort the student is willing to put into it, the project can grow to a full experimental thesis.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: E. Donchin, K.M. Spencer, R. Wijesinghe. ''The Mental Prosthesis: Assessing the Speed of a P300-Based Brain-Computer Interface'' [http://www.cs.cmu.edu/~tanja/BCI/P300Speed_2000.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=B_p300_speller.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reproduction of an algorithm for the recognition of error potentials&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=Error potentials (ErrPs) are [http://en.wikipedia.org/wiki/Event-related_potential event-related potentials] present in the EEG (electroencephalogram) when a subject makes a mistake or when the machine a subject is interacting with works in an unexpected way.  They could be used in the [[Brain-Computer Interface|BCI]] field to improve the performance of a BCI by automatically detecting classification errors.&lt;br /&gt;
The project aims at reproducing algorithms for ErrP detection from the literature.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
:P.W. Ferrez, J. Millán. ''You Are Wrong! Automatic Detection of Interaction Errors from Brain Waves'' [ftp://ftp.idiap.ch/pub/reports/2005/ferrez_2005_ijcai.pdf]&lt;br /&gt;
:G. Schalk et al. ''EEG-based communication: presence of an error potential'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;issn=13882457&amp;amp;volume=111&amp;amp;issue=12&amp;amp;firstpage=2138&amp;amp;form=html]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-15&lt;br /&gt;
|image=Bci_arch.png}}&lt;br /&gt;
&lt;br /&gt;
==== Computer Vision and Image Analysis ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Environment Monitoring&lt;br /&gt;
|tutor=Matteo Matteucci ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), Davide Migliore ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The goal of this project is to develop a video surveillance system to track in 3D vehicles or people. &lt;br /&gt;
The idea is to use one or more calibrated cameras to estimate the positions and trajectories of the moving objects in the scene. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library&lt;br /&gt;
* Linux o.s.&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
* Probabilistic robotics/IMAD&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by extending the algorithm to generic outdoor environments.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=10-15&lt;br /&gt;
|image=Danch4.png &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Visual Merchandising&lt;br /&gt;
|tutor=Matteo Matteucci ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), Davide Migliore ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=The goal of this project is to develop algorithms to count the number of products on the shelves of a market.&lt;br /&gt;
The idea is to use a calibrated camera to recognize the shelves, estimate the scale and improve the image quality. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
|start= As soon as possible&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=VisualM.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Analysis of patch recognition algorithms&lt;br /&gt;
|tutor=Matteo Matteucci ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), Davide Migliore ([mailto:migliore%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=Extracting distinctive features from images is very important in computer vision applications.&lt;br /&gt;
Such features are used in algorithms for tasks like matching different views of an object or scene (e.g. for stereo vision) and object recognition.&lt;br /&gt;
The aim of this work is to integrate into an existing framework the solutions proposed in the literature.&lt;br /&gt;
&lt;br /&gt;
Skills&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
*Oxford website [http://www.robots.ox.ac.uk/~vgg/research/affine/index.html]&lt;br /&gt;
*Hess website [http://web.engr.oregonstate.edu/~hess/index.html]&lt;br /&gt;
*Feature FAST [http://mi.eng.cam.ac.uk/~er258/work/fast.html]&lt;br /&gt;
&lt;br /&gt;
|start= Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=Object.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== E-Science ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Machine Learning ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reinforcement Learning Competition&lt;br /&gt;
|tutor=Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=This project has the goal of participating in (and possibly winning ;)) the 2009 Reinforcement Learning competition. To get an idea of what participating in such a competition means, you can have a look at the website of the [http://rl-competition.org/content/view/51/79/ 2008 RL competition].&lt;br /&gt;
The problems that will be proposed are still unknown. As soon as the domains are published, the work will start by analyzing their main characteristics; then we will identify which RL algorithms are best suited to solving such problems. After an implementation phase, the project will require a long experimental period to tune the parameters of the learning algorithms in order to improve performance as much as possible.&lt;br /&gt;
|start=January, 2009&lt;br /&gt;
|number=2-4&lt;br /&gt;
|cfu=10-20&lt;br /&gt;
|image=keepaway.gif}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Learning API for TORCS&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, in which competitors were asked to develop a controller using their preferred machine learning techniques. The goal of this project is to extend the existing C++ API (available [http://cig.dei.polimi.it/ here]) to simplify the development of controllers using a learning framework.&lt;br /&gt;
Such an extension can be partially developed by porting an existing Java API for TORCS that already provides many functionalities for machine learning approaches.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=TORCS.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= EyeBot&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it), Alessandro Giusti (giusti-AT-elet-DOT-polimi-DOT-it), and Pierluigi Taddei (taddei-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, in which competitors were asked to develop a controller using their preferred machine learning techniques. So far, the controllers developed for TORCS have used as input only information extracted directly from the state of the game. The goal of this project is to extend the existing controller API (see [http://cig.dei.polimi.it/ here]) to use visual information (e.g. the screenshots of the game) as input to the controllers. A successful project will include both the development of the API and some basic image preprocessing to extract information from the images.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 20&lt;br /&gt;
|image=TORCS2.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= SmarTrack&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The generation of customized game content for each player is an attractive direction to improve the game experience in the next-generation computer games. In this scenario, Machine Learning could play an important role to provide automatically such customized game content.&lt;br /&gt;
The goal of this project is to apply machine learning techniques for the generation of customized tracks in&lt;br /&gt;
[http://torcs.sourceforge.net/ TORCS], a state-of-the-art open source racing simulator. The project includes different activities: the automatic generation of tracks, the selection of relevant features to characterize a track, and the analysis of an interest measure. &lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 20&lt;br /&gt;
|image=TORCS3.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= TORCS competition&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, in which competitors were asked to develop a controller using their preferred machine learning techniques.&lt;br /&gt;
The goal of this project is to apply any machine learning technique to develop a successful controller following the competition rules (available [http://cig.dei.polimi.it/?page_id=67 here]).&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 &lt;br /&gt;
|cfu=5&lt;br /&gt;
|image=TORCS.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Ontologies and Semantic Web ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== Philosophy of Artificial Intelligence ====--&amp;gt;&lt;br /&gt;
==== Robotics ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robot games&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive game with robots using commercial devices such as the Wii Remote (see the [http://airwiki.elet.polimi.it/mediawiki/index.php/Robogames Robogames page]).&lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Design and implementation of the game on one of the available robots&lt;br /&gt;
* Design of the game and a new suitable robot&lt;br /&gt;
* Implementation/setting of a suitable robot&lt;br /&gt;
* Evaluation of the game with users (in collaboration with [http://www.elet.polimi.it/people/garzotto Franca Garzotto])&lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with real mobile robots and real interaction devices.&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by producing a new game and robot.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-12.5&lt;br /&gt;
|image=Robowii_robot.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robocup: soccer robots&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it), Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to finalize the team of robots that will participate in the RoboCup world championship in Graz next summer (see the [http://www.robocup.org RoboCup page] and the [http://robocup.elet.polimi.it MRT Team page]).&lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Implementation of mechanical and electronic parts of the robots for ball management and kicking&lt;br /&gt;
* Design of robot behaviors (fuzzy systems)&lt;br /&gt;
* Coordination of robots&lt;br /&gt;
* New sensors&lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with real mobile robots. Participation in the championship is a unique experience (2000 people, with 800 robots playing all sorts of games...)&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by tackling different problems in depth.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-12.5&lt;br /&gt;
|image=RIeRO.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Calibration of IMU-camera system&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:DavideMigliore|Davide Migliore]]&lt;br /&gt;
|description=This work addresses the problem of calibrating a system composed of an XSens &lt;br /&gt;
Inertial Measurement Unit and a Fire-i camera. The project will focus on &lt;br /&gt;
estimating both the unknown rotation between the two devices and the &lt;br /&gt;
extrinsic/intrinsic parameters of the camera. This calibration makes it possible to use the &lt;br /&gt;
system for SLAM or robotics applications, such as a wearable device for autonomous &lt;br /&gt;
navigation or augmented reality. &lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab/C++&lt;br /&gt;
&lt;br /&gt;
;Links&lt;br /&gt;
:Matlab Toolbox for mutual calibration [http://www.deec.uc.pt/~jlobo/InerVis_WebIndex/InerVis_Toolbox.html]&lt;br /&gt;
:List of publications [http://www.deec.uc.pt/~jlobo/InerVis_WebIndex/InerVis_Pubs.php]&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=Imu_cam_big_sphere.gif}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=MonoSLAM system implementation&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:DavideMigliore|Davide Migliore]]&lt;br /&gt;
|description=The aim of this proposal is to investigate the different monocular-camera SLAM solutions proposed in the literature.&lt;br /&gt;
After an in-depth bibliographic study, the work will focus on implementing one of these algorithms within an existing framework and, only for the thesis option, on investigating possible improvements. &lt;br /&gt;
&lt;br /&gt;
The algorithms of interest are based on [http://www-personal.acfr.usyd.edu.au/tbailey/software/slam_simulations.htm]:&lt;br /&gt;
*Extended Kalman Filter [http://www.doc.ic.ac.uk/~ajd/publications.html]&lt;br /&gt;
*Unscented Kalman Filter [http://www.cs.unc.edu/~welch/kalman/media/pdf/Julier1997_SPIE_KF.pdf]&lt;br /&gt;
*FastSLAM [http://robots.stanford.edu/papers.html]&lt;br /&gt;
*GraphSLAM [http://mi.eng.cam.ac.uk/~ee231/]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab/C++&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=KC_jc_third.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Soft Computing ====--&amp;gt;&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=File:Object.jpg&amp;diff=4485</id>
		<title>File:Object.jpg</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=File:Object.jpg&amp;diff=4485"/>
				<updated>2008-10-17T10:03:45Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Master_Level_Course_Projects&amp;diff=4484</id>
		<title>Master Level Course Projects</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Master_Level_Course_Projects&amp;diff=4484"/>
				<updated>2008-10-17T09:59:53Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: /* Computer Vision and Image Analysis */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you can find a list of project proposals for the courses &amp;quot;Laboratorio di Intelligenza Artificiale e Robotica&amp;quot; (5 CFU for each student) and &amp;quot;Soft Computing&amp;quot; (1 CFU for each student).&lt;br /&gt;
&lt;br /&gt;
==== Affective Computing ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective VideoGames&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive video game (car game, shoot 'em up, strategy game, ...) able to adapt its behaviour in order to maximize your enjoyment. The game will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system will be able to adjust some parameters (e.g. the difficulty of car game circuits, the opponents' strength, ...) in order to keep your engagement constant: &amp;quot;In your flow zone!&amp;quot; &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the game (it is possible to start from an available open source game)&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the playing experience.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the game reacts to the user's emotional state by changing its behaviour. &lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with biological-data acquisition tools and videogame design. &lt;br /&gt;
&lt;br /&gt;
Each project consists of the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective recognition in multimedia contexts&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive multimedia application (advertisement, e-learning, recommendation system) able to capture your emotional state (interest, excitement, anger, joy) while you are watching images, listening to sounds, etc. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to give feedback on the quality of multimedia content (e.g. the goodness of an advertisement, the enjoyment of a movie, ...). &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the multimedia application.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the multimedia experience.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the multimedia application will provide contents according to your enjoyment. &lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with biological-data acquisition tools and multimedia application design. &lt;br /&gt;
&lt;br /&gt;
The project consists of the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=MultimediaAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective robotics&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop a rehabilitation robotic game able to capture your emotional state (interest, excitement, anger, joy, stress) while you are interacting with the robot. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to adapt the therapy (executed by the game) according to the patient's needs. We believe the quality of the therapy is related to the subject's emotional state. The long-term goal is to keep the user in a specific emotional state in order to maximize the therapy's efficacy. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the robotic game on the available robot.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the interaction with the robot.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the therapy will be adapted to the patient's needs. &lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with biological-data acquisition tools, robots, and videogame design. &lt;br /&gt;
&lt;br /&gt;
The project consists of the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=SimoAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Driving companions&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an application able to capture your emotional state (stress, attention level, ...) while driving standard cars. The application will measure the driver's stress level by analyzing his or her biological signals, which mirror the physiological state, and could be used to give feedback to the driver in dangerous situations.&lt;br /&gt;
Project phases: &lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors while driving in different conditions (city, highway, country, ...)&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the car will give audio/visual feedback to the driver, letting them know their physiological state.&lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with biological-data acquisition tools, robots, and videogame design. &lt;br /&gt;
&lt;br /&gt;
The project consists of the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=CarAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Emotion from interaction&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to detect emotional states, such as stress or boredom, from the interaction with the computer via mouse and keyboard ([http://airwiki.elet.polimi.it/mediawiki/index.php/Emotion_from_Interaction Emotion from Interaction]). A library that acquires data from these devices has already been developed. Data have to be acquired in different situations and analyzed with neural networks or other classification tools already implemented.&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Agents, Multiagent Systems, Agencies ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== BioSignal Analysis ====&lt;br /&gt;
&lt;br /&gt;
===== Brain-Computer Interface =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Development of an existing genetic algorithm for ERP-based BCIs&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=Different [http://en.wikipedia.org/wiki/Event-related_potential event-related potentials] (ERPs) are used in [[Brain-Computer Interface|BCIs]], e.g., [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300] and error potentials.&lt;br /&gt;
A [http://en.wikipedia.org/wiki/Genetic_algorithm genetic algorithm] (GA) for ERP feature extraction has been developed at the Airlab.  The GA has been proven to work, but there are different directions that can be explored to further develop this algorithm and expand its field of application.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, Matlab&lt;br /&gt;
:Good programming skill required&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
:B. Dal Seno, M. Matteucci, L. Mainardi. ''A Genetic Algorithm for Automatic Feature Extraction in P300 Detection'' [http://ieeexplore.ieee.org/search/srchabstract.jsp?arnumber=4634243&amp;amp;isnumber=4633757&amp;amp;punumber=4625775]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=Ga-scheme.png}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Driving an autonomous wheelchair with a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=This project pulls together different Airlab projects with the aim of driving an autonomous wheelchair ([[LURCH - The autonomous wheelchair|LURCH]]) with a [[Brain-Computer Interface|BCI]], through the development of key software modules.  Depending on the effort the student is willing to put into it, the project can grow into a full experimental thesis.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, C, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
:Linux&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: R. Blatt et al. ''Brain Control of a Smart Wheelchair'' [http://www.booksonline.iospress.com/Content/View.aspx?piid=9401]&lt;br /&gt;
|start=November 2008&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=LURCH_wheelchair.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Online automatic tuning of the number of repetitions in a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=In a [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300]-based [[Brain-Computer_Interface|BCI]], (visual) stimuli are presented to the user, and the intention of the user is recognized when a P300 potential is detected in response to the desired stimulus.  In order to improve accuracy, many stimulation rounds are usually performed before making a decision.  The exact number of repetitions depends on the user and on the goodness of the classifier, but it is usually fixed a priori.  The aim of this project is to adapt the number of repetitions to changing conditions, so as to achieve maximum accuracy in minimum time.&lt;br /&gt;
Depending on the effort the student is willing to put into it, the project can grow into a full experimental thesis.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: E. Donchin, K.M. Spencer, R. Wijesinghe. ''The Mental Prosthesis: Assessing the Speed of a P300-Based Brain-Computer Interface'' [http://www.cs.cmu.edu/~tanja/BCI/P300Speed_2000.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=B_p300_speller.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reproduction of an algorithm for the recognition of error potentials&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=Error potentials (ErrPs) are [http://en.wikipedia.org/wiki/Event-related_potential event-related potentials] present in the EEG (electroencephalogram) when a subject makes a mistake or when the machine a subject is interacting with behaves in an unexpected way.  They could be used in the [[Brain-Computer Interface|BCI]] field to improve the performance of a BCI by automatically detecting classification errors.&lt;br /&gt;
The project aims at reproducing algorithms for ErrP detection from the literature.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
:P.W. Ferrez, J. Millán. ''You Are Wrong! Automatic Detection of Interaction Errors from Brain Waves'' [ftp://ftp.idiap.ch/pub/reports/2005/ferrez_2005_ijcai.pdf]&lt;br /&gt;
:G. Schalk et al. ''EEG-based communication: presence of an error potential'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;issn=13882457&amp;amp;volume=111&amp;amp;issue=12&amp;amp;firstpage=2138&amp;amp;form=html]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-15&lt;br /&gt;
|image=Bci_arch.png}}&lt;br /&gt;
&lt;br /&gt;
==== Computer Vision and Image Analysis ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Environment Monitoring&lt;br /&gt;
|tutor=Matteo Matteucci (matteucci-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this project is to develop a video surveillance system to track vehicles or people in 3D. &lt;br /&gt;
The idea is to use one or more calibrated cameras to estimate the positions and trajectories of the moving objects in the scene. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library&lt;br /&gt;
* Linux OS&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
* Probabilistic robotics/IMAD&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by extending the algorithm to generic outdoor environments.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=10-15&lt;br /&gt;
|image=Danch4.png &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Visual Merchandising&lt;br /&gt;
|tutor=Matteo Matteucci (matteucci-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this project is to develop algorithms to count the number of products on the shelves of a market.&lt;br /&gt;
The idea is to use a calibrated camera to recognize the shelves, estimate the scale and improve the image quality. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
|start= As soon as possible&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=VisualM.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Analysis of patch recognition algorithms&lt;br /&gt;
|tutor=Matteo Matteucci (matteucci-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=Extracting distinctive features from images is very important in computer vision applications.&lt;br /&gt;
Such features can be used in algorithms for tasks like matching different views of an object or scene (e.g. for stereo vision) and object recognition.&lt;br /&gt;
The aim of this work is to integrate into an existing framework the solutions proposed in the literature.&lt;br /&gt;
&lt;br /&gt;
Skills&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
References:&lt;br /&gt;
*Oxford website [http://www.robots.ox.ac.uk/~vgg/research/affine/index.html]&lt;br /&gt;
*Hess website [http://web.engr.oregonstate.edu/~hess/index.html]&lt;br /&gt;
*Feature FAST [http://mi.eng.cam.ac.uk/~er258/work/fast.html]&lt;br /&gt;
&lt;br /&gt;
|start= &lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== E-Science ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Machine Learning ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reinforcement Learning Competition&lt;br /&gt;
|tutor=Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=This project has the goal of participating in (and possibly winning ;)) the 2009 Reinforcement Learning Competition. To get an idea of what participating in such a competition means, you can have a look at the website of the [http://rl-competition.org/content/view/51/79/ 2008 RL competition].&lt;br /&gt;
The problems that will be proposed are still unknown. As soon as the domains are published, the work will start by analyzing their main characteristics; we will then identify which RL algorithms are best suited for solving such problems. After an implementation phase, the project will require a long experimental period to tune the parameters of the learning algorithms in order to improve performance as much as possible.&lt;br /&gt;
|start=January, 2009&lt;br /&gt;
|number=2-4&lt;br /&gt;
|cfu=10-20&lt;br /&gt;
|image=keepaway.gif}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Learning API for TORCS&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, in which competitors were asked to develop a controller using their preferred machine learning techniques. The goal of this project is to extend the existing C++ API (available [http://cig.dei.polimi.it/ here]) to simplify the development of controllers using a learning framework.&lt;br /&gt;
Such an extension can be partially developed by porting an existing Java API for TORCS that already provides many functionalities for machine learning approaches.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=TORCS.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= EyeBot&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it), Alessandro Giusti (giusti-AT-elet-DOT-polimi-DOT-it), and Pierluigi Taddei (taddei-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, in which competitors were asked to develop a controller using their preferred machine learning techniques. So far, the controllers developed for TORCS have used as input only information extracted directly from the state of the game. The goal of this project is to extend the existing controller API (see [http://cig.dei.polimi.it/ here]) to use visual information (e.g. screenshots of the game) as input to the controllers. A successful project will include both the development of the API and some basic image preprocessing to extract information from the images.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 20&lt;br /&gt;
|image=TORCS2.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= SmarTrack&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The generation of customized game content for each player is an attractive direction for improving the game experience in next-generation computer games. In this scenario, Machine Learning could play an important role in automatically providing such customized game content.&lt;br /&gt;
The goal of this project is to apply machine learning techniques for the generation of customized tracks in&lt;br /&gt;
[http://torcs.sourceforge.net/ TORCS], a state-of-the-art open source racing simulator. The project includes different activities: the automatic generation of tracks, the selection of relevant features to characterize a track, and the analysis of an interest measure. &lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 20&lt;br /&gt;
|image=TORCS3.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= TORCS competition&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, in which competitors were asked to develop a controller using their preferred machine learning techniques.&lt;br /&gt;
The goal of this project is to apply any machine learning technique to develop a successful controller following the competition rules (available [http://cig.dei.polimi.it/?page_id=67 here]).&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 &lt;br /&gt;
|cfu=5&lt;br /&gt;
|image=TORCS.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Ontologies and Semantic Web ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== Philosophy of Artificial Intelligence ====--&amp;gt;&lt;br /&gt;
==== Robotics ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robot games&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive game with robots using commercial devices such as the Wii Remote (see the [http://airwiki.elet.polimi.it/mediawiki/index.php/Robogames Robogames page]).&lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Design and implementation of the game on one of the available robots&lt;br /&gt;
* Design of the game and a new suitable robot&lt;br /&gt;
* Implementation/setting of a suitable robot&lt;br /&gt;
* Evaluation of the game with users (in collaboration with [http://www.elet.polimi.it/people/garzotto Franca Garzotto])&lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with real mobile robots and real interaction devices.&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by producing a new game and robot.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-12.5&lt;br /&gt;
|image=Robowii_robot.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robocup: soccer robots&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it), Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to finalize the team of robots that will participate in the RoboCup world championship in Graz next summer (see the [http://www.robocup.org RoboCup page] and the [http://robocup.elet.polimi.it MRT Team page]).&lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Implementation of mechanical and electronic parts of the robots for ball management and kicking&lt;br /&gt;
* Design of robot behaviors (fuzzy systems)&lt;br /&gt;
* Coordination of robots&lt;br /&gt;
* New sensors&lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with real mobile robots. Participation in the championship is a unique experience (2000 people, with 800 robots playing all sorts of games...)&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by tackling different problems in depth.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-12.5&lt;br /&gt;
|image=RIeRO.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Calibration of IMU-camera system&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:DavideMigliore|Davide Migliore]]&lt;br /&gt;
|description=This work addresses the problem of calibrating a system composed of an XSens &lt;br /&gt;
Inertial Measurement Unit and a Fire-i camera. The project will focus on &lt;br /&gt;
estimating both the unknown rotation between the two devices and the &lt;br /&gt;
extrinsic/intrinsic parameters of the camera. This calibration makes it possible to use the &lt;br /&gt;
system for SLAM or robotics applications, such as a wearable device for autonomous &lt;br /&gt;
navigation or augmented reality. &lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab/C++&lt;br /&gt;
&lt;br /&gt;
;Links&lt;br /&gt;
:Matlab Toolbox for mutual calibration [http://www.deec.uc.pt/~jlobo/InerVis_WebIndex/InerVis_Toolbox.html]&lt;br /&gt;
:List of publications [http://www.deec.uc.pt/~jlobo/InerVis_WebIndex/InerVis_Pubs.php]&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=Imu_cam_big_sphere.gif}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=MonoSLAM system implementation&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:DavideMigliore|Davide Migliore]]&lt;br /&gt;
|description=The aim of this proposal is to investigate the different monocular-camera SLAM solutions proposed in the literature.&lt;br /&gt;
After an in-depth bibliographic survey, the work will focus on implementing one of these algorithms within an existing framework and, for the thesis option only, on investigating possible improvements.&lt;br /&gt;
&lt;br /&gt;
The algorithms of interest are based on [http://www-personal.acfr.usyd.edu.au/tbailey/software/slam_simulations.htm]:&lt;br /&gt;
*Extended Kalman Filter [http://www.doc.ic.ac.uk/~ajd/publications.html]&lt;br /&gt;
*Unscented Kalman Filter [http://www.cs.unc.edu/~welch/kalman/media/pdf/Julier1997_SPIE_KF.pdf]&lt;br /&gt;
*FastSLAM [http://robots.stanford.edu/papers.html]&lt;br /&gt;
*GraphSLAM [http://mi.eng.cam.ac.uk/~ee231/]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab/C++&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=KC_jc_third.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Soft Computing ====--&amp;gt;&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Master_Level_Course_Projects&amp;diff=4483</id>
		<title>Master Level Course Projects</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Master_Level_Course_Projects&amp;diff=4483"/>
				<updated>2008-10-17T09:49:32Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: /* Robotics */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you can find a list of project proposals for the courses of &amp;quot;Laboratorio di Intelligenza Artificiale e Robotica&amp;quot; (5 CFU for each student) and &amp;quot;Soft Computing&amp;quot; (1 CFU for each student)&lt;br /&gt;
&lt;br /&gt;
==== Affective Computing ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective VideoGames&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive video game (car game, shoot-'em-up, strategy game, ...) able to adapt its behaviour in order to maximize your enjoyment. The game will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system will be able to adjust some parameters (e.g., difficulty of car-game circuits, opponents' strength, ...) in order to keep your engagement constant: &amp;quot;In your flow zone!&amp;quot;&lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the game (it is possible to start from an available open-source game)&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions.&lt;br /&gt;
* Data acquisition using biological sensors during the playing experience.&lt;br /&gt;
* Off-line classification of the data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition.&lt;br /&gt;
* Closed-loop control: the game reacts to the user's emotional state by changing its behaviour.&lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with biological-data acquisition tools and video game design.&lt;br /&gt;
&lt;br /&gt;
Each project consists of the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective recognition in multimedia contexts&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive multimedia application (advertisement, e-learning, recommendation system) able to capture your emotional state (interest, excitement, anger, joy) while you watch images, listen to sounds, etc. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to give feedback on the quality of multimedia content (e.g., effectiveness of an advertisement, enjoyment of a movie).&lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the multimedia application.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions.&lt;br /&gt;
* Data acquisition using biological sensors during the multimedia experience.&lt;br /&gt;
* Off-line classification of the data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition.&lt;br /&gt;
* Closed-loop control: the multimedia application will provide contents according to your enjoyment.&lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with biological-data acquisition tools and multimedia application design.&lt;br /&gt;
&lt;br /&gt;
The project consists of the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=MultimediaAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective robotics&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop a rehabilitation robotic game able to capture your emotional state (interest, excitement, anger, joy, stress) while you interact with the robot. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to adapt the therapy (carried out through the game) according to the patient's needs. We believe the quality of the therapy is related to the subject's emotional state. The long-term goal is to keep the user in a specific emotional state in order to maximize the efficacy of the therapy.&lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the robotic game on the available robot.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions.&lt;br /&gt;
* Data acquisition using biological sensors during the interaction with the robot.&lt;br /&gt;
* Off-line classification of the data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition.&lt;br /&gt;
* Closed-loop control: the therapy will be adapted to the patient's needs.&lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with biological-data acquisition tools, robots and video game design.&lt;br /&gt;
&lt;br /&gt;
The project consists of the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=SimoAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Driving companions&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an application able to capture your emotional state (stress, attention level, ...) while driving a standard car. The application will measure the driver's stress level by analyzing his or her biological signals, which mirror the physiological state, and could be used to give feedback to the driver in dangerous situations.&lt;br /&gt;
Project phases: &lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions.&lt;br /&gt;
* Data acquisition using biological sensors while driving in different conditions (city, highway, country, ...).&lt;br /&gt;
* Off-line classification of the data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition.&lt;br /&gt;
* Closed-loop control: the car will give audio/visual feedback to the user, letting them know their physiological state.&lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with biological-data acquisition tools, robots and video game design.&lt;br /&gt;
&lt;br /&gt;
The project consists of the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=CarAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Emotion from interaction&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to detect emotional states, such as stress or boredom, from the interaction with the computer via mouse and keyboard ([http://airwiki.elet.polimi.it/mediawiki/index.php/Emotion_from_Interaction Emotion from Interaction]). A library acquiring data from these devices has already been developed. Data have to be acquired in different situations and analyzed with neural networks or other classification tools already implemented.&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Agents, Multiagent Systems, Agencies ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== BioSignal Analysis ====&lt;br /&gt;
&lt;br /&gt;
===== Brain-Computer Interface =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Development of an existing genetic algorithm for ERP-based BCIs&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=Different [http://en.wikipedia.org/wiki/Event-related_potential event-related potentials] (ERPs) are used in [[Brain-Computer Interface|BCIs]], e.g., [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300] and error potentials.&lt;br /&gt;
A [http://en.wikipedia.org/wiki/Genetic_algorithm genetic algorithm] (GA) for ERP feature extraction has been developed at the Airlab.  The GA has been shown to work, but there are different directions that can be explored to further develop this algorithm and expand its field of application.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, Matlab&lt;br /&gt;
:Good programming skills required&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
:B. Dal Seno, M. Matteucci, L. Mainardi. ''A Genetic Algorithm for Automatic Feature Extraction in P300 Detection'' [http://ieeexplore.ieee.org/search/srchabstract.jsp?arnumber=4634243&amp;amp;isnumber=4633757&amp;amp;punumber=4625775]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=Ga-scheme.png}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Driving an autonomous wheelchair with a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=This project pulls together different Airlab projects with the aim to drive an autonomous wheelchair ([[LURCH - The autonomous wheelchair|LURCH]]) with a [[Brain-Computer Interface|BCI]], through the development of key software modules.  Depending on the effort the student is willing to put into it, the project can grow to a full experimental thesis.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, C, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
:Linux&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: R. Blatt et al. ''Brain Control of a Smart Wheelchair'' [http://www.booksonline.iospress.com/Content/View.aspx?piid=9401]&lt;br /&gt;
|start=November 2008&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=LURCH_wheelchair.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Online automatic tuning of the number of repetitions in a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=In a [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300]-based [[Brain-Computer_Interface|BCI]], (visual) stimuli are presented to the user, and the user's intention is recognized when a P300 potential is detected in response to the desired stimulus.  In order to improve accuracy, many stimulation rounds are usually performed before making a decision.  The exact number of repetitions depends on the user and on the goodness of the classifier, but it is usually fixed a priori.  The aim of this project is to adapt the number of repetitions to changing conditions, so as to achieve the maximum accuracy in the minimum time.&lt;br /&gt;
Depending on the effort the student is willing to put into it, the project can grow to a full experimental thesis.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: E. Donchin, K.M. Spencer, R. Wijesinghe. ''The Mental Prosthesis: Assessing the Speed of a P300-Based Brain-Computer Interface'' [http://www.cs.cmu.edu/~tanja/BCI/P300Speed_2000.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=B_p300_speller.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reproduction of an algorithm for the recognition of error potentials&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=Error potentials (ErrPs) are [http://en.wikipedia.org/wiki/Event-related_potential event-related potentials] present in the EEG (electroencephalogram) when a subject makes a mistake or when the machine a subject is interacting with behaves in an unexpected way.  They could be used in the [[Brain-Computer Interface|BCI]] field to improve the performance of a BCI by automatically detecting classification errors.&lt;br /&gt;
The project aims at reproducing algorithms for ErrP detection from the literature.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
:P.W. Ferrez, J. Millán. ''You Are Wrong! Automatic Detection of Interaction Errors from Brain Waves'' [ftp://ftp.idiap.ch/pub/reports/2005/ferrez_2005_ijcai.pdf]&lt;br /&gt;
:G. Schalk et al. ''EEG-based communication: presence of an error potential'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;issn=13882457&amp;amp;volume=111&amp;amp;issue=12&amp;amp;firstpage=2138&amp;amp;form=html]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-15&lt;br /&gt;
|image=Bci_arch.png}}&lt;br /&gt;
&lt;br /&gt;
==== Computer Vision and Image Analysis ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Environment Monitoring&lt;br /&gt;
|tutor=Matteo Matteucci (matteucci-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this project is to develop a video surveillance system to track vehicles or people in 3D.&lt;br /&gt;
The idea is to use one or more calibrated cameras to estimate the positions and trajectories of the moving objects in the scene.&lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library&lt;br /&gt;
* Linux OS&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
* Probabilistic robotics/IMAD&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by extending the algorithm to generic outdoor environments.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=10-15&lt;br /&gt;
|image=Danch4.png &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Visual Merchandising&lt;br /&gt;
|tutor=Matteo Matteucci (matteucci-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this project is to develop algorithms to count the number of products on the shelves of a market.&lt;br /&gt;
The idea is to use a calibrated camera to recognize the shelves, estimate the scale and improve the image quality. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
|start= As soon as possible&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=VisualM.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== E-Science ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Machine Learning ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reinforcement Learning Competition&lt;br /&gt;
|tutor=Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=This project has the goal of participating in (and possibly winning ;)) the 2009 Reinforcement Learning Competition. To get an idea of what participating in such a competition means, you can have a look at the website of the [http://rl-competition.org/content/view/51/79/ 2008 RL competition].&lt;br /&gt;
The problems that will be proposed are still unknown. As soon as the domains are published, the work will start by analyzing their main characteristics; then we will identify which RL algorithms are best suited to solving such problems. After an implementation phase, the project will require a long experimental period to tune the parameters of the learning algorithms in order to improve the performance as much as possible.&lt;br /&gt;
|start=January, 2009&lt;br /&gt;
|number=2-4&lt;br /&gt;
|cfu=10-20&lt;br /&gt;
|image=keepaway.gif}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Learning API for TORCS&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, in which competitors were asked to develop a controller using their preferred machine learning techniques. The goal of this project is to extend the existing C++ API (available [http://cig.dei.polimi.it/ here]) to simplify the development of controllers using a learning framework.&lt;br /&gt;
Such an extension can be partially developed by porting an existing Java API for TORCS that already provides many functionalities for machine learning approaches.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=TORCS.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= EyeBot&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it), Alessandro Giusti (giusti-AT-elet-DOT-polimi-DOT-it), and Pierluigi Taddei (taddei-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, in which competitors were asked to develop a controller using their preferred machine learning techniques. So far, the controllers developed for TORCS have used as input only information extracted directly from the state of the game. The goal of this project is to extend the existing controller API (see [http://cig.dei.polimi.it/ here]) to use visual information (e.g., the screenshots of the game) as input to the controllers. A successful project will include both the development of the API and some basic image preprocessing to extract information from the images.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 20&lt;br /&gt;
|image=TORCS2.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= SmarTrack&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The generation of customized game content for each player is an attractive direction for improving the game experience in next-generation computer games. In this scenario, Machine Learning could play an important role in automatically providing such customized game content.&lt;br /&gt;
The goal of this project is to apply machine learning techniques to the generation of customized tracks in&lt;br /&gt;
[http://torcs.sourceforge.net/ TORCS], a state-of-the-art open source racing simulator. The project includes different activities: the automatic generation of tracks, the selection of relevant features to characterize a track, and the analysis of an interest measure.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 20&lt;br /&gt;
|image=TORCS3.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= TORCS competition&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, in which competitors were asked to develop a controller using their preferred machine learning techniques.&lt;br /&gt;
The goal of this project is to apply any machine learning technique to develop a successful controller following the competition rules (available [http://cig.dei.polimi.it/?page_id=67 here]).&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 &lt;br /&gt;
|cfu=5&lt;br /&gt;
|image=TORCS.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Ontologies and Semantic Web ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== Philosophy of Artificial Intelligence ====--&amp;gt;&lt;br /&gt;
==== Robotics ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robot games&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive game with robots using commercial devices such as the Wii Remote (see the [http://airwiki.elet.polimi.it/mediawiki/index.php/Robogames Robogames page]).&lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Design and implementation of the game on one of the available robots&lt;br /&gt;
* Design of the game and a new suitable robot&lt;br /&gt;
* Implementation/setting of a suitable robot&lt;br /&gt;
* Evaluation of the game with users (in collaboration with [http://www.elet.polimi.it/people/garzotto Franca Garzotto])&lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with real mobile robots and real interaction devices.&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by producing a new game and robot.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-12.5&lt;br /&gt;
|image=Robowii_robot.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robocup: soccer robots&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it), Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to finalize the team of robots that will participate in the RoboCup world championship in Graz next summer (see the [http://www.robocup.org Robocup page] and the [http://robocup.elet.polimi.it MRT Team page]).&lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Implementation of the mechanical and electronic parts of the robots for ball handling and kicking&lt;br /&gt;
* Design of robot behaviors (fuzzy systems)&lt;br /&gt;
* Coordination of robots&lt;br /&gt;
* New sensors&lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with real mobile robots. Participation in the championship is a unique experience (2000 people, with 800 robots playing all sorts of games...).&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by tackling specific problems in depth.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-12.5&lt;br /&gt;
|image=RIeRO.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Calibration of IMU-camera system&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:DavideMigliore|Davide Migliore]]&lt;br /&gt;
|description=This work addresses the problem of calibrating a system composed of an XSens &lt;br /&gt;
Inertial Measurement Unit and a Fire-i camera. The project will focus on &lt;br /&gt;
estimating both the unknown rotation between the two devices and the &lt;br /&gt;
extrinsic/intrinsic parameters of the camera. This calibration allows the &lt;br /&gt;
system to be used in SLAM or other robotics applications, such as a wearable &lt;br /&gt;
device for autonomous navigation or augmented reality.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab/C++&lt;br /&gt;
&lt;br /&gt;
;Links&lt;br /&gt;
:Matlab Toolbox for mutual calibration [http://www.deec.uc.pt/~jlobo/InerVis_WebIndex/InerVis_Toolbox.html]&lt;br /&gt;
:List of publications [http://www.deec.uc.pt/~jlobo/InerVis_WebIndex/InerVis_Pubs.php]&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=Imu_cam_big_sphere.gif}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=MonoSLAM system implementation&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:DavideMigliore|Davide Migliore]]&lt;br /&gt;
|description=The aim of this proposal is to investigate the different monocular-camera SLAM solutions proposed in the literature.&lt;br /&gt;
After an in-depth bibliographic survey, the work will focus on implementing one of these algorithms within an existing framework and, for the thesis option only, on investigating possible improvements.&lt;br /&gt;
&lt;br /&gt;
The algorithms of interest are based on [http://www-personal.acfr.usyd.edu.au/tbailey/software/slam_simulations.htm]:&lt;br /&gt;
*Extended Kalman Filter [http://www.doc.ic.ac.uk/~ajd/publications.html]&lt;br /&gt;
*Unscented Kalman Filter [http://www.cs.unc.edu/~welch/kalman/media/pdf/Julier1997_SPIE_KF.pdf]&lt;br /&gt;
*FastSLAM [http://robots.stanford.edu/papers.html]&lt;br /&gt;
*GraphSLAM [http://mi.eng.cam.ac.uk/~ee231/]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab/C++&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=KC_jc_third.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Soft Computing ====--&amp;gt;&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Master_Level_Course_Projects&amp;diff=4482</id>
		<title>Master Level Course Projects</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Master_Level_Course_Projects&amp;diff=4482"/>
				<updated>2008-10-17T09:47:45Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: /* Robotics */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you can find a list of project proposals for the courses of &amp;quot;Laboratorio di Intelligenza Artificiale e Robotica&amp;quot; (5 CFU for each student) and &amp;quot;Soft Computing&amp;quot; (1 CFU for each student)&lt;br /&gt;
&lt;br /&gt;
==== Affective Computing ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective VideoGames&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive video game (car game, shoot-'em-up, strategy game, ...) able to adapt its behaviour in order to maximize your enjoyment. The game will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system will be able to adjust some parameters (e.g., difficulty of car-game circuits, opponents' strength, ...) in order to keep your engagement constant: &amp;quot;In your flow zone!&amp;quot;&lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the game (it is possible to start from an available open-source game)&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions.&lt;br /&gt;
* Data acquisition using biological sensors during the playing experience.&lt;br /&gt;
* Off-line classification of the data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition.&lt;br /&gt;
* Closed-loop control: the game reacts to the user's emotional state by changing its behaviour.&lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with biological-data acquisition tools and video game design.&lt;br /&gt;
&lt;br /&gt;
Each project consists of the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective recognition in multimedia contexts&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive multimedia application (advertisement, e-learning, recommendation system) able to capture your emotional state (interest, excitement, anger, joy) while you watch images, listen to sounds, etc. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to give feedback on the quality of multimedia content (e.g., effectiveness of an advertisement, enjoyment of a movie).&lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the multimedia application.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions.&lt;br /&gt;
* Data acquisition using biological sensors during the multimedia experience.&lt;br /&gt;
* Off-line classification of the data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition.&lt;br /&gt;
* Closed-loop control: the multimedia application will provide contents according to your enjoyment.&lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with biological-data acquisition tools and multimedia application design.&lt;br /&gt;
&lt;br /&gt;
The project consists of the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=MultimediaAffective.jpg}}&lt;br /&gt;
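The pipeline in the phases above (off-line training, then on-line classification of biological signals) can be sketched as a window-by-window loop. The features and the nearest-centroid classifier below are illustrative stand-ins, not the lab's actual tools:

```python
# Hedged sketch of the on-line stage: classify the emotional state from a
# sliding window of a biological signal. Feature choice and centroids are
# hypothetical placeholders for a model trained off-line (earlier phase).
import statistics

def extract_features(window):
    # Simple time-domain features over one window of samples.
    return (statistics.mean(window), statistics.pstdev(window))

def nearest_centroid(features, centroids):
    """Assign the window to the emotion whose centroid (learned during the
    off-line classification phase) is closest in feature space."""
    def dist(label):
        c = centroids[label]
        return sum((f - g) ** 2 for f, g in zip(features, c))
    return min(centroids, key=dist)
```

In a closed-loop setup, the label returned for each window would drive the content selection of the multimedia application.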
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective robotics&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop a rehabilitation robotic game able to capture your emotional state (interest, excitement, anger, joy, stress) while you interact with the robot. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to adapt the therapy (executed by the game) according to the patient's needs. We believe the quality of the therapy is related to the subject's emotional state. The long-term goal is to keep the user in a specific emotional state in order to maximize the therapy's efficacy. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the robotic game on the available robot.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the interaction with the robot.&lt;br /&gt;
* Off-line classification of the data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the therapy will be adapted to the patient's needs.  &lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with biological-data acquisition tools, robots and videogame design. &lt;br /&gt;
&lt;br /&gt;
The project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=SimoAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Driving companions&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an application able to capture your emotional state (stress, attention level, ...) while driving standard cars. The application will measure the driver's stress level by analyzing his biological signals, which mirror his physiological state, and could be used to give feedback to the driver in dangerous situations.&lt;br /&gt;
Project phases: &lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors while driving in different conditions (city, highway, country roads, ...)&lt;br /&gt;
* Off-line classification of the data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the car will give audio/visual feedback to let the driver know his physiological state&lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with biological-data acquisition tools, robots and videogame design. &lt;br /&gt;
&lt;br /&gt;
The project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=CarAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Emotion from interaction&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to detect emotional states, such as stress or boredom, from the interaction with the computer via mouse and keyboard ([http://airwiki.elet.polimi.it/mediawiki/index.php/Emotion_from_Interaction Emotion from Interaction]). A library getting data from these devices has already been developed. Data have to be acquired in different situations and analyzed with neural networks or other classification tools already implemented.&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Agents, Multiagent Systems, Agencies ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== BioSignal Analysis ====&lt;br /&gt;
&lt;br /&gt;
===== Brain-Computer Interface =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Development of an existing genetic algorithm for ERP-based BCIs&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=Different [http://en.wikipedia.org/wiki/Event-related_potential event-related potentials] (ERPs) are used in [[Brain-Computer Interface|BCIs]], e.g., [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300] and error potentials.&lt;br /&gt;
A [http://en.wikipedia.org/wiki/Genetic_algorithm genetic algorithm] (GA) for ERP feature extraction has been developed at the Airlab.  The GA has been proven to work, but there are different ways that can be explored to further develop this algorithm and expand its field of application.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, Matlab&lt;br /&gt;
:Good programming skills required&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
:B. Dal Seno, M. Matteucci, L. Mainardi. ''A Genetic Algorithm for Automatic Feature Extraction in P300 Detection'' [http://ieeexplore.ieee.org/search/srchabstract.jsp?arnumber=4634243&amp;amp;isnumber=4633757&amp;amp;punumber=4625775]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=Ga-scheme.png}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Driving an autonomous wheelchair with a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=This project pulls together different Airlab projects with the aim of driving an autonomous wheelchair ([[LURCH - The autonomous wheelchair|LURCH]]) with a [[Brain-Computer Interface|BCI]], through the development of key software modules.  Depending on the effort the student is willing to put into it, the project can grow into a full experimental thesis.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, C, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
:Linux&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: R. Blatt et al. ''Brain Control of a Smart Wheelchair'' [http://www.booksonline.iospress.com/Content/View.aspx?piid=9401]&lt;br /&gt;
|start=November 2008&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=LURCH_wheelchair.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Online automatic tuning of the number of repetitions in a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=In a [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300]-based [[Brain-Computer_Interface|BCI]], (visual) stimuli are presented to the user, and the user's intention is recognized when a P300 potential is detected in response to the desired stimulus.  In order to improve accuracy, many stimulation rounds are usually performed before making a decision.  The exact number of repetitions depends on the user and on the goodness of the classifier, but it is usually fixed a priori.  The aim of this project is to adapt the number of repetitions to changing conditions, so as to achieve the maximum accuracy in the minimum time.&lt;br /&gt;
Depending on the effort the student is willing to put into it, the project can grow into a full experimental thesis.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: E. Donchin, K.M. Spencer, R. Wijesinghe. ''The Mental Prosthesis: Assessing the Speed of a P300-Based Brain-Computer Interface'' [http://www.cs.cmu.edu/~tanja/BCI/P300Speed_2000.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=B_p300_speller.jpg}}&lt;br /&gt;
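The adaptive-stopping idea described in this proposal can be sketched as follows. The scoring model (one classifier score per stimulus per round) and the confidence threshold are illustrative assumptions, not the actual BCI2000 setup:

```python
# Hedged sketch: stop stimulation rounds as soon as the leading stimulus is
# confident enough, instead of always running a fixed number of repetitions.
import math
import operator

def softmax_confidence(scores):
    # Confidence of the leading stimulus under a softmax of summed scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    return max(exps) / sum(exps)

def adaptive_repetitions(get_round_scores, n_stimuli, threshold=0.95, max_reps=10):
    """Accumulate per-stimulus classifier scores round by round; stop once
    the confidence of the leading stimulus reaches the threshold, rather
    than after a number of repetitions fixed a priori."""
    summed = [0.0] * n_stimuli
    for rep in range(1, max_reps + 1):
        round_scores = get_round_scores()   # one score per stimulus this round
        summed = [a + b for a, b in zip(summed, round_scores)]
        if operator.ge(softmax_confidence(summed), threshold):
            break
    best = summed.index(max(summed))
    return best, rep, softmax_confidence(summed)
```

With a good classifier the loop stops after a few rounds; with a noisy one it falls back to the maximum, trading time for accuracy automatically.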
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reproduction of an algorithm for the recognition of error potentials&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=Error potentials (ErrPs) are [http://en.wikipedia.org/wiki/Event-related_potential event-related potentials] present in the EEG (electroencephalogram) when a subject makes a mistake or when the machine a subject is interacting with behaves in an unexpected way.  They could be used in the [[Brain-Computer Interface|BCI]] field to improve the performance of a BCI by automatically detecting classification errors.&lt;br /&gt;
The project aims at reproducing algorithms for ErrP detection from the literature.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
:P.W. Ferrez, J. Millán. ''You Are Wrong! Automatic Detection of Interaction Errors from Brain Waves'' [ftp://ftp.idiap.ch/pub/reports/2005/ferrez_2005_ijcai.pdf]&lt;br /&gt;
:G. Schalk et al. ''EEG-based communication: presence of an error potential'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;issn=13882457&amp;amp;volume=111&amp;amp;issue=12&amp;amp;firstpage=2138&amp;amp;form=html]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-15&lt;br /&gt;
|image=Bci_arch.png}}&lt;br /&gt;
&lt;br /&gt;
==== Computer Vision and Image Analysis ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Environment Monitoring&lt;br /&gt;
|tutor=Matteo Matteucci (matteucci-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this project is to develop a video-surveillance system to track vehicles or people in 3D. &lt;br /&gt;
The idea is to use one or more calibrated cameras to estimate the positions and trajectories of the moving objects in the scene. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library&lt;br /&gt;
* Linux OS&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
* Probabilistic robotics/IMAD&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis extending the algorithm for a generic outdoor environment.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=10-15&lt;br /&gt;
|image=Danch4.png &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Visual Merchandising&lt;br /&gt;
|tutor=Matteo Matteucci (matteucci-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this project is to develop algorithms to count the number of products on the shelves of a market.&lt;br /&gt;
The idea is to use a calibrated camera to recognize the shelves, estimate the scale and improve the image quality. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
|start= As soon as possible&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=VisualM.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== E-Science ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Machine Learning ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reinforcement Learning Competition&lt;br /&gt;
|tutor=Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=This project has the goal of participating in (and possibly winning ;)) the 2009 Reinforcement Learning competition. To get an idea of what participating in such a competition means, you can have a look at the website of the [http://rl-competition.org/content/view/51/79/ 2008 RL competition].&lt;br /&gt;
The problems that will be proposed are still unknown. As soon as the domains are published, the work will start by analyzing their main characteristics; then we will identify which RL algorithms are best suited for solving such problems. After an implementation phase, the project will require a long experimental period to tune the parameters of the learning algorithms in order to improve the performance as much as possible.&lt;br /&gt;
|start=January, 2009&lt;br /&gt;
|number=2-4&lt;br /&gt;
|cfu=10-20&lt;br /&gt;
|image=keepaway.gif}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Learning API for TORCS&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open-source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, in which competitors were asked to develop a controller using their preferred machine learning techniques. The goal of this project is to extend the existing C++ API (available [http://cig.dei.polimi.it/ here]) to simplify the development of controllers using a learning framework.&lt;br /&gt;
Such an extension can be partially developed by porting an existing Java API for TORCS that already provides many functionalities for machine learning approaches.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=TORCS.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= EyeBot&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it), Alessandro Giusti (giusti-AT-elet-DOT-polimi-DOT-it), and Pierluigi Taddei (taddei-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open-source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, in which competitors were asked to develop a controller using their preferred machine learning techniques. So far, the controllers developed for TORCS have used as input only information extracted directly from the state of the game. The goal of this project is to extend the existing controller API (see [http://cig.dei.polimi.it/ here]) to use visual information (e.g., the screenshots of the game) as input to the controllers. A successful project will include both the development of the API and some basic image preprocessing to extract information from the images.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 20&lt;br /&gt;
|image=TORCS2.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= SmarTrack&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The generation of customized game content for each player is an attractive direction for improving the game experience in next-generation computer games. In this scenario, machine learning could play an important role in automatically providing such customized game content.&lt;br /&gt;
The goal of this project is to apply machine learning techniques to the generation of customized tracks in&lt;br /&gt;
[http://torcs.sourceforge.net/ TORCS], a state-of-the-art open-source racing simulator. The project includes different activities: the automatic generation of tracks, the selection of relevant features to characterize a track, and the analysis of an interest measure.  &lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 20&lt;br /&gt;
|image=TORCS3.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= TORCS competition&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open-source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, in which competitors were asked to develop a controller using their preferred machine learning techniques.&lt;br /&gt;
The goal of this project is to apply any machine learning technique to develop a successful controller following the competition rules (available [http://cig.dei.polimi.it/?page_id=67 here]).&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 &lt;br /&gt;
|cfu=5&lt;br /&gt;
|image=TORCS.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Ontologies and Semantic Web ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== Philosophy of Artificial Intelligence ====--&amp;gt;&lt;br /&gt;
==== Robotics ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robot games&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive game with robots using commercial devices such as the Wiimote (see the [http://airwiki.elet.polimi.it/mediawiki/index.php/Robogames Robogames page]).  &lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Design and implementation of the game on one of the available robots&lt;br /&gt;
* Design of the game and a new suitable robot&lt;br /&gt;
* Implementation/setting of a suitable robot&lt;br /&gt;
* Evaluation of the game with users (in collaboration with [http://www.elet.polimi.it/people/garzotto Franca Garzotto])&lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with real mobile robots and real interaction devices.&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by producing a new game and robot.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-12.5&lt;br /&gt;
|image=Robowii_robot.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robocup: soccer robots&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it), Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to finalize the team of robots that will participate in the RoboCup world championship in Graz next summer (see the [http://www.robocup.org RoboCup page] and the [http://robocup.elet.polimi.it MRT Team page]).  &lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Implementation of the mechanical and electronic parts of the robots for ball management and kicking&lt;br /&gt;
* Design of robot behaviors (fuzzy systems)&lt;br /&gt;
* Coordination of robots&lt;br /&gt;
* New sensors&lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with real mobile robots. Participation in the championships is a unique experience (2,000 people, with 800 robots playing all sorts of games...)&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by facing different problems in depth.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-12.5&lt;br /&gt;
|image=RIeRO.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Calibration of IMU-camera system&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:DavideMigliore|Davide Migliore]]&lt;br /&gt;
|description=This work addresses the problem of calibrating a system composed of an Xsens &lt;br /&gt;
Inertial Measurement Unit and a Fire-i camera. The project will focus on &lt;br /&gt;
estimating both the unknown rotation between the two devices and the &lt;br /&gt;
extrinsic/intrinsic parameters of the camera. This algorithm makes it possible to use the &lt;br /&gt;
system for SLAM or other robotics applications, such as a wearable device for autonomous &lt;br /&gt;
navigation or augmented reality. &lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab/C++&lt;br /&gt;
&lt;br /&gt;
;Links&lt;br /&gt;
:Matlab Toolbox for mutual calibration [http://www.deec.uc.pt/~jlobo/InerVis_WebIndex/InerVis_Toolbox.html]&lt;br /&gt;
:List of publications [http://www.deec.uc.pt/~jlobo/InerVis_WebIndex/InerVis_Pubs.php]&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=Imu_cam_big_sphere.gif}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=MonoSLAM system implementation&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:DavideMigliore|Davide Migliore]]&lt;br /&gt;
|description=The aim of this proposal is to investigate the different monocular-camera SLAM solutions proposed in the literature.&lt;br /&gt;
After an in-depth bibliographic research, the work will focus on implementing one of these algorithms within an existing framework and, only for the thesis option, on investigating possible improvements. &lt;br /&gt;
&lt;br /&gt;
The algorithms of interest are based on [http://www-personal.acfr.usyd.edu.au/tbailey/software/slam_simulations.htm]:&lt;br /&gt;
*Extended Kalman Filter [http://www.doc.ic.ac.uk/~ajd/publications.html]&lt;br /&gt;
*Unscented Kalman Filter [http://www.cs.unc.edu/~welch/kalman/media/pdf/Julier1997_SPIE_KF.pdf]&lt;br /&gt;
*FastSLAM [http://robots.stanford.edu/papers.html]&lt;br /&gt;
*GraphSLAM [http://mi.eng.cam.ac.uk/~ee231/]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab/C++&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=KC_jc_third.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Soft Computing ====--&amp;gt;&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=File:KC_jc_third.jpg&amp;diff=4481</id>
		<title>File:KC jc third.jpg</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=File:KC_jc_third.jpg&amp;diff=4481"/>
				<updated>2008-10-17T09:37:29Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=First_Level_Theses&amp;diff=4480</id>
		<title>First Level Theses</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=First_Level_Theses&amp;diff=4480"/>
				<updated>2008-10-17T09:33:02Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: /* Robotics */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you can find proposals for first-level theses (7.5 CFU for each student)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Agents, Multiagent Systems, Agencies ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== BioSignal Analysis ====&lt;br /&gt;
===== Brain-Computer Interface =====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Real-time removal of ocular artifact from EEG&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=In a [[Brain-Computer Interface|BCI]] based on electroencephalogram (EEG), one of the most important sources of noise is related to ocular movements.  Algorithms have been devised to cancel the effect of such artifacts.  The project consists in the real-time implementation of an existing algorithm (or a newly developed one) in order to improve the performance of a BCI.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, [http://www.bci2000.org/ BCI2000], C++&lt;br /&gt;
:EEG system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: J.R. Wolpaw et al. ''Brain-computer interfaces for communication and control'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;journal=13882457&amp;amp;issue=v113i0006&amp;amp;article=767_bifcac&amp;amp;form=pdf&amp;amp;file=file.pdf]&lt;br /&gt;
: R.J. Croff, R.J. Barry. ''Removal of ocular artifact from the EEG: a review'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;issn=09877053&amp;amp;volume=30&amp;amp;issue=1&amp;amp;firstpage=5&amp;amp;form=html]&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=10-20&lt;br /&gt;
|image=B_bci.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Development of an existing genetic algorithm for ERP-based BCIs&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=Different [http://en.wikipedia.org/wiki/Event-related_potential event-related potentials] (ERPs) are used in [[Brain-Computer Interface|BCIs]], e.g., [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300] and error potentials.&lt;br /&gt;
A [http://en.wikipedia.org/wiki/Genetic_algorithm genetic algorithm] (GA) for ERP feature extraction has been developed at the Airlab.  The GA has been proven to work, but there are different ways that can be explored to further develop this algorithm and expand its field of application.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, Matlab, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
:Good programming skills required&lt;br /&gt;
:EEG system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
:B. Dal Seno, M. Matteucci, L. Mainardi. ''A Genetic Algorithm for Automatic Feature Extraction in P300 Detection'' [http://ieeexplore.ieee.org/search/srchabstract.jsp?arnumber=4634243&amp;amp;isnumber=4633757&amp;amp;punumber=4625775]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=Ga-scheme.png}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Driving an autonomous wheelchair with a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=This project pulls together different Airlab projects with the aim of driving an autonomous wheelchair ([[LURCH - The autonomous wheelchair|LURCH]]) with a [[Brain-Computer Interface|BCI]], through the development of key software modules.  The work will be validated with live experiments.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, C, [http://www.bci2000.org/ BCI2000], Matlab&lt;br /&gt;
:Linux&lt;br /&gt;
:EEG system&lt;br /&gt;
:Lurch wheelchair&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: R. Blatt et al. ''Brain Control of a Smart Wheelchair'' [http://www.booksonline.iospress.com/Content/View.aspx?piid=9401]&lt;br /&gt;
|start=November 2008&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=LURCH_wheelchair.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Online automatic tuning of the number of repetitions in a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=In a [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300]-based [[Brain-Computer_Interface|BCI]], (visual) stimuli are presented to the user, and the user's intention is recognized when a P300 potential is detected in response to the desired stimulus.  In order to improve accuracy, many stimulation rounds are usually performed before making a decision.  The exact number of repetitions depends on the user and on the goodness of the classifier, but it is usually fixed a priori.  The aim of this project is to adapt the number of repetitions to changing conditions, so as to achieve the maximum accuracy in the minimum time.&lt;br /&gt;
The work will be validated with live experiments.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, [http://www.bci2000.org/ BCI2000], Matlab&lt;br /&gt;
:EEG system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: E. Donchin, K.M. Spencer, R. Wijesinghe. ''The Mental Prosthesis: Assessing the Speed of a P300-Based Brain-Computer Interface'' [http://www.cs.cmu.edu/~tanja/BCI/P300Speed_2000.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=B_p300_speller.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reproduction of an algorithm for the recognition of error potentials&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=Error potentials (ErrPs) are [http://en.wikipedia.org/wiki/Event-related_potential event-related potentials] present in the EEG (electroencephalogram) when a subject makes a mistake or when the machine a subject is interacting with behaves in an unexpected way.  They could be used in the [[Brain-Computer Interface|BCI]] field to improve the performance of a BCI by automatically detecting classification errors.&lt;br /&gt;
The project aims at reproducing algorithms for ErrP detection from the literature.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
:EEG system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
:P.W. Ferrez, J. Millán. ''You Are Wrong! Automatic Detection of Interaction Errors from Brain Waves'' [ftp://ftp.idiap.ch/pub/reports/2005/ferrez_2005_ijcai.pdf]&lt;br /&gt;
:G. Schalk et al. ''EEG-based communication: presence of an error potential'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;issn=13882457&amp;amp;volume=111&amp;amp;issue=12&amp;amp;firstpage=2138&amp;amp;form=html]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-15&lt;br /&gt;
|image=Bci_arch.png}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Computer Vision and Image Analysis ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== E-Science ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Machine Learning ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Learning API for TORCS&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open-source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, in which competitors were asked to develop a controller using their preferred machine learning techniques. The goal of this project is to extend the existing C++ API (available [http://cig.dei.polimi.it/ here]) to simplify the development of controllers using a learning framework.&lt;br /&gt;
Such an extension can be partially developed by porting an existing Java API for TORCS that already provides many functionalities for machine learning approaches.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=TORCS.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= EyeBot&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it), Alessandro Giusti (giusti-AT-elet-DOT-polimi-DOT-it), and Pierluigi Taddei (taddei-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open-source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, in which competitors were asked to develop a controller using their preferred machine learning techniques. So far, the controllers developed for TORCS have used as input only information extracted directly from the state of the game. The goal of this project is to extend the existing controller API (see [http://cig.dei.polimi.it/ here]) to use visual information (e.g., the screenshots of the game) as input to the controllers. A successful project will include both the development of the API and some basic image preprocessing to extract information from the images.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=TORCS2.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= SmarTrack&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The generation of customized game content for each player is an attractive direction to improve the game experience in next-generation computer games. In this scenario, Machine Learning could play an important role in automatically providing such customized game content.&lt;br /&gt;
The goal of this project is to apply machine learning techniques for the generation of customized tracks in&lt;br /&gt;
[http://torcs.sourceforge.net/ TORCS], a state-of-the-art open source racing simulator. The project includes different activities: the automatic generation of tracks, the selection of relevant features to characterize a track, and the analysis of an interest measure.  &lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=TORCS3.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= TORCS competition&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, where competitors were asked to develop a controller using their preferred machine learning techniques.&lt;br /&gt;
The goal of this project is to apply any machine learning technique to develop a successful controller following the competition rules (available [http://cig.dei.polimi.it/?page_id=67 here]).&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2&lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=TORCS.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Ontologies and Semantic Web ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== Philosophy of Artificial Intelligence ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Robotics ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Simulation of 6-DOF Robot Manipulator&lt;br /&gt;
|tutor=Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this project is to develop a simulator for a 6-DOF robot manipulator, using the [http://www.ode.org/ ODE] (Open Dynamics Engine) library for simulating the rigid body dynamics. The project involves the following phases:&lt;br /&gt;
* Building the physical model of the manipulator&lt;br /&gt;
* Implementing the forward and inverse kinematic routines &lt;br /&gt;
* Implementing the trajectory planning routines&lt;br /&gt;
* Implementing the control modules&lt;br /&gt;
* Implementing an interface to control the robot movements&lt;br /&gt;
&lt;br /&gt;
This project allows students to put into practice what has been explained during the first part of the Robotics course.&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis, by using the simulated manipulator to perform some learning experiments.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=10-15&lt;br /&gt;
|image=puma6dof1.jpg}}&lt;br /&gt;
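The forward/inverse kinematics phase can be prototyped on a simpler chain first. Below is a minimal C++ sketch for a planar two-link arm; the link lengths, function names, and the elbow-down solution branch are illustrative choices, and the 6-DOF manipulator would chain six joint transforms instead of two:

```cpp
#include <cassert>
#include <cmath>

// End-effector position of a planar two-link arm.
struct Point { double x, y; };

// Forward kinematics: joint angles (radians) -> end-effector position.
Point forwardKinematics(double theta1, double theta2,
                        double l1 = 1.0, double l2 = 1.0) {
    return { l1 * std::cos(theta1) + l2 * std::cos(theta1 + theta2),
             l1 * std::sin(theta1) + l2 * std::sin(theta1 + theta2) };
}

// Inverse kinematics via the law of cosines (elbow-down branch).
// Returns false when the target is out of reach.
bool inverseKinematics(Point p, double& theta1, double& theta2,
                       double l1 = 1.0, double l2 = 1.0) {
    double r2 = p.x * p.x + p.y * p.y;
    double c2 = (r2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2);
    if (c2 < -1.0 || c2 > 1.0) return false;  // unreachable target
    theta2 = std::acos(c2);
    theta1 = std::atan2(p.y, p.x)
           - std::atan2(l2 * std::sin(theta2), l1 + l2 * std::cos(theta2));
    return true;
}
```

Closed-form solutions like this exist only for simple chains; a general 6-DOF arm is usually solved numerically or by exploiting the specific manipulator geometry.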
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Calibration of IMU-camera system&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:DavideMigliore|Davide Migliore]]&lt;br /&gt;
|description=This work addresses the calibration of a system composed of an Xsens &lt;br /&gt;
Inertial Measurement Unit and a Fire-i camera. The project will focus on &lt;br /&gt;
estimating both the unknown rotation between the two devices and the &lt;br /&gt;
extrinsic/intrinsic parameters of the camera. This calibration allows the &lt;br /&gt;
system to be used for SLAM or robotics applications, such as a wearable &lt;br /&gt;
device for autonomous navigation or augmented reality. &lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab/C++&lt;br /&gt;
&lt;br /&gt;
;Links&lt;br /&gt;
:Matlab Toolbox for mutual calibration [http://www.deec.uc.pt/~jlobo/InerVis_WebIndex/InerVis_Toolbox.html]&lt;br /&gt;
:List of publications [http://www.deec.uc.pt/~jlobo/InerVis_WebIndex/InerVis_Pubs.php]&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=Imu_cam_big_sphere.gif}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robot games&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive game with robots using commercial devices such as the Wii Remote (see the [http://airwiki.elet.polimi.it/mediawiki/index.php/Robogames Robogames page]).  &lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Design and implementation of the game on one of the available robots and extension of the robot functionalities&lt;br /&gt;
* Design and implementation of the game and a new suitable robot&lt;br /&gt;
* Evaluation of the game with users (in collaboration with [http://www.elet.polimi.it/people/garzotto Franca Garzotto])&lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with real mobile robots and real interaction devices.&lt;br /&gt;
&lt;br /&gt;
Parts of these projects can be considered as course projects; they can also be extended to fully cover course-project requirements.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=7.5-12.5&lt;br /&gt;
|image=Robowii_robot.jpg}}&lt;br /&gt;
&lt;br /&gt;
==== Affective Computing ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective VideoGames&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive video game (car game, shoot 'em up, strategy game, ...) able to adapt its behaviour in order to maximize your enjoyment. The game will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system will be able to adjust some parameters (e.g., the difficulty of car game circuits, the opponents' strength, ...) in order to keep your engagement constant: &amp;quot;In your flow zone!&amp;quot;. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the game (it is possible to start from an available open source game)&lt;br /&gt;
* Design of experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition by using biological sensors during the playing experience.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition&lt;br /&gt;
* Closed loop control: the game reacts to the user's emotional state by changing its behaviour.  &lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with biological-data acquisition tools and videogame design. &lt;br /&gt;
&lt;br /&gt;
The project consists in the realization of one or more phases, depending on the difficulty/cfu to be achieved and on the competences of the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
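The closed-loop phase can be pictured as a tiny controller. A minimal C++ sketch, assuming the biosignal classifier already yields a normalized arousal estimate in [0, 1] (the setpoint, gain, and function name are invented for illustration, not part of any existing tool):

```cpp
#include <algorithm>
#include <cassert>

// Proportional adaptation of game difficulty toward a "flow" setpoint:
// too aroused (stressed) -> ease off; too calm (bored) -> push harder.
double adjustDifficulty(double difficulty, double arousal,
                        double setpoint = 0.6, double gain = 0.5) {
    difficulty += gain * (setpoint - arousal);
    return std::clamp(difficulty, 0.0, 1.0);  // keep difficulty in [0, 1]
}
```

A real system would smooth the arousal estimate over time before feeding it to such a rule.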
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective recognition in multimedia contexts&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive multimedia application (advertisement, e-learning, recommendation system) able to capture your emotional state (interest, excitement, anger, joy) while watching images, hearing sounds, etc. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to give feedback on the quality of multimedia content (e.g., goodness of the advertisement, enjoyment of the movie, ...). &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the multimedia application.&lt;br /&gt;
* Design of experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition by using biological sensors during the multimedia experience.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition&lt;br /&gt;
* Closed loop control: the multimedia application will provide contents according to your enjoyment.  &lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with biological-data acquisition tools and multimedia application design. &lt;br /&gt;
&lt;br /&gt;
The project consists in the realization of one or more phases, depending on the difficulty/cfu to be achieved and on the competences of the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=MultimediaAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective robotics&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop a rehabilitation robotic game able to capture your emotional state (interest, excitement, anger, joy, stress) while interacting with the robot. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to adapt the therapy (executed by the game) according to the patient's needs. We believe the quality of the therapy is related to the subject's emotional state. The long-term goal is to keep the user in a specific emotional state in order to maximize the therapy's efficacy. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the robotic game on the available robot.&lt;br /&gt;
* Design of experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition by using biological sensors during the interaction with the robot.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition&lt;br /&gt;
* Closed loop control: the therapy will be adapted to the patient's needs.  &lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with biological-data acquisition tools, robots and videogame design. &lt;br /&gt;
&lt;br /&gt;
The project consists in the realization of one or more phases, depending on the difficulty/cfu to be achieved and on the competences of the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=SimoAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Driving companions&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an application able to capture your emotional state (stress, attention level, ...) while driving standard cars. The application will measure the driver's stress level by analyzing his biological signals, which mirror his physiological state, and could be used to give feedback to the driver in dangerous situations.&lt;br /&gt;
Project phases: &lt;br /&gt;
* Design of experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition by using biological sensors while driving in different conditions (city, highway, country, ...)&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition&lt;br /&gt;
* Closed loop control: the car will give audio/visual feedback to the user, letting him know his physiological state.&lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with biological-data acquisition tools, robots and videogame design. &lt;br /&gt;
&lt;br /&gt;
The project consists in the realization of one or more phases, depending on the difficulty/cfu to be achieved and on the competences of the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=CarAffective.jpg}}&lt;br /&gt;
&amp;lt;!--==== Soft Computing ====--&amp;gt;&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=First_Level_Course_Projects&amp;diff=4479</id>
		<title>First Level Course Projects</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=First_Level_Course_Projects&amp;diff=4479"/>
				<updated>2008-10-17T09:32:23Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: /* Robotics */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you can find a list of project proposals for the courses of &amp;quot;Progetto di Ingegneria Informatica&amp;quot; and &amp;quot;Progetto di Robotica&amp;quot; (5 CFU for each student)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Agents, Multiagent Systems, Agencies ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== BioSignal Analysis ====&lt;br /&gt;
&lt;br /&gt;
===== Brain-Computer Interface =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Development of an existing genetic algorithm for ERP-based BCIs&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=Different [http://en.wikipedia.org/wiki/Event-related_potential event-related potentials] (ERPs) are used in [[Brain-Computer Interface|BCIs]], e.g., [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300] and error potentials.&lt;br /&gt;
A [http://en.wikipedia.org/wiki/Genetic_algorithm genetic algorithm] (GA) for ERP feature extraction has been developed at the Airlab.  The GA has been proved to work, but there are different ways that can be explored to further develop this algorithm and expand its application field.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++&lt;br /&gt;
:Good programming skills required&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
:B. Dal Seno, M. Matteucci, L. Mainardi. ''A Genetic Algorithm for Automatic Feature Extraction in P300 Detection'' [http://ieeexplore.ieee.org/search/srchabstract.jsp?arnumber=4634243&amp;amp;isnumber=4633757&amp;amp;punumber=4625775]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=Ga-scheme.png}}&lt;br /&gt;
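To illustrate the flavor of GA-based feature extraction, here is a toy C++ genetic algorithm that evolves a bit mask selecting a feature subset. Everything in it — the population size, the single-bit mutation, and especially the fitness function rewarding closeness to a known-good mask — is invented for illustration and is not the Airlab implementation:

```cpp
#include <algorithm>
#include <bitset>
#include <cassert>
#include <random>
#include <vector>

constexpr int kFeatures = 16;
using Mask = std::bitset<kFeatures>;

// Hypothetical fitness: number of bits agreeing with a known-good subset
// (a real GA would instead score classification accuracy on EEG data).
double fitness(const Mask& m, const Mask& target) {
    return kFeatures - double((m ^ target).count());
}

// Elitist truncation GA: keep the better half, refill with mutated copies.
Mask evolve(const Mask& target, int popSize = 30, int generations = 60,
            unsigned seed = 42) {
    std::mt19937 rng(seed);
    std::uniform_int_distribution<int> bit(0, kFeatures - 1);
    std::vector<Mask> pop(popSize);
    for (auto& m : pop)                       // random initial population
        for (int i = 0; i < kFeatures; ++i) m[i] = rng() & 1;
    for (int g = 0; g < generations; ++g) {
        std::sort(pop.begin(), pop.end(), [&](const Mask& a, const Mask& b) {
            return fitness(a, target) > fitness(b, target);
        });
        for (int i = popSize / 2; i < popSize; ++i) {
            pop[i] = pop[i - popSize / 2];    // copy a surviving parent
            pop[i].flip(bit(rng));            // point mutation
        }
    }
    return pop.front();                       // best individual kept by elitism
}
```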
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Driving an autonomous wheelchair with a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=This project pulls together different Airlab projects with the aim to drive an autonomous wheelchair ([[LURCH - The autonomous wheelchair|LURCH]]) with a [[Brain-Computer Interface|BCI]], through the development of key software modules.  Depending on the effort the student is willing to put into it, the project can grow to a full experimental thesis.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, C, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
:Linux&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: R. Blatt et al. ''Brain Control of a Smart Wheelchair'' [http://www.booksonline.iospress.com/Content/View.aspx?piid=9401]&lt;br /&gt;
|start=November 2008&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=LURCH_wheelchair.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Online automatic tuning of the number of repetitions in a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=In a [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300]-based [[Brain-Computer_Interface|BCI]], (visual) stimuli are presented to the user, and the intention of the user is recognized when a P300 potential is detected in response to the desired stimulus.  In order to improve accuracy, many stimulation rounds are usually performed before making a decision.  The exact number of repetitions depends on the user and on the goodness of the classifier, but it is usually fixed a priori.  The aim of this project is to adapt the number of repetitions to changing conditions, so as to achieve the maximum accuracy in the minimum time.&lt;br /&gt;
Depending on the effort the student is willing to put into it, the project can grow to a full experimental thesis.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: E. Donchin, K.M. Spencer, R. Wijesinghe. ''The Mental Prosthesis: Assessing the Speed of a P300-Based Brain-Computer Interface'' [http://www.cs.cmu.edu/~tanja/BCI/P300Speed_2000.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=B_p300_speller.jpg}}&lt;br /&gt;
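One simple way to adapt the number of repetitions is early stopping: accumulate per-stimulus classifier scores across rounds and decide as soon as the leader's margin over the runner-up is large enough. A C++ sketch under assumed inputs (the score matrix and margin threshold are illustrative; this is not BCI2000 code):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <functional>
#include <vector>

struct Decision { int target; std::size_t roundsUsed; };

// rounds[r][i] = classifier score of stimulus i in stimulation round r.
// Stops early when the best accumulated score leads the second best by
// `margin`; otherwise decides after the last available round.
Decision decideWithEarlyStop(const std::vector<std::vector<double>>& rounds,
                             double margin) {
    std::size_t n = rounds.front().size();
    std::vector<double> sum(n, 0.0);
    for (std::size_t r = 0; r < rounds.size(); ++r) {
        for (std::size_t i = 0; i < n; ++i) sum[i] += rounds[r][i];
        std::vector<double> s = sum;          // find best and second best
        std::sort(s.begin(), s.end(), std::greater<double>());
        if (s[0] - s[1] >= margin || r + 1 == rounds.size()) {
            int best = int(std::max_element(sum.begin(), sum.end())
                           - sum.begin());
            return { best, r + 1 };
        }
    }
    return { -1, 0 };  // unreachable: the loop always returns on the last round
}
```

Averaging over rounds before thresholding is what makes the trade-off explicit: more rounds raise accuracy but cost time, and the margin test spends rounds only when the evidence is still ambiguous.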
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reproduction of an algorithm for the recognition of error potentials&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=Error potentials (ErrPs) are [http://en.wikipedia.org/wiki/Event-related_potential event-related potentials] present in the EEG (electroencephalogram) when a subject makes a mistake or when the machine the subject is interacting with behaves in an unexpected way.  They could be used in the [[Brain-Computer Interface|BCI]] field to improve the performance of a BCI by automatically detecting classification errors.&lt;br /&gt;
The project aims at reproducing algorithms for ErrP detection from the literature.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
:P.W. Ferrez, J. Millán. ''You Are Wrong! Automatic Detection of Interaction Errors from Brain Waves'' [ftp://ftp.idiap.ch/pub/reports/2005/ferrez_2005_ijcai.pdf]&lt;br /&gt;
:G. Schalk et al. ''EEG-based communication: presence of an error potential'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;issn=13882457&amp;amp;volume=111&amp;amp;issue=12&amp;amp;firstpage=2138&amp;amp;form=html]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-15&lt;br /&gt;
|image=Bci_arch.png}}&lt;br /&gt;
&lt;br /&gt;
==== Affective Computing ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective VideoGames&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive video game (car game, shoot 'em up, strategy game, ...) able to adapt its behaviour in order to maximize your enjoyment. The game will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system will be able to adjust some parameters (e.g., the difficulty of car game circuits, the opponents' strength, ...) in order to keep your engagement constant: &amp;quot;In your flow zone!&amp;quot;. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the game (it is possible to start from an available open source game)&lt;br /&gt;
* Design of experimental protocol used to stimulate particular emotions&lt;br /&gt;
* Data acquisition by using biological sensors during the playing experience&lt;br /&gt;
* Off-line classification of data with available tools&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition &lt;br /&gt;
* Closed loop control: the game reacts to the user's emotional state by changing its behaviour.  &lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with biological-data acquisition tools and videogame design. &lt;br /&gt;
&lt;br /&gt;
Each project consists in the realization of one or more phases depending on the difficulty/cfu to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective recognition in multimedia contexts&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive multimedia application (advertisement, e-learning, recommendation system) able to capture your emotional state (interest, excitement, anger, joy) while watching images, hearing sounds, etc. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to give feedback on the quality of multimedia content (e.g., goodness of the advertisement, enjoyment of the movie, ...). &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the multimedia application.&lt;br /&gt;
* Design of experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition by using biological sensors during the multimedia experience.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition&lt;br /&gt;
* Closed loop control: the multimedia application will provide contents according to your enjoyment.  &lt;br /&gt;
&lt;br /&gt;
These projects allow to experiment with biological-data acquisition tools and multimedia application design. &lt;br /&gt;
&lt;br /&gt;
Each project consists in the realization of one or more phases, depending on the difficulty/cfu to be achieved and on the competences of the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=MultimediaAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective robotics&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop a rehabilitation robotic game able to capture your emotional state (interest, excitement, anger, joy, stress) while interacting with the robot. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to adapt the therapy (executed by the game) according to the patient's needs. We believe the quality of the therapy is related to the subject's emotional state. The long-term goal is to keep the user in a specific emotional state in order to maximize the therapy's efficacy. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the robotic game on the available robot.&lt;br /&gt;
* Design of experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition by using biological sensors during the interaction with the robot.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition&lt;br /&gt;
* Closed loop control: the therapy will be adapted to the patient's needs.  &lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with biological-data acquisition tools, robots and videogame design. &lt;br /&gt;
&lt;br /&gt;
Each project consists in the realization of one or more phases, depending on the difficulty/cfu to be achieved and on the competences of the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=SimoAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Driving companions&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an application able to capture your emotional state (stress, attention level, ...) while driving standard cars. The application will measure the driver's stress level by analyzing his biological signals, which mirror his physiological state, and could be used to give feedback to the driver in dangerous situations.&lt;br /&gt;
Project phases: &lt;br /&gt;
* Design of experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition by using biological sensors while driving in different conditions (city, highway, country ..)&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition&lt;br /&gt;
* Closed loop control: the car will give audio/visual feedback to the user, letting him know his physiological state.&lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with biological-data acquisition tools, robots and videogame design. &lt;br /&gt;
&lt;br /&gt;
Each project consists in the realization of one or more phases, depending on the difficulty/cfu to be achieved and on the competences of the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=CarAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Emotion from interaction&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to detect emotional states, such as stress or boredom, from the interaction with the computer via mouse and keyboard ([http://airwiki.elet.polimi.it/mediawiki/index.php/Emotion_from_Interaction Emotion from Interaction]). A library getting data from these devices has already been developed. Data have to be acquired in different situations and analyzed with neural networks or other classification tools already implemented.&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
&lt;br /&gt;
==== Computer Vision and Image Analysis ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Video surveillance system for indoor Environment&lt;br /&gt;
|tutor=Matteo Matteucci (matteucci-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this project is to develop a video surveillance system based on a background subtraction algorithm. The idea is to use a single static camera to track moving objects in a known environment. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library&lt;br /&gt;
* Linux OS&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by extending the algorithm to a camera network.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=Danch4.png &lt;br /&gt;
}}&lt;br /&gt;
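The core of background subtraction fits in a few lines. A toy C++ sketch of a per-pixel running-average background model with a thresholded foreground mask (the real project would operate on OpenCV frames; this version works on flat grayscale buffers, and the learning rate and threshold values are illustrative):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

struct BackgroundModel {
    std::vector<double> bg;  // per-pixel background estimate
    double alpha;            // learning rate of the running average
    double threshold;        // intensity difference marking foreground

    BackgroundModel(std::size_t pixels, double a = 0.05, double t = 25.0)
        : bg(pixels, 0.0), alpha(a), threshold(t) {}

    // Classify each pixel against the model, then update the model.
    std::vector<bool> apply(const std::vector<double>& frame) {
        std::vector<bool> fg(frame.size());
        for (std::size_t i = 0; i < frame.size(); ++i) {
            fg[i] = std::fabs(frame[i] - bg[i]) > threshold;
            bg[i] = (1.0 - alpha) * bg[i] + alpha * frame[i];  // adapt slowly
        }
        return fg;
    }
};
```

Because the model keeps adapting, slow illumination changes are absorbed into the background while fast-moving objects stand out; real systems add per-pixel variance modeling (e.g., mixture-of-Gaussians) on top of this idea.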
&lt;br /&gt;
&amp;lt;!--==== E-Science ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Machine Learning ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Learning API for TORCS&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, where competitors were asked to develop a controller using their preferred machine learning techniques. The goal of this project is to extend the existing C++ API (available [http://cig.dei.polimi.it/ here]) to simplify the development of controllers using a learning framework.&lt;br /&gt;
Such an extension can be partially developed by porting an existing Java API for TORCS that already provides many functionalities for machine learning approaches.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=TORCS.jpg}}&lt;br /&gt;
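To make the kind of API extension concrete, here is a hypothetical C++ shape for a learning-friendly controller interface: a drive() callback mapping sensor readings to actuator commands, which a learning framework can wrap. The actual competition API differs in names and sensor layout; every struct, field, and constant below is an assumption for illustration:

```cpp
#include <algorithm>
#include <array>
#include <cassert>

struct Sensors { double speed; std::array<double, 19> track; };  // range finders
struct Actions { double steer; double accel; };

// Abstract controller: a learning framework subclasses this and tunes
// whatever parameters drive() uses.
class Controller {
public:
    virtual ~Controller() = default;
    virtual Actions drive(const Sensors& s) = 0;
};

// Trivial baseline policy: steer toward the longest range-finder reading.
class SimpleController : public Controller {
public:
    Actions drive(const Sensors& s) override {
        auto best = std::max_element(s.track.begin(), s.track.end());
        int idx = int(best - s.track.begin());      // 0..18, index 9 = straight
        double steer = (9 - idx) / 9.0;             // positive = steer left
        double accel = s.speed < 80.0 ? 1.0 : 0.0;  // crude speed cap
        return { steer, accel };
    }
};
```

With such an interface, a learned policy and a hand-coded baseline are interchangeable, which is exactly what simplifies plugging in a learning framework.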
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= EyeBot&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it), Alessandro Giusti (giusti-AT-elet-DOT-polimi-DOT-it), and Pierluigi Taddei (taddei-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, where competitors were asked to develop a controller using their preferred machine learning techniques. So far, the controllers developed for TORCS have used as input only information extracted directly from the game state. The goal of this project is to extend the existing controller API (see [http://cig.dei.polimi.it/ here]) to use visual information (e.g., screenshots of the game) as input to the controllers. A successful project will include both the development of the API and some basic image preprocessing to extract information from the images.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=TORCS2.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= SmarTrack&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The generation of customized game content for each player is an attractive direction to improve the game experience in next-generation computer games. In this scenario, Machine Learning could play an important role in automatically providing such customized game content.&lt;br /&gt;
The goal of this project is to apply machine learning techniques for the generation of customized tracks in&lt;br /&gt;
[http://torcs.sourceforge.net/ TORCS], a state-of-the-art open source racing simulator. The project includes different activities: the automatic generation of tracks, the selection of relevant features to characterize a track, and the analysis of an interest measure.  &lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=TORCS3.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= TORCS competition&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, where competitors were asked to develop a controller using their preferred machine learning techniques.&lt;br /&gt;
The goal of this project is to apply any machine learning technique to develop a successful controller following the competition rules (available [http://cig.dei.polimi.it/?page_id=67 here]).&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2&lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=TORCS.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Ontologies and Semantic Web ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== Philosophy of Artificial Intelligence ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Robotics ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Simulation of 6-DOF Robot Manipulator&lt;br /&gt;
|tutor=Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this project is to develop a simulator for a 6-DOF robot manipulator, using the [http://www.ode.org/ ODE] (Open Dynamics Engine) library to simulate the rigid body dynamics. The project involves the following phases:&lt;br /&gt;
* Building the physical model of the manipulator&lt;br /&gt;
* Implementing the forward and inverse kinematic routines &lt;br /&gt;
* Implementing the trajectory planning routines&lt;br /&gt;
* Implementing the control modules&lt;br /&gt;
* Implementing an interface to control the robot movements&lt;br /&gt;
&lt;br /&gt;
This project allows students to put into practice what is explained during the first part of the Robotics course.&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis, by using the simulated manipulator to perform some learning experiments.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=10-15&lt;br /&gt;
|image=puma6dof1.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Calibration of IMU-camera system&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:DavideMigliore|Davide Migliore]]&lt;br /&gt;
|description=This work addresses the problem of calibrating a system composed of an Xsens &lt;br /&gt;
Inertial Measurement Unit and a Fire-i camera. The project will focus on &lt;br /&gt;
estimating both the unknown rotation between the two devices and the &lt;br /&gt;
extrinsic/intrinsic parameters of the camera. This calibration allows the &lt;br /&gt;
system to be used for SLAM or other robotics applications, such as a wearable &lt;br /&gt;
device for autonomous navigation or augmented reality. &lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab/C++&lt;br /&gt;
&lt;br /&gt;
;Links&lt;br /&gt;
:Matlab Toolbox for mutual calibration [http://www.deec.uc.pt/~jlobo/InerVis_WebIndex/InerVis_Toolbox.html]&lt;br /&gt;
:List of publications [http://www.deec.uc.pt/~jlobo/InerVis_WebIndex/InerVis_Pubs.php]&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=Imu_cam_big_sphere.gif}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robot games&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive game with robots using commercial devices such as the Wiimote (see the [http://airwiki.elet.polimi.it/mediawiki/index.php/Robogames Robogames page]).&lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Design and implementation of the game on one of the available robots&lt;br /&gt;
* Design of the game and a new suitable robot&lt;br /&gt;
* Implementation/setting of a suitable robot&lt;br /&gt;
* Evaluation of the game with users (in collaboration with [http://www.elet.polimi.it/people/garzotto Franca Garzotto])&lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with real mobile robots and real interaction devices.&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by producing a new game and robot.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-12.5&lt;br /&gt;
|image=Robowii_robot.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robocup: soccer robots&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it), Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to finalize the team of robots that will participate in the RoboCup world championship in Graz next summer (see the [http://www.robocup.org RoboCup page] and the [http://robocup.elet.polimi.it MRT Team page]).&lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Implementation of mechanical and electronic parts of the robots for ball handling and kicking&lt;br /&gt;
* Design of robot behaviors (fuzzy systems)&lt;br /&gt;
* Coordination of robots&lt;br /&gt;
* New sensors&lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with real mobile robots. Participation in the championship is a unique experience (2000 people, with 800 robots playing all sorts of games...)&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by tackling specific problems in depth.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-12.5&lt;br /&gt;
|image=RIeRO.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Soft Computing ====--&amp;gt;&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=File:Imu_cam_big_sphere.gif&amp;diff=4478</id>
		<title>File:Imu cam big sphere.gif</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=File:Imu_cam_big_sphere.gif&amp;diff=4478"/>
				<updated>2008-10-17T09:31:35Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=First_Level_Course_Projects&amp;diff=4477</id>
		<title>First Level Course Projects</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=First_Level_Course_Projects&amp;diff=4477"/>
				<updated>2008-10-17T09:30:03Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you can find a list of project proposals for the courses of &amp;quot;Progetto di Ingegneria Informatica&amp;quot; and &amp;quot;Progetto di Robotica&amp;quot; (5 CFU for each student)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Agents, Multiagent Systems, Agencies ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== BioSignal Analysis ====&lt;br /&gt;
&lt;br /&gt;
===== Brain-Computer Interface =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Development of an existing genetic algorithm for ERP-based BCIs&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=Different [http://en.wikipedia.org/wiki/Event-related_potential event-related potentials] (ERPs) are used in [[Brain-Computer Interface|BCIs]], e.g., [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300] and error potentials.&lt;br /&gt;
A [http://en.wikipedia.org/wiki/Genetic_algorithm genetic algorithm] (GA) for ERP feature extraction has been developed at the Airlab.  The GA has been shown to work, but there are several directions that can be explored to further develop the algorithm and expand its field of application.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++&lt;br /&gt;
:Good programming skills required&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
:B. Dal Seno, M. Matteucci, L. Mainardi. ''A Genetic Algorithm for Automatic Feature Extraction in P300 Detection'' [http://ieeexplore.ieee.org/search/srchabstract.jsp?arnumber=4634243&amp;amp;isnumber=4633757&amp;amp;punumber=4625775]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=Ga-scheme.png}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Driving an autonomous wheelchair with a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=This project pulls together different Airlab projects with the aim of driving an autonomous wheelchair ([[LURCH - The autonomous wheelchair|LURCH]]) with a [[Brain-Computer Interface|BCI]], through the development of key software modules.  Depending on the effort the student is willing to put into it, the project can grow into a full experimental thesis.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, C, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
:Linux&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: R. Blatt et al. ''Brain Control of a Smart Wheelchair'' [http://www.booksonline.iospress.com/Content/View.aspx?piid=9401]&lt;br /&gt;
|start=November 2008&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=LURCH_wheelchair.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Online automatic tuning of the number of repetitions in a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=In a [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300]-based [[Brain-Computer_Interface|BCI]], (visual) stimuli are presented to the user, and the intention of the user is recognized when a P300 potential is detected in response to the desired stimulus.  In order to improve accuracy, many stimulation rounds are usually performed before making a decision.  The exact number of repetitions depends on the user and on the goodness of the classifier, but it is usually fixed a priori.  The aim of this project is to adapt the number of repetitions to changing conditions, so as to achieve the maximum accuracy in the minimum time.&lt;br /&gt;
Depending on the effort the student is willing to put into it, the project can grow into a full experimental thesis.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: E. Donchin, K.M. Spencer, R. Wijesinghe. ''The Mental Prosthesis: Assessing the Speed of a P300-Based Brain-Computer Interface'' [http://www.cs.cmu.edu/~tanja/BCI/P300Speed_2000.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=B_p300_speller.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reproduction of an algorithm for the recognition of error potentials&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:BernardoDalSeno|Bernardo Dal Seno]]&lt;br /&gt;
|description=Error potentials (ErrPs) are [http://en.wikipedia.org/wiki/Event-related_potential event-related potentials] present in the EEG (electroencephalogram) when a subject makes a mistake or when the machine a subject is interacting with behaves in an unexpected way.  They could be used in the [[Brain-Computer Interface|BCI]] field to improve the performance of a BCI by automatically detecting classification errors.&lt;br /&gt;
The project aims at reproducing algorithms for ErrP detection from the literature.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
:P.W. Ferrez, J. Millán. ''You Are Wrong! Automatic Detection of Interaction Errors from Brain Waves'' [ftp://ftp.idiap.ch/pub/reports/2005/ferrez_2005_ijcai.pdf]&lt;br /&gt;
:G. Schalk et al. ''EEG-based communication: presence of an error potential'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;issn=13882457&amp;amp;volume=111&amp;amp;issue=12&amp;amp;firstpage=2138&amp;amp;form=html]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-15&lt;br /&gt;
|image=Bci_arch.png}}&lt;br /&gt;
&lt;br /&gt;
==== Affective Computing ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective VideoGames&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive video game (car game, shoot 'em up, strategy game, ...) able to adapt its behaviour in order to maximize your enjoyment. The game will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system will be able to adjust some parameters (e.g., difficulty of car game circuits, opponents' strength, ...) in order to keep your engagement constant: &amp;quot;In your flow zone!&amp;quot;. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the game (it is possible to start from an available open source game)&lt;br /&gt;
* Design of experimental protocol used to stimulate particular emotions&lt;br /&gt;
* Data acquisition by using biological sensors during the playing experience&lt;br /&gt;
* Off-line classification of data with available tools&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition &lt;br /&gt;
* Closed loop control: the game reacts to the user emotional state changing its behaviour.  &lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with biological-data acquisition tools and videogame design. &lt;br /&gt;
&lt;br /&gt;
Each project consists of one or more of these phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective recognition in multimedia contexts&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive multimedia application (advertisement, e-learning, recommendation system) able to capture your emotional state (interest, excitement, anger, joy) while you watch images, hear sounds, etc. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to give feedback on the quality of multimedia content (e.g., goodness of an advertisement, enjoyment of a movie, ...). &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the multimedia application.&lt;br /&gt;
* Design of experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition by using biological sensors during the multimedia experience.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition &lt;br /&gt;
* Closed loop control: the multimedia application will provide contents according to your enjoyment.  &lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with biological-data acquisition tools and multimedia application design. &lt;br /&gt;
&lt;br /&gt;
Each project consists of one or more of these phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=MultimediaAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective robotics&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop a rehabilitation robotic game able to capture your emotional state (interest, excitement, anger, joy, stress) while you interact with the robot. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to adapt the therapy (delivered through the game) to the patient's needs. We believe the quality of the therapy is related to the subject's emotional state. The long-term goal is to keep the user in a specific emotional state in order to maximize the efficacy of the therapy. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the robotic game on the available robot.&lt;br /&gt;
* Design of experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition by using biological sensors during the interaction with the robot.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition &lt;br /&gt;
* Closed loop control: the therapy will be adapted to the patient's needs.  &lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with biological-data acquisition tools, robots and videogame design. &lt;br /&gt;
&lt;br /&gt;
Each project consists of one or more of these phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=SimoAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Driving companions&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an application able to capture your emotional state (stress, attention level, ...) while driving a standard car. The application will measure the driver's stress level by analyzing his or her biological signals, which mirror the physiological state, and could be used to give feedback to the driver in dangerous situations.&lt;br /&gt;
Project phases: &lt;br /&gt;
* Design of experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition by using biological sensors while driving in different conditions (city, highway, country, ...)&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition &lt;br /&gt;
* Closed loop control: the car will give audio/visual feedback to the driver, letting them know their physiological state&lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with biological-data acquisition tools. &lt;br /&gt;
&lt;br /&gt;
Each project consists of one or more of these phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=CarAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Emotion from interaction&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to detect emotional states, such as stress or boredom, from the interaction with the computer via mouse and keyboard ([http://airwiki.elet.polimi.it/mediawiki/index.php/Emotion_from_Interaction Emotion from Interaction]). A library getting data from these devices has already been developed. Data have to be acquired in different situations and analyzed by neural networks or other classification tools already implemented.&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
&lt;br /&gt;
==== Computer Vision and Image Analysis ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Video surveillance system for indoor Environment&lt;br /&gt;
|tutor=Matteo Matteucci (matteucci-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this project is to develop a video surveillance system based on a background subtraction algorithm. The idea is to use a single static camera to track moving objects in a known environment. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library&lt;br /&gt;
* Linux o.s.&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by extending the algorithm to a camera network.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=Danch4.png &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== E-Science ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Machine Learning ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Learning API for TORCS&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, in which competitors were asked to develop a controller using their preferred machine learning techniques. The goal of this project is to extend the existing C++ API (available [http://cig.dei.polimi.it/ here]) to simplify the development of controllers using a learning framework.&lt;br /&gt;
Such an extension can be partially developed by porting an existing Java API for TORCS that already provides many functionalities for machine learning approaches.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=TORCS.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= EyeBot&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it), Alessandro Giusti (giusti-AT-elet-DOT-polimi-DOT-it), and Pierluigi Taddei (taddei-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, in which competitors were asked to develop a controller using their preferred machine learning techniques. So far, the controllers developed for TORCS have used as input only information extracted directly from the state of the game. The goal of this project is to extend the existing controller API (see [http://cig.dei.polimi.it/ here]) to use visual information (e.g., the screenshots of the game) as input to the controllers. A successful project will include both the development of the API and some basic image preprocessing to extract information from the images.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=TORCS2.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= SmarTrack&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The generation of customized game content for each player is an attractive direction for improving the game experience in next-generation computer games. In this scenario, Machine Learning could play an important role in automatically providing such customized game content.&lt;br /&gt;
The goal of this project is to apply machine learning techniques to the generation of customized tracks in&lt;br /&gt;
[http://torcs.sourceforge.net/ TORCS], a state-of-the-art open source racing simulator. The project includes different activities: the automatic generation of tracks, the selection of relevant features to characterize a track, and the analysis of an interest measure.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=TORCS3.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= TORCS competition&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal benchmark for machine learning techniques. We have already organized two successful competitions based on TORCS, in which competitors were asked to develop a controller using their preferred machine learning techniques.&lt;br /&gt;
The goal of this project is to apply any machine learning technique to develop a successful controller following the competition rules (available [http://cig.dei.polimi.it/?page_id=67 here]).&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2&lt;br /&gt;
|cfu=5 to 12.5&lt;br /&gt;
|image=TORCS.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Ontologies and Semantic Web ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== Philosophy of Artificial Intelligence ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Robotics ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Simulation of 6-DOF Robot Manipulator&lt;br /&gt;
|tutor=Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this project is to develop a simulator for a 6-DOF robot manipulator, using the [http://www.ode.org/ ODE] (Open Dynamics Engine) library to simulate the rigid body dynamics. The project involves the following phases:&lt;br /&gt;
* Building the physical model of the manipulator&lt;br /&gt;
* Implementing the forward and inverse kinematic routines &lt;br /&gt;
* Implementing the trajectory planning routines&lt;br /&gt;
* Implementing the control modules&lt;br /&gt;
* Implementing an interface to control the robot movements&lt;br /&gt;
&lt;br /&gt;
This project allows students to put into practice what is explained during the first part of the Robotics course.&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis, by using the simulated manipulator to perform some learning experiments.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=10-15&lt;br /&gt;
|image=puma6dof1.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Calibration of IMU-camera system&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]], [[User:DavideMigliore|Davide Migliore]]&lt;br /&gt;
|description=This work addresses the problem of calibrating a system composed of an Xsens &lt;br /&gt;
Inertial Measurement Unit and a Fire-i camera. The project will focus on &lt;br /&gt;
estimating both the unknown rotation between the two devices and the &lt;br /&gt;
extrinsic/intrinsic parameters of the camera. This calibration allows the &lt;br /&gt;
system to be used for SLAM or other robotics applications, such as a wearable &lt;br /&gt;
device for autonomous navigation or augmented reality. &lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab/C++&lt;br /&gt;
&lt;br /&gt;
;Links&lt;br /&gt;
:Matlab Toolbox for mutual calibration [http://www.deec.uc.pt/~jlobo/InerVis_WebIndex/InerVis_Toolbox.html]&lt;br /&gt;
:List of publications [http://www.deec.uc.pt/~jlobo/InerVis_WebIndex/InerVis_Pubs.php]&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=}}&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robot games&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive game with robots using commercial devices such as the Wiimote (see the [http://airwiki.elet.polimi.it/mediawiki/index.php/Robogames Robogames page]).&lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Design and implementation of the game on one of the available robots&lt;br /&gt;
* Design of the game and a new suitable robot&lt;br /&gt;
* Implementation/setting of a suitable robot&lt;br /&gt;
* Evaluation of the game with users (in collaboration with [http://www.elet.polimi.it/people/garzotto Franca Garzotto])&lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with real mobile robots and real interaction devices.&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by producing a new game and robot.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-12.5&lt;br /&gt;
|image=Robowii_robot.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robocup: soccer robots&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it), Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to finalize the team of robots that will participate in the RoboCup world championship in Graz next summer (see the [http://www.robocup.org RoboCup page] and the [http://robocup.elet.polimi.it MRT Team page]).&lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Implementation of mechanical and electronic parts of the robots for ball handling and kicking&lt;br /&gt;
* Design of robot behaviors (fuzzy systems)&lt;br /&gt;
* Coordination of robots&lt;br /&gt;
* New sensors&lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with real mobile robots. Participation in the championship is a unique experience (2000 people, with 800 robots playing all sorts of games...)&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by tackling specific problems in depth.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-12.5&lt;br /&gt;
|image=RIeRO.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Soft Computing ====--&amp;gt;&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Master_Level_Course_Projects&amp;diff=4317</id>
		<title>Master Level Course Projects</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Master_Level_Course_Projects&amp;diff=4317"/>
				<updated>2008-10-06T13:37:51Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: /* Computer Vision and Image Analysis */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you can find a list of project proposals for the courses of &amp;quot;Laboratorio di Intelligenza Artificiale e Robotica&amp;quot; (5 CFU for each student) and &amp;quot;Soft Computing&amp;quot; (1 CFU for each student)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Agents, Multiagent Systems, Agencies ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== BioSignal Analysis ====--&amp;gt;&lt;br /&gt;
==== Computer Vision and Image Analysis ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Environment Monitoring&lt;br /&gt;
|tutor=Matteo Matteucci (matteucci-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this project is to develop a video surveillance system to track vehicles or people in 3D. &lt;br /&gt;
The idea is to use one or more calibrated cameras to estimate the positions and trajectories of the moving objects in the scene. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library&lt;br /&gt;
* Linux o.s.&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
* Probabilistic robotics/IMAD&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by extending the algorithm to a generic outdoor environment.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=10-15&lt;br /&gt;
|image=Danch4.png &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Visual Merchandising&lt;br /&gt;
|tutor=Matteo Matteucci (matteucci-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this project is to develop algorithms to count the number of products on the shelves of a market.&lt;br /&gt;
The idea is to use a calibrated camera to recognize the shelves, estimate the scale and improve the image quality. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
|start= As soon as possible&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=VisualM.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== E-Science ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Affective Computing ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective VideoGames&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive video game (car game, shoot 'em up, strategy game, ...) able to adapt its behaviour in order to maximize your enjoyment. The game will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system will be able to adjust some parameters (e.g., difficulty of car game circuits, opponents' strength, ...) in order to keep your engagement constant: &amp;quot;In your flow zone!&amp;quot;. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the game (it is possible to start form avaliable open source game)&lt;br /&gt;
* Design of experimental protocol used to stimulate particolar emotions. &lt;br /&gt;
* Data acquisition by usign biological sensors during the playing experience.&lt;br /&gt;
* Off-line classification of data with avaliable tools.&lt;br /&gt;
* Desing and develop of on-line classifier sistem for emotion recognition &lt;br /&gt;
* Closed loop control: the game reacts to the user emotional state changing its behaviour.  &lt;br /&gt;
&lt;br /&gt;
These projects allow to experiment with biological-data acquisition tools and videogames design. &lt;br /&gt;
&lt;br /&gt;
Each project consists on the realization of one or more phases depending on the difficulty/cfu to be achieved and to the competences of&lt;br /&gt;
the candidate(s)&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective recognition in multimedia contexts&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive multimedia application (advertisement, e-learning, recommendation system) able to capture your emotional state (interest, excitement, anger, joy) while you are watching images, listening to sounds, etc. The application will measure your excitement by analysing your biological signals, which mirror your emotional state. The system could be used to give feedback on the quality of multimedia content (e.g. the effectiveness of an advertisement, the enjoyment of a movie, ...). &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the multimedia application.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the multimedia experience.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the multimedia application will provide contents according to your enjoyment. &lt;br /&gt;
&lt;br /&gt;
These projects allow you to experiment with biological-data acquisition tools and multimedia application design. &lt;br /&gt;
&lt;br /&gt;
The project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=MultimediaAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective robotics&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop a rehabilitation robotic game able to capture your emotional state (interest, excitement, anger, joy, stress) while you are interacting with the robot. The application will measure your excitement by analysing your biological signals, which mirror your emotional state. The system could be used to adapt the therapy (executed by the game) according to the patient's needs. We believe the quality of the therapy is related to the subject's emotional state. The long-term goal is to keep the user in a specific emotional state in order to maximize the efficacy of the therapy. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the robotic game on the available robot.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the interaction with the robot.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the therapy will be adapted to the patient's needs. &lt;br /&gt;
&lt;br /&gt;
These projects allow you to experiment with biological-data acquisition tools, robots and videogame design. &lt;br /&gt;
&lt;br /&gt;
The project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=SimoAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Driving companions&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an application able to capture your emotional state (stress, attention level, ...) while driving a standard car. The application will measure the driver's stress level by analysing their biological signals, which mirror their physiological state, and could be used to give feedback to the driver in dangerous situations.&lt;br /&gt;
Project phases: &lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors while driving in different conditions (city, highway, country, ...)&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the car will give audio/visual feedback to the driver, informing them of their physiological state.&lt;br /&gt;
&lt;br /&gt;
These projects allow you to experiment with biological-data acquisition tools and in-vehicle application design. &lt;br /&gt;
&lt;br /&gt;
The project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=CarAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
==== Machine Learning ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reinforcement Learning Competition&lt;br /&gt;
|tutor=Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=This project has the goal of participating in (and possibly winning ;)) the 2009 Reinforcement Learning Competition. To get an idea of what participating in such a competition means, you can have a look at the website of the [http://rl-competition.org/content/view/51/79/ 2008 RL competition].&lt;br /&gt;
The problems that will be proposed are still unknown. As soon as the domains are published, the work will start by analysing their main characteristics; then we will identify which RL algorithms are best suited to solving such problems. After an implementation phase, the project will require a long experimental period to tune the parameters of the learning algorithms in order to improve the performance as much as possible.&lt;br /&gt;
|start=January, 2009&lt;br /&gt;
|number=2-4&lt;br /&gt;
|cfu=10-20&lt;br /&gt;
|image=keepaway.gif}}&lt;br /&gt;
&amp;lt;!--==== Ontologies and Semantic Web ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== Philosophy of Artificial Intelligence ====--&amp;gt;&lt;br /&gt;
==== Robotics ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robot games&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive game with robots using commercial devices such as the Wii Remote (see the [http://airwiki.elet.polimi.it/mediawiki/index.php/Robogames Robogames page]).  &lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Design and implementation of the game on one of the available robots&lt;br /&gt;
* Design of the game and a new suitable robot&lt;br /&gt;
* Implementation/setting of a suitable robot&lt;br /&gt;
* Evaluation of the game with users (in collaboration with [http://www.elet.polimi.it/people/garzotto Franca Garzotto])&lt;br /&gt;
&lt;br /&gt;
These projects allow you to experiment with real mobile robots and real interaction devices.&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by producing a new game and robot.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-12.5&lt;br /&gt;
|image=Robowii_robot.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Soft Computing ====--&amp;gt;&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Master_Level_Course_Projects&amp;diff=4316</id>
		<title>Master Level Course Projects</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Master_Level_Course_Projects&amp;diff=4316"/>
				<updated>2008-10-06T13:37:13Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: /* Computer Vision and Image Analysis */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you can find a list of project proposals for the courses of &amp;quot;Laboratorio di Intelligenza Artificiale e Robotica&amp;quot; (5 CFU for each student) and &amp;quot;Soft Computing&amp;quot; (1 CFU for each student)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Agents, Multiagent Systems, Agencies ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== BioSignal Analysis ====--&amp;gt;&lt;br /&gt;
==== Computer Vision and Image Analysis ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Environment Monitoring&lt;br /&gt;
|tutor=Matteo Matteucci (matteucci-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this project is to develop a video surveillance system to track vehicles or people in 3D. &lt;br /&gt;
The idea is to use one or more calibrated cameras to estimate the positions and trajectories of the moving objects in the scene. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library&lt;br /&gt;
* Linux o.s.&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
* Probabilistic robotics/IMAD&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by extending the algorithm to a generic outdoor environment.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=10-15&lt;br /&gt;
|image=Danch4.png &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Visual Merchandising&lt;br /&gt;
|tutor=Matteo Matteucci (matteucci-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this project is to develop algorithms to count the number of products on the shelves of a market.&lt;br /&gt;
The idea is to use a calibrated camera to recognize the shelves, estimate the scale and improve the image quality. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
|start= now&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=VisualM.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== E-Science ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Affective Computing ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective VideoGames&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive video game (car game, shoot 'em up, strategy game, ...) able to adapt its behaviour in order to maximize your enjoyment. The game will measure your excitement by analysing your biological signals, which mirror your emotional state. The system will be able to adjust some parameters (e.g. the difficulty of car game circuits, the opponents' strength, ...) in order to keep your engagement constant: &amp;quot;In your flow zone!&amp;quot; &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the game (it is possible to start from an available open-source game)&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the playing experience.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the game reacts to the user's emotional state by changing its behaviour. &lt;br /&gt;
&lt;br /&gt;
These projects allow you to experiment with biological-data acquisition tools and videogame design. &lt;br /&gt;
&lt;br /&gt;
Each project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective recognition in multimedia contexts&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive multimedia application (advertisement, e-learning, recommendation system) able to capture your emotional state (interest, excitement, anger, joy) while you are watching images, listening to sounds, etc. The application will measure your excitement by analysing your biological signals, which mirror your emotional state. The system could be used to give feedback on the quality of multimedia content (e.g. the effectiveness of an advertisement, the enjoyment of a movie, ...). &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the multimedia application.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the multimedia experience.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the multimedia application will provide contents according to your enjoyment. &lt;br /&gt;
&lt;br /&gt;
These projects allow you to experiment with biological-data acquisition tools and multimedia application design. &lt;br /&gt;
&lt;br /&gt;
The project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=MultimediaAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective robotics&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop a rehabilitation robotic game able to capture your emotional state (interest, excitement, anger, joy, stress) while you are interacting with the robot. The application will measure your excitement by analysing your biological signals, which mirror your emotional state. The system could be used to adapt the therapy (executed by the game) according to the patient's needs. We believe the quality of the therapy is related to the subject's emotional state. The long-term goal is to keep the user in a specific emotional state in order to maximize the efficacy of the therapy. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the robotic game on the available robot.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the interaction with the robot.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the therapy will be adapted to the patient's needs. &lt;br /&gt;
&lt;br /&gt;
These projects allow you to experiment with biological-data acquisition tools, robots and videogame design. &lt;br /&gt;
&lt;br /&gt;
The project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=SimoAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Driving companions&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an application able to capture your emotional state (stress, attention level, ...) while driving a standard car. The application will measure the driver's stress level by analysing their biological signals, which mirror their physiological state, and could be used to give feedback to the driver in dangerous situations.&lt;br /&gt;
Project phases: &lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors while driving in different conditions (city, highway, country, ...)&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the car will give audio/visual feedback to the driver, informing them of their physiological state.&lt;br /&gt;
&lt;br /&gt;
These projects allow you to experiment with biological-data acquisition tools and in-vehicle application design. &lt;br /&gt;
&lt;br /&gt;
The project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=CarAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
==== Machine Learning ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reinforcement Learning Competition&lt;br /&gt;
|tutor=Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=This project has the goal of participating in (and possibly winning ;)) the 2009 Reinforcement Learning Competition. To get an idea of what participating in such a competition means, you can have a look at the website of the [http://rl-competition.org/content/view/51/79/ 2008 RL competition].&lt;br /&gt;
The problems that will be proposed are still unknown. As soon as the domains are published, the work will start by analysing their main characteristics; then we will identify which RL algorithms are best suited to solving such problems. After an implementation phase, the project will require a long experimental period to tune the parameters of the learning algorithms in order to improve the performance as much as possible.&lt;br /&gt;
|start=January, 2009&lt;br /&gt;
|number=2-4&lt;br /&gt;
|cfu=10-20&lt;br /&gt;
|image=keepaway.gif}}&lt;br /&gt;
&amp;lt;!--==== Ontologies and Semantic Web ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== Philosophy of Artificial Intelligence ====--&amp;gt;&lt;br /&gt;
==== Robotics ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robot games&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive game with robots using commercial devices such as the Wii Remote (see the [http://airwiki.elet.polimi.it/mediawiki/index.php/Robogames Robogames page]).  &lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Design and implementation of the game on one of the available robots&lt;br /&gt;
* Design of the game and a new suitable robot&lt;br /&gt;
* Implementation/setting of a suitable robot&lt;br /&gt;
* Evaluation of the game with users (in collaboration with [http://www.elet.polimi.it/people/garzotto Franca Garzotto])&lt;br /&gt;
&lt;br /&gt;
These projects allow you to experiment with real mobile robots and real interaction devices.&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by producing a new game and robot.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-12.5&lt;br /&gt;
|image=Robowii_robot.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Soft Computing ====--&amp;gt;&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Master_Level_Course_Projects&amp;diff=4315</id>
		<title>Master Level Course Projects</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Master_Level_Course_Projects&amp;diff=4315"/>
				<updated>2008-10-06T13:36:36Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you can find a list of project proposals for the courses of &amp;quot;Laboratorio di Intelligenza Artificiale e Robotica&amp;quot; (5 CFU for each student) and &amp;quot;Soft Computing&amp;quot; (1 CFU for each student)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Agents, Multiagent Systems, Agencies ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== BioSignal Analysis ====--&amp;gt;&lt;br /&gt;
==== Computer Vision and Image Analysis ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Environment Monitoring&lt;br /&gt;
|tutor=Matteo Matteucci (matteucci-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this project is to develop a video surveillance system to track vehicles or people in 3D. &lt;br /&gt;
The idea is to use one or more calibrated cameras to estimate the positions and trajectories of the moving objects in the scene. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library&lt;br /&gt;
* Linux o.s.&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
* Probabilistic robotics/IMAD&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by extending the algorithm to a generic outdoor environment.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=10-15&lt;br /&gt;
|image=Danch4.png &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Visual Merchandising&lt;br /&gt;
|tutor=Matteo Matteucci (matteucci-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this project is to develop algorithms to count the number of products on the shelves of a market.&lt;br /&gt;
The idea is to use a calibrated camera to recognize the shelves, estimate the scale and improve the image quality. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=VisualM.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== E-Science ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Affective Computing ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective VideoGames&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive video game (car game, shoot 'em up, strategy game, ...) able to adapt its behaviour in order to maximize your enjoyment. The game will measure your excitement by analysing your biological signals, which mirror your emotional state. The system will be able to adjust some parameters (e.g. the difficulty of car game circuits, the opponents' strength, ...) in order to keep your engagement constant: &amp;quot;In your flow zone!&amp;quot; &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the game (it is possible to start from an available open-source game)&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the playing experience.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the game reacts to the user's emotional state by changing its behaviour. &lt;br /&gt;
&lt;br /&gt;
These projects allow you to experiment with biological-data acquisition tools and videogame design. &lt;br /&gt;
&lt;br /&gt;
Each project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective recognition in multimedia contexts&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive multimedia application (advertisement, e-learning, recommendation system) able to capture your emotional state (interest, excitement, anger, joy) while you are watching images, listening to sounds, etc. The application will measure your excitement by analysing your biological signals, which mirror your emotional state. The system could be used to give feedback on the quality of multimedia content (e.g. the effectiveness of an advertisement, the enjoyment of a movie, ...). &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the multimedia application.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the multimedia experience.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the multimedia application will provide contents according to your enjoyment. &lt;br /&gt;
&lt;br /&gt;
These projects allow you to experiment with biological-data acquisition tools and multimedia application design. &lt;br /&gt;
&lt;br /&gt;
The project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=MultimediaAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective robotics&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop a rehabilitation robotic game able to capture your emotional state (interest, excitement, anger, joy, stress) while you are interacting with the robot. The application will measure your excitement by analysing your biological signals, which mirror your emotional state. The system could be used to adapt the therapy (executed by the game) according to the patient's needs. We believe the quality of the therapy is related to the subject's emotional state. The long-term goal is to keep the user in a specific emotional state in order to maximize the efficacy of the therapy. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the robotic game on the available robot.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the interaction with the robot.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the therapy will be adapted to the patient's needs. &lt;br /&gt;
&lt;br /&gt;
These projects allow you to experiment with biological-data acquisition tools, robots and videogame design. &lt;br /&gt;
&lt;br /&gt;
The project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=SimoAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Driving companions&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an application able to capture your emotional state (stress, attention level, ...) while driving a standard car. The application will measure the driver's stress level by analysing their biological signals, which mirror their physiological state, and could be used to give feedback to the driver in dangerous situations.&lt;br /&gt;
Project phases: &lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors while driving in different conditions (city, highway, country, ...)&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed-loop control: the car will give audio/visual feedback to the driver, informing them of their physiological state.&lt;br /&gt;
&lt;br /&gt;
These projects allow you to experiment with biological-data acquisition tools and in-vehicle application design. &lt;br /&gt;
&lt;br /&gt;
The project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=CarAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
==== Machine Learning ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reinforcement Learning Competition&lt;br /&gt;
|tutor=Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=This project has the goal of participating in (and possibly winning ;)) the 2009 Reinforcement Learning Competition. To get an idea of what participating in such a competition means, you can have a look at the website of the [http://rl-competition.org/content/view/51/79/ 2008 RL competition].&lt;br /&gt;
The problems that will be proposed are still unknown. As soon as the domains are published, the work will start by analysing their main characteristics; then we will identify which RL algorithms are best suited to solving such problems. After an implementation phase, the project will require a long experimental period to tune the parameters of the learning algorithms in order to improve the performance as much as possible.&lt;br /&gt;
|start=January, 2009&lt;br /&gt;
|number=2-4&lt;br /&gt;
|cfu=10-20&lt;br /&gt;
|image=keepaway.gif}}&lt;br /&gt;
&amp;lt;!--==== Ontologies and Semantic Web ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== Philosophy of Artificial Intelligence ====--&amp;gt;&lt;br /&gt;
==== Robotics ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robot games&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive game with robots using commercial devices such as the Wiimote (see the [http://airwiki.elet.polimi.it/mediawiki/index.php/Robogames Robogames page]).&lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Design and implementation of the game on one of the available robots&lt;br /&gt;
* Design of the game and a new suitable robot&lt;br /&gt;
* Implementation/setting of a suitable robot&lt;br /&gt;
* Evaluation of the game with users (in collaboration with [http://www.elet.polimi.it/people/garzotto Franca Garzotto])&lt;br /&gt;
&lt;br /&gt;
These projects allow experimenting with real mobile robots and real interaction devices.&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by producing a new game and robot.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-12.5&lt;br /&gt;
|image=Robowii_robot.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Soft Computing ====--&amp;gt;&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Master_Level_Course_Projects&amp;diff=4314</id>
		<title>Master Level Course Projects</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Master_Level_Course_Projects&amp;diff=4314"/>
				<updated>2008-10-06T13:36:08Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you can find a list of project proposals for the courses of &amp;quot;Laboratorio di Intelligenza Artificiale e Robotica&amp;quot; (5 CFU for each student) and &amp;quot;Soft Computing&amp;quot; (1 CFU for each student)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Agents, Multiagent Systems, Agencies ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== BioSignal Analysis ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== Computer Vision and Image Analysis ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Environment Monitoring&lt;br /&gt;
|tutor=Matteo Matteucci (matteucci-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this project is to develop a video surveillance system to track vehicles or people in 3D. &lt;br /&gt;
The idea is to use one or more calibrated cameras to estimate the positions and trajectories of the moving objects in the scene. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library&lt;br /&gt;
* Linux o.s.&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
* Probabilistic robotics/IMAD&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by extending the algorithm to generic outdoor environments.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=10-15&lt;br /&gt;
|image=Danch4.png &lt;br /&gt;
}}&lt;br /&gt;
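The project itself calls for C/C++ and OpenCV; purely to illustrate the geometry of localizing an object with one calibrated camera, here is a NumPy sketch that back-projects a pixel and intersects the viewing ray with an assumed flat ground plane (the intrinsics and camera height below are made up):

```python
import numpy as np

def pixel_to_ground(u, v, K, cam_height):
    """Back-project pixel (u, v) through a camera with intrinsics K (identity
    rotation, y-axis pointing down) and intersect the viewing ray with a flat
    ground plane located cam_height units below the camera (plane y = cam_height)."""
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])  # viewing-ray direction, camera frame
    t = cam_height / d[1]                         # scale at which the ray hits the ground
    return t * d                                  # 3D position of the object on the ground
```

Round-tripping a known 3D ground point through projection and back recovers it exactly, which is the basic check one would run before moving to real, noisy detections.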
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Visual Merchandising&lt;br /&gt;
|tutor=Matteo Matteucci (matteucci-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this project is to develop algorithms to count the number of products on the shelves of a supermarket.&lt;br /&gt;
The idea is to use a calibrated camera to recognize the shelves, estimate the scale and improve the image quality. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library &lt;br /&gt;
* Matlab (optionally) &lt;br /&gt;
* Linux&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=VisualM.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== E-Science ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Affective Computing ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective VideoGames&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive video game (car game, shoot 'em up, strategy game, ...) able to adapt its behaviour in order to maximize your enjoyment. The game will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system will be able to adjust some parameters (e.g. the difficulty of car game circuits, the opponents' strength, ...) in order to keep your engagement constant: &amp;quot;In your flow zone!&amp;quot;. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the game (it is possible to start from an available open source game)&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the playing experience.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed loop control: the game reacts to the user's emotional state by changing its behaviour.  &lt;br /&gt;
&lt;br /&gt;
These projects allow experimenting with biological-data acquisition tools and videogame design. &lt;br /&gt;
&lt;br /&gt;
Each project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
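As a hint of what the off-line classification phase might look like, here is a minimal nearest-centroid sketch; the two features (heart rate, skin conductance) and the "calm"/"excited" labels are invented for illustration and do not reflect the project's actual sensors or tools:

```python
import numpy as np

def train_centroids(X, y):
    """Fit a nearest-centroid classifier: one mean feature vector per label."""
    return {c: X[np.array(y) == c].mean(axis=0) for c in sorted(set(y))}

def classify(x, centroids):
    """Return the label whose centroid is closest in Euclidean distance."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
```

An on-line version of the same idea would simply apply `classify` to each new window of sensor features as the subject plays.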
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective recognition in multimedia contexts&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive multimedia application (advertisement, e-learning, recommendation system) able to capture your emotional state (interest, excitement, anger, joy) while you watch images, listen to sounds, etc. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to give feedback on the quality of multimedia content (e.g. the effectiveness of an advertisement, the enjoyment of a movie, ...). &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the multimedia application.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the multimedia experience.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed loop control: the multimedia application will provide contents according to your enjoyment.  &lt;br /&gt;
&lt;br /&gt;
These projects allow experimenting with biological-data acquisition tools and multimedia application design. &lt;br /&gt;
&lt;br /&gt;
The project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=MultimediaAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective robotics&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop a rehabilitation robotic game able to capture your emotional state (interest, excitement, anger, joy, stress) while you interact with the robot. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to adapt the therapy (executed by the game) to the patient's needs. We believe the quality of the therapy is related to the subject's emotional state. The long-term goal is to keep the user in a specific emotional state in order to maximize the therapy's efficacy. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the robotic game on the available robot.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the interaction with the robot.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed loop control: the therapy will be adapted to the patient's needs.  &lt;br /&gt;
&lt;br /&gt;
These projects allow experimenting with biological-data acquisition tools, robots and videogame design. &lt;br /&gt;
&lt;br /&gt;
The project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=SimoAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Driving companions&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an application able to capture your emotional state (stress, attention level, ...) while driving standard cars. The application will measure the driver's stress level by analyzing his biological signals, which mirror his physiological state, and could be used to give feedback to the driver in dangerous situations.&lt;br /&gt;
Project phases: &lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors while driving in different conditions (city, highway, country, ...)&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed loop control: the car will give audio/visual feedback to the user, letting him know his physiological state.&lt;br /&gt;
&lt;br /&gt;
These projects allow experimenting with biological-data acquisition tools, robots and videogame design. &lt;br /&gt;
&lt;br /&gt;
The project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=CarAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
==== Machine Learning ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reinforcement Learning Competition&lt;br /&gt;
|tutor=Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=This project has the goal of participating in (and possibly winning ;)) the 2009 Reinforcement Learning Competition. To get an idea of what participating in such a competition means, you can have a look at the website of the [http://rl-competition.org/content/view/51/79/ 2008 RL competition].&lt;br /&gt;
The problems that will be proposed are still unknown. As soon as the domains are published, the work will start by analyzing their main characteristics; we will then identify which RL algorithms are best suited to solving such problems. After an implementation phase, the project will require a long experimental period to tune the parameters of the learning algorithms in order to improve performance as much as possible.&lt;br /&gt;
|start=January, 2009&lt;br /&gt;
|number=2-4&lt;br /&gt;
|cfu=10-20&lt;br /&gt;
|image=keepaway.gif}}&lt;br /&gt;
&amp;lt;!--==== Ontologies and Semantic Web ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== Philosophy of Artificial Intelligence ====--&amp;gt;&lt;br /&gt;
==== Robotics ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robot games&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive game with robots using commercial devices such as the Wiimote (see the [http://airwiki.elet.polimi.it/mediawiki/index.php/Robogames Robogames page]).&lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Design and implementation of the game on one of the available robots&lt;br /&gt;
* Design of the game and a new suitable robot&lt;br /&gt;
* Implementation/setting of a suitable robot&lt;br /&gt;
* Evaluation of the game with users (in collaboration with [http://www.elet.polimi.it/people/garzotto Franca Garzotto])&lt;br /&gt;
&lt;br /&gt;
These projects allow experimenting with real mobile robots and real interaction devices.&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by producing a new game and robot.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-12.5&lt;br /&gt;
|image=Robowii_robot.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Soft Computing ====--&amp;gt;&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=File:VisualM.jpg&amp;diff=4313</id>
		<title>File:VisualM.jpg</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=File:VisualM.jpg&amp;diff=4313"/>
				<updated>2008-10-06T12:47:34Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=First_Level_Course_Projects&amp;diff=4312</id>
		<title>First Level Course Projects</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=First_Level_Course_Projects&amp;diff=4312"/>
				<updated>2008-10-06T11:56:41Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: /* Computer Vision and Image Analysis */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you can find a list of project proposals for the courses of &amp;quot;Progetto di Ingegneria Informatica&amp;quot; and &amp;quot;Progetto di Robotica&amp;quot; (5 CFU for each student)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Agents, Multiagent Systems, Agencies ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== BioSignal Analysis ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Affective Computing ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective VideoGames&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive video game (car game, shoot 'em up, strategy game, ...) able to adapt its behaviour in order to maximize your enjoyment. The game will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system will be able to adjust some parameters (e.g. the difficulty of car game circuits, the opponents' strength, ...) in order to keep your engagement constant: &amp;quot;In your flow zone!&amp;quot;. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the game (it is possible to start from an available open source game)&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions&lt;br /&gt;
* Data acquisition using biological sensors during the playing experience&lt;br /&gt;
* Off-line classification of data with available tools&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition &lt;br /&gt;
* Closed loop control: the game reacts to the user's emotional state by changing its behaviour.  &lt;br /&gt;
&lt;br /&gt;
These projects allow experimenting with biological-data acquisition tools and videogame design. &lt;br /&gt;
&lt;br /&gt;
Each project consists in the realization of one or more phases depending on the difficulty/cfu to be achieved and on the competences of&lt;br /&gt;
the candidate(s)&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective recognition in multimedia contexts&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive multimedia application (advertisement, e-learning, recommendation system) able to capture your emotional state (interest, excitement, anger, joy) while you watch images, listen to sounds, etc. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to give feedback on the quality of multimedia content (e.g. the effectiveness of an advertisement, the enjoyment of a movie, ...). &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the multimedia application.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the multimedia experience.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition &lt;br /&gt;
* Closed loop control: the multimedia application will provide contents according to your enjoyment.  &lt;br /&gt;
&lt;br /&gt;
These projects allow experimenting with biological-data acquisition tools and multimedia application design. &lt;br /&gt;
&lt;br /&gt;
Each project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=MultimediaAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective robotics&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop a rehabilitation robotic game able to capture your emotional state (interest, excitement, anger, joy, stress) while you interact with the robot. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to adapt the therapy (executed by the game) to the patient's needs. We believe the quality of the therapy is related to the subject's emotional state. The long-term goal is to keep the user in a specific emotional state in order to maximize the therapy's efficacy. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the robotic game on the available robot.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the interaction with the robot.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition &lt;br /&gt;
* Closed loop control: the therapy will be adapted to the patient's needs.  &lt;br /&gt;
&lt;br /&gt;
These projects allow experimenting with biological-data acquisition tools, robots and videogame design. &lt;br /&gt;
&lt;br /&gt;
Each project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=SimoAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Driving companions&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an application able to capture your emotional state (stress, attention level, ...) while driving standard cars. The application will measure the driver's stress level by analyzing his biological signals, which mirror his physiological state, and could be used to give feedback to the driver in dangerous situations.&lt;br /&gt;
Project phases: &lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors while driving in different conditions (city, highway, country, ...)&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition &lt;br /&gt;
* Closed loop control: the car will give audio/visual feedback to the user, letting him know his physiological state&lt;br /&gt;
&lt;br /&gt;
These projects allow experimenting with biological-data acquisition tools, robots and videogame design. &lt;br /&gt;
&lt;br /&gt;
Each project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=CarAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
==== Computer Vision and Image Analysis ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Video surveillance system for indoor Environment&lt;br /&gt;
|tutor=Matteo Matteucci (matteucci-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this project is to develop a video surveillance system based on a background subtraction algorithm. The idea is to use a single static camera to track moving objects in a known environment. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library&lt;br /&gt;
* Linux o.s.&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by extending the algorithm to a camera network.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=2.5-15&lt;br /&gt;
|image=Danch4.png &lt;br /&gt;
}}&lt;br /&gt;
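The project specifies C/C++ and OpenCV; as a language-neutral illustration of the background-subtraction idea it is built on, here is a running-average NumPy sketch (the adaptation rate and threshold below are arbitrary, not values from the project):

```python
import numpy as np

def subtract_background(frames, alpha=0.05, threshold=25):
    """Running-average background subtraction: keep a per-pixel background
    model, flag pixels whose difference from it exceeds `threshold` as
    foreground, and slowly adapt the model toward each new frame."""
    bg = frames[0].astype(float)  # initialize the background from the first frame
    masks = []
    for f in frames[1:]:
        f = f.astype(float)
        masks.append(np.abs(f - bg) > threshold)  # True = likely moving object
        bg = (1 - alpha) * bg + alpha * f          # adapt the background model
    return masks
```

With a static camera, a bright blob appearing against an otherwise constant scene shows up as a compact foreground region in the mask, which is the input a tracker would then consume.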
&lt;br /&gt;
&amp;lt;!--==== E-Science ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== Machine Learning ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== Ontologies and Semantic Web ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== Philosophy of Artificial Intelligence ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Robotics ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Simulation of 6-DOF Robot Manipulator&lt;br /&gt;
|tutor=Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this project is to develop a simulator for a 6-DOF robot manipulator, using the [http://www.ode.org/ ode] (open dynamics engine) library for simulating the rigid body dynamics. The project involves the following phases:&lt;br /&gt;
* Building the physical model of the manipulator&lt;br /&gt;
* Implementing the forward and inverse kinematic routines &lt;br /&gt;
* Implementing the trajectory planning routines&lt;br /&gt;
* Implementing the control modules&lt;br /&gt;
* Implementing an interface to control the robot movements&lt;br /&gt;
&lt;br /&gt;
This project allows putting into practice what has been explained during the first part of the Robotics course.&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis, by using the simulated manipulator to perform some learning experiments.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=10-15&lt;br /&gt;
|image=puma6dof1.jpg}}&lt;br /&gt;
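The simulator targets a full 6-DOF arm in ODE; as a warm-up for the forward kinematics routine, the same idea of accumulating joint angles along the chain can be sketched for a simplified, hypothetical planar serial arm:

```python
import math

def forward_kinematics(lengths, angles):
    """Forward kinematics of a planar serial arm: accumulate joint angles
    along the chain and sum the link vectors to obtain the end-effector
    (x, y) position."""
    x = y = theta = 0.0
    for link_length, joint_angle in zip(lengths, angles):
        theta += joint_angle                 # orientation of this link
        x += link_length * math.cos(theta)   # add the link's contribution
        y += link_length * math.sin(theta)
    return x, y
```

For two unit links with both joints at zero the arm lies along the x-axis at (2, 0); rotating the base joint by 90 degrees swings the whole arm to (0, 2). The 6-DOF version replaces this 2D accumulation with homogeneous transforms per joint.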
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robot games&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive game with robots using commercial devices such as the Wiimote (see the [http://airwiki.elet.polimi.it/mediawiki/index.php/Robogames Robogames page]).&lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Design and implementation of the game on one of the available robots&lt;br /&gt;
* Design of the game and a new suitable robot&lt;br /&gt;
* Implementation/setting of a suitable robot&lt;br /&gt;
* Evaluation of the game with users (in collaboration with [http://www.elet.polimi.it/people/garzotto Franca Garzotto])&lt;br /&gt;
&lt;br /&gt;
These projects allow experimenting with real mobile robots and real interaction devices.&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by producing a new game and robot.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-12.5&lt;br /&gt;
|image=Robowii_robot.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Soft Computing ====--&amp;gt;&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=First_Level_Course_Projects&amp;diff=4311</id>
		<title>First Level Course Projects</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=First_Level_Course_Projects&amp;diff=4311"/>
				<updated>2008-10-06T11:55:59Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: /* Computer Vision and Image Analysis */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you can find a list of project proposals for the courses of &amp;quot;Progetto di Ingegneria Informatica&amp;quot; and &amp;quot;Progetto di Robotica&amp;quot; (5 CFU for each student)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Agents, Multiagent Systems, Agencies ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== BioSignal Analysis ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Affective Computing ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective VideoGames&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive video game (car game, shoot 'em up, strategy game, ...) able to adapt its behaviour in order to maximize your enjoyment. The game will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system will be able to adjust some parameters (e.g. the difficulty of car game circuits, the opponents' strength, ...) in order to keep your engagement constant: &amp;quot;In your flow zone!&amp;quot;. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the game (it is possible to start from an available open source game)&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions&lt;br /&gt;
* Data acquisition using biological sensors during the playing experience&lt;br /&gt;
* Off-line classification of data with available tools&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition &lt;br /&gt;
* Closed loop control: the game reacts to the user's emotional state by changing its behaviour.  &lt;br /&gt;
&lt;br /&gt;
These projects allow experimenting with biological-data acquisition tools and videogame design. &lt;br /&gt;
&lt;br /&gt;
Each project consists in the realization of one or more phases depending on the difficulty/cfu to be achieved and on the competences of&lt;br /&gt;
the candidate(s)&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective recognition in multimedia contexts&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive multimedia application (advertisement, e-learning, recommendation system) able to capture your emotional state (interest, excitement, anger, joy) while you watch images, listen to sounds, etc. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to give feedback on the quality of multimedia content (e.g. the effectiveness of an advertisement, the enjoyment of a movie, ...). &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the multimedia application.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the multimedia experience.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition &lt;br /&gt;
* Closed loop control: the multimedia application will provide contents according to your enjoyment.  &lt;br /&gt;
&lt;br /&gt;
These projects allow experimenting with biological-data acquisition tools and multimedia application design. &lt;br /&gt;
&lt;br /&gt;
Each project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=MultimediaAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective robotics&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop a rehabilitation robotic game able to capture your emotional state (interest, excitement, anger, joy, stress) while you interact with the robot. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to adapt the therapy (executed by the game) to the patient's needs. We believe the quality of the therapy is related to the subject's emotional state. The long-term goal is to keep the user in a specific emotional state in order to maximize the therapy's efficacy. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the robotic game on the available robot.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition using biological sensors during the interaction with the robot.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition &lt;br /&gt;
* Closed loop control: the therapy will be adapted to the patient's needs.  &lt;br /&gt;
&lt;br /&gt;
These projects allow experimenting with biological-data acquisition tools, robots and videogame design. &lt;br /&gt;
&lt;br /&gt;
Each project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=SimoAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Driving companions&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an application able to capture your emotional state (stress, attention level, ...) while driving standard cars. The application will measure the driver's stress level by analyzing his biological signals, which mirror his physiological state, and could be used to give feedback to the driver in dangerous situations.&lt;br /&gt;
Project phases: &lt;br /&gt;
* Design of experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition by using biological sensors while driving in different conditions (city, highway, country, ...)&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed loop control: the car will give audio/visual feedback to the driver, letting him know his physiological state.&lt;br /&gt;
&lt;br /&gt;
These projects allow you to experiment with biological-data acquisition tools, robots and videogame design. &lt;br /&gt;
&lt;br /&gt;
Each project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=CarAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
==== Computer Vision and Image Analysis ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Video surveillance system for indoor Environment&lt;br /&gt;
|tutor=Matteo Matteucci (matteucci-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this project is to develop a video surveillance system based on background subtraction algorithm. The idea is to use a single static camera to track moving objects in a known environment. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library&lt;br /&gt;
* Linux OS&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by extending the algorithm to a camera network.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=10-15&lt;br /&gt;
|image=Danch4.png &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== E-Science ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== Machine Learning ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== Ontologies and Semantic Web ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== Philosophy of Artificial Intelligence ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Robotics ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Simulation of 6-DOF Robot Manipulator&lt;br /&gt;
|tutor=Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this project is to develop a simulator for a 6-DOF robot manipulator, using the [http://www.ode.org/ ODE] (Open Dynamics Engine) library for simulating the rigid body dynamics. The project involves the following phases:&lt;br /&gt;
* Building the physical model of the manipulator&lt;br /&gt;
* Implementing the forward and inverse kinematic routines &lt;br /&gt;
* Implementing the trajectory planning routines&lt;br /&gt;
* Implementing the control modules&lt;br /&gt;
* Implementing an interface to control the robot movements&lt;br /&gt;
&lt;br /&gt;
This project allows you to put into practice what has been explained during the first part of the Robotics course.&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis, by using the simulated manipulator to perform some learning experiments.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=10-15&lt;br /&gt;
|image=puma6dof1.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robot games&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive game with robots using commercial devices such as the Wii Mote (see the [http://airwiki.elet.polimi.it/mediawiki/index.php/Robogames Robogames page])  &lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Design and implementation of the game on one of the available robots&lt;br /&gt;
* Design of the game and a new suitable robot&lt;br /&gt;
* Implementation/setting of a suitable robot&lt;br /&gt;
* Evaluation of the game with users (in collaboration with [http://www.elet.polimi.it/people/garzotto Franca Garzotto])&lt;br /&gt;
&lt;br /&gt;
These projects allow you to experiment with real mobile robots and real interaction devices.&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by producing a new game and robot.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-12.5&lt;br /&gt;
|image=Robowii_robot.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Soft Computing ====--&amp;gt;&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Master_Level_Course_Projects&amp;diff=4310</id>
		<title>Master Level Course Projects</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Master_Level_Course_Projects&amp;diff=4310"/>
				<updated>2008-10-06T11:55:20Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you can find a list of project proposals for the courses of &amp;quot;Laboratorio di Intelligenza Artificiale e Robotica&amp;quot; (5 CFU for each student) and &amp;quot;Soft Computing&amp;quot; (1 CFU for each student)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Agents, Multiagent Systems, Agencies ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== BioSignal Analysis ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== Computer Vision and Image Analysis ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Environment Monitoring&lt;br /&gt;
|tutor=Matteo Matteucci (matteucci-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this project is to develop a video surveillance system to track vehicles or people in 3D. &lt;br /&gt;
The idea is to use one or more calibrated cameras to estimate the position and the trajectories of the moving objects in the scene. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library&lt;br /&gt;
* Linux OS&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
* Probabilistic robotics/IMAD&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by extending the algorithm to a generic outdoor environment.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=10-15&lt;br /&gt;
|image=Danch4.png &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== E-Science ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Affective Computing ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective VideoGames&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive video game (car game, shoot 'em up, strategy game, ...) able to adapt its behaviour in order to maximize your enjoyment. The game will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system will be able to adjust some parameters (e.g. the difficulty of car game circuits, the opponents' strength, ...) in order to keep your engagement constant: &amp;quot;In your flow zone!&amp;quot;. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the game (it is possible to start from an available open source game)&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition by using biological sensors during the playing experience.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed loop control: the game reacts to the user's emotional state by changing its behaviour.  &lt;br /&gt;
&lt;br /&gt;
These projects allow you to experiment with biological-data acquisition tools and videogame design. &lt;br /&gt;
&lt;br /&gt;
Each project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective recognition in multimedia contexts&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive multimedia application (advertisement, e-learning, recommendation system) able to capture your emotional state (interest, excitement, anger, joy) while watching images, listening to sounds, etc. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to give feedback on the quality of multimedia content (e.g. the goodness of the advertisement, the enjoyment of the movie, ...). &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the multimedia application.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition by using biological sensors during the multimedia experience.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed loop control: the multimedia application will provide contents according to your enjoyment.  &lt;br /&gt;
&lt;br /&gt;
These projects allow you to experiment with biological-data acquisition tools and multimedia application design. &lt;br /&gt;
&lt;br /&gt;
The project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=MultimediaAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective robotics&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop a rehabilitation robotic game able to capture your emotional state (interest, excitement, anger, joy, stress) while interacting with the robot. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to adapt the therapy (executed by the game) according to the patient's needs. We believe the quality of the therapy is related to the subject's emotional state. The long-term goal is to keep the user in a specific emotional state in order to maximize the efficacy of the therapy. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the robotic game on the available robot.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition by using biological sensors during the interaction with the robot.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed loop control: the therapy will be adapted to the patient's needs.  &lt;br /&gt;
&lt;br /&gt;
These projects allow you to experiment with biological-data acquisition tools, robots and videogame design. &lt;br /&gt;
&lt;br /&gt;
The project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=SimoAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Driving companions&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an application that is able to capture your emotional state (stress, attention level, ...) while driving standard cars. The application will measure the driver's stress level by analyzing his biological signals, which mirror his physiological state, and could be used to give feedback to the driver in dangerous situations.&lt;br /&gt;
Project phases: &lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition by using biological sensors while driving in different conditions (city, highway, country, ...)&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed loop control: the car will give audio/visual feedback to the driver, letting him know his physiological state.&lt;br /&gt;
&lt;br /&gt;
These projects allow you to experiment with biological-data acquisition tools, robots and videogame design. &lt;br /&gt;
&lt;br /&gt;
The project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=CarAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
==== Machine Learning ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reinforcement Learning Competition&lt;br /&gt;
|tutor=Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=This project has the goal of participating in (and possibly winning ;)) the 2009 Reinforcement Learning Competition. To get an idea of what participating in such a competition means, you can have a look at the website of the [http://rl-competition.org/content/view/51/79/ 2008 RL competition].&lt;br /&gt;
The problems that will be proposed are not yet known. As soon as the domains are published, the work will start by analyzing their main characteristics; then we will identify which RL algorithms are best suited to solving such problems. After an implementation phase, the project will require a long experimental period to tune the parameters of the learning algorithms in order to improve performance as much as possible.&lt;br /&gt;
|start=January, 2009&lt;br /&gt;
|number=2-4&lt;br /&gt;
|cfu=10-20&lt;br /&gt;
|image=keepaway.gif}}&lt;br /&gt;
&amp;lt;!--==== Ontologies and Semantic Web ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== Philosophy of Artificial Intelligence ====--&amp;gt;&lt;br /&gt;
==== Robotics ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robot games&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive game with robots using commercial devices such as the Wii Mote (see the [http://airwiki.elet.polimi.it/mediawiki/index.php/Robogames Robogames page])  &lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Design and implementation of the game on one of the available robots&lt;br /&gt;
* Design of the game and a new suitable robot&lt;br /&gt;
* Implementation/setting of a suitable robot&lt;br /&gt;
* Evaluation of the game with users (in collaboration with [http://www.elet.polimi.it/people/garzotto Franca Garzotto])&lt;br /&gt;
&lt;br /&gt;
These projects allow you to experiment with real mobile robots and real interaction devices.&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by producing a new game and robot.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-12.5&lt;br /&gt;
|image=Robowii_robot.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Soft Computing ====--&amp;gt;&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Master_Level_Course_Projects&amp;diff=4309</id>
		<title>Master Level Course Projects</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Master_Level_Course_Projects&amp;diff=4309"/>
				<updated>2008-10-06T11:52:16Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you can find a list of project proposals for the courses of &amp;quot;Laboratorio di Intelligenza Artificiale e Robotica&amp;quot; (5 CFU for each student) and &amp;quot;Soft Computing&amp;quot; (1 CFU for each student)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Agents, Multiagent Systems, Agencies ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== BioSignal Analysis ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== Computer Vision and Image Analysis ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Environment Monitoring&lt;br /&gt;
|tutor=Matteo Matteucci (matteucci-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this project is to develop a video surveillance system to track vehicles or people in 3D. &lt;br /&gt;
The idea is to use one or more calibrated cameras to estimate the position and the trajectories of the moving objects in the scene. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library&lt;br /&gt;
* Linux OS&lt;br /&gt;
* Geometry/Image processing&lt;br /&gt;
* Probabilistic robotics/IMAD&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by extending the algorithm to a generic outdoor environment.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=10-15&lt;br /&gt;
|image=Danch4.png &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== E-Science ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==== Affective Computing ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective VideoGames&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive video game (car game, shoot 'em up, strategy game, ...) able to adapt its behaviour in order to maximize your enjoyment. The game will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system will be able to adjust some parameters (e.g. the difficulty of car game circuits, the opponents' strength, ...) in order to keep your engagement constant: &amp;quot;In your flow zone!&amp;quot;. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the game (it is possible to start from an available open source game)&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition by using biological sensors during the playing experience.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed loop control: the game reacts to the user's emotional state by changing its behaviour.  &lt;br /&gt;
&lt;br /&gt;
These projects allow you to experiment with biological-data acquisition tools and videogame design. &lt;br /&gt;
&lt;br /&gt;
Each project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective recognition in multimedia contexts&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive multimedia application (advertisement, e-learning, recommendation system) able to capture your emotional state (interest, excitement, anger, joy) while watching images, listening to sounds, etc. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to give feedback on the quality of multimedia content (e.g. the goodness of the advertisement, the enjoyment of the movie, ...). &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the multimedia application.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition by using biological sensors during the multimedia experience.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed loop control: the multimedia application will provide contents according to your enjoyment.  &lt;br /&gt;
&lt;br /&gt;
These projects allow you to experiment with biological-data acquisition tools and multimedia application design. &lt;br /&gt;
&lt;br /&gt;
The project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=MultimediaAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective robotics&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop a rehabilitation robotic game able to capture your emotional state (interest, excitement, anger, joy, stress) while interacting with the robot. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to adapt the therapy (executed by the game) according to the patient's needs. We believe the quality of the therapy is related to the subject's emotional state. The long-term goal is to keep the user in a specific emotional state in order to maximize the efficacy of the therapy. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the robotic game on the available robot.&lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition by using biological sensors during the interaction with the robot.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed loop control: the therapy will be adapted to the patient's needs.  &lt;br /&gt;
&lt;br /&gt;
These projects allow you to experiment with biological-data acquisition tools, robots and videogame design. &lt;br /&gt;
&lt;br /&gt;
The project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=SimoAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Driving companions&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an application that is able to capture your emotional state (stress, attention level, ...) while driving standard cars. The application will measure the driver's stress level by analyzing his biological signals, which mirror his physiological state, and could be used to give feedback to the driver in dangerous situations.&lt;br /&gt;
Project phases: &lt;br /&gt;
* Design of the experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition by using biological sensors while driving in different conditions (city, highway, country, ...)&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed loop control: the car will give audio/visual feedback to the driver, letting him know his physiological state.&lt;br /&gt;
&lt;br /&gt;
These projects allow you to experiment with biological-data acquisition tools, robots and videogame design. &lt;br /&gt;
&lt;br /&gt;
The project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=CarAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
==== Machine Learning ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reinforcement Learning Competition&lt;br /&gt;
|tutor=Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=This project has the goal of participating in (and possibly winning ;)) the 2009 Reinforcement Learning Competition. To get an idea of what participating in such a competition means, you can have a look at the website of the [http://rl-competition.org/content/view/51/79/ 2008 RL competition].&lt;br /&gt;
The problems that will be proposed are not yet known. As soon as the domains are published, the work will start by analyzing their main characteristics; then we will identify which RL algorithms are best suited to solving such problems. After an implementation phase, the project will require a long experimental period to tune the parameters of the learning algorithms in order to improve performance as much as possible.&lt;br /&gt;
|start=January, 2009&lt;br /&gt;
|number=2-4&lt;br /&gt;
|cfu=10-20&lt;br /&gt;
|image=keepaway.gif}}&lt;br /&gt;
&amp;lt;!--==== Ontologies and Semantic Web ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== Philosophy of Artificial Intelligence ====--&amp;gt;&lt;br /&gt;
==== Robotics ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robot games&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive game with robots using commercial devices such as the Wii Mote (see the [http://airwiki.elet.polimi.it/mediawiki/index.php/Robogames Robogames page])  &lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Design and implementation of the game on one of the available robots&lt;br /&gt;
* Design of the game and a new suitable robot&lt;br /&gt;
* Implementation/setting of a suitable robot&lt;br /&gt;
* Evaluation of the game with users (in collaboration with [http://www.elet.polimi.it/people/garzotto Franca Garzotto])&lt;br /&gt;
&lt;br /&gt;
These projects allow you to experiment with real mobile robots and real interaction devices.&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by producing a new game and robot.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-12.5&lt;br /&gt;
|image=Robowii_robot.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Soft Computing ====--&amp;gt;&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=First_Level_Course_Projects&amp;diff=4308</id>
		<title>First Level Course Projects</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=First_Level_Course_Projects&amp;diff=4308"/>
				<updated>2008-10-06T11:44:33Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you can find a list of project proposals for the courses of &amp;quot;Progetto di Ingegneria Informatica&amp;quot; and &amp;quot;Progetto di Robotica&amp;quot; (5 CFU for each student)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Agents, Multiagent Systems, Agencies ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== BioSignal Analysis ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Affective Computing ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective VideoGames&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive video game (car game, shoot 'em up, strategy game, ...) able to adapt its behaviour in order to maximize your enjoyment. The game will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system will be able to adjust some parameters (e.g. the difficulty of car game circuits, the opponents' strength, ...) in order to keep your engagement constant: &amp;quot;In your flow zone!&amp;quot;. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the game (it is possible to start from an available open source game)&lt;br /&gt;
* Design of experimental protocol used to stimulate particular emotions&lt;br /&gt;
* Data acquisition by using biological sensors during the playing experience&lt;br /&gt;
* Off-line classification of data with available tools&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition &lt;br /&gt;
* Closed loop control: the game reacts to the user's emotional state by changing its behaviour.  &lt;br /&gt;
&lt;br /&gt;
These projects allow you to experiment with biological-data acquisition tools and videogame design. &lt;br /&gt;
&lt;br /&gt;
Each project consists in the realization of one or more phases depending on the difficulty/cfu to be achieved and on the competences of&lt;br /&gt;
the candidate(s)&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=AffectiveGaming.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective recognition in multimedia contexts&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive multimedia application (advertisement, e-learning, recommendation system) able to capture your emotional state (interest, excitement, anger, joy) while watching images, listening to sounds, etc. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to give feedback on the quality of multimedia content (e.g. the goodness of the advertisement, the enjoyment of the movie, ...). &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the multimedia application.&lt;br /&gt;
* Design of experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition by using biological sensors during the multimedia experience.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of an on-line classifier system for emotion recognition. &lt;br /&gt;
* Closed loop control: the multimedia application will provide contents according to your enjoyment.  &lt;br /&gt;
&lt;br /&gt;
These projects allow you to experiment with biological-data acquisition tools and multimedia application design. &lt;br /&gt;
&lt;br /&gt;
Each project consists in the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of&lt;br /&gt;
the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=MultimediaAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Affective robotics&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop a rehabilitation robotic game able to capture your emotional state (interest, excitement, anger, joy, stress) while interacting with the robot. The application will measure your excitement by analyzing your biological signals, which mirror your emotional state. The system could be used to adapt the therapy (executed by the game) according to the patient's needs. We believe the quality of the therapy is related to the subject's emotional state. The long-term goal is to keep the user in a specific emotional state in order to maximize the efficacy of the therapy. &lt;br /&gt;
Project phases: &lt;br /&gt;
* Design and implementation of the robotic game on the available robot.&lt;br /&gt;
* Design of experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition by using biological sensors during the interaction with the robot.&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of on-line classifier system for emotion recognition &lt;br /&gt;
* Closed loop control: the therapy will be adapted to the patient's needs.  &lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with biological-data acquisition tools, robots, and videogame design. &lt;br /&gt;
&lt;br /&gt;
Each project consists of the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=SimoAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Driving companions&lt;br /&gt;
|tutor= Cristiano Alessandro (alessandro-AT-elet-DOT-polimi-DOT-it),  Simone Tognetti (togetti-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an application able to capture your emotional state (stress, attention level, etc.) while driving standard cars. The application will measure the driver's stress level by analyzing his biological signals, which mirror his physiological state, and could be used to give feedback to the driver in dangerous situations.&lt;br /&gt;
Project phases: &lt;br /&gt;
* Design of experimental protocol used to stimulate particular emotions. &lt;br /&gt;
* Data acquisition by using biological sensors while driving in different conditions (city, highway, country, etc.)&lt;br /&gt;
* Off-line classification of data with available tools.&lt;br /&gt;
* Design and development of on-line classifier system for emotion recognition &lt;br /&gt;
* Closed loop control: the car will give audio/visual feedback to the driver, letting him know his physiological state&lt;br /&gt;
&lt;br /&gt;
These projects allow students to experiment with biological-data acquisition tools, robots, and videogame design. &lt;br /&gt;
&lt;br /&gt;
Each project consists of the realization of one or more phases, depending on the difficulty/CFU to be achieved and on the competences of the candidate(s).&lt;br /&gt;
&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 3 &lt;br /&gt;
|cfu=2.5 to 20&lt;br /&gt;
|image=CarAffective.jpg}}&lt;br /&gt;
&lt;br /&gt;
==== Computer Vision and Image Analysis ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Videosurveillance system based on Background Subtraction&lt;br /&gt;
|tutor=Matteo Matteucci (matteucci-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this project is to develop a videosurveillance system based on a background subtraction algorithm. The idea is to use a single static camera to track moving objects in a known environment. &lt;br /&gt;
The skills required for this project are:&lt;br /&gt;
* C/C++ and OpenCV library&lt;br /&gt;
* Linux OS&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by extending the algorithm to camera networks.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=10-15&lt;br /&gt;
|image=Danch4.png &lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== E-Science ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== Machine Learning ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== Ontologies and Semantic Web ====--&amp;gt;&lt;br /&gt;
&amp;lt;!--==== Philosophy of Artificial Intelligence ====--&amp;gt;&lt;br /&gt;
==== Robotics ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Simulation of 6-DOF Robot Manipulator&lt;br /&gt;
|tutor=Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this project is to develop a simulator for a 6-DOF robot manipulator, using the [http://www.ode.org/ ODE] (Open Dynamics Engine) library for simulating the rigid body dynamics. The project involves the following phases:&lt;br /&gt;
* Building the physical model of the manipulator&lt;br /&gt;
* Implementing the forward and inverse kinematic routines &lt;br /&gt;
* Implementing the trajectory planning routines&lt;br /&gt;
* Implementing the control modules&lt;br /&gt;
* Implementing an interface to control the robot movements&lt;br /&gt;
&lt;br /&gt;
This project allows students to put into practice what has been explained in the first part of the Robotics course.&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by using the simulated manipulator to perform some learning experiments.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=2-3&lt;br /&gt;
|cfu=10-15&lt;br /&gt;
|image=puma6dof1.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robot games&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive game with robots using commercial devices such as the Wii Remote (see the [http://airwiki.elet.polimi.it/mediawiki/index.php/Robogames Robogames page])  &lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Design and implementation of the game on one of the available robots&lt;br /&gt;
* Design of the game and a new suitable robot&lt;br /&gt;
* Implementation/setting of a suitable robot&lt;br /&gt;
* Evaluation of the game with users (in collaboration with [http://www.elet.polimi.it/people/garzotto Franca Garzotto])&lt;br /&gt;
&lt;br /&gt;
These projects allow to experiment with real mobile robots and real interaction devices.&lt;br /&gt;
&lt;br /&gt;
The project can be turned into a thesis by producing a new game and robot.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=5-12.5&lt;br /&gt;
|image=Robowii_robot.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Soft Computing ====--&amp;gt;&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=File:Danch4.png&amp;diff=4307</id>
		<title>File:Danch4.png</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=File:Danch4.png&amp;diff=4307"/>
				<updated>2008-10-06T11:43:06Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Particle_filter_for_object_tracking&amp;diff=3964</id>
		<title>Particle filter for object tracking</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Particle_filter_for_object_tracking&amp;diff=3964"/>
				<updated>2008-09-17T10:35:23Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: /* Dates */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== '''Part 1: project profile''' ==&lt;br /&gt;
&lt;br /&gt;
=== Project name ===&lt;br /&gt;
&lt;br /&gt;
Particle filter for object tracking.&lt;br /&gt;
&lt;br /&gt;
=== Project short description ===&lt;br /&gt;
&lt;br /&gt;
The aim of this project is to construct a robust particle filter for object tracking, able to follow a moving object given its starting position (in a fixed scene).&lt;br /&gt;
To achieve this goal, we compare different similarity measures, such as color histograms and joint spatial-color mixtures of Gaussians, in different color spaces (RGB, HSV).&lt;br /&gt;
&lt;br /&gt;
=== Dates ===&lt;br /&gt;
&lt;br /&gt;
Start date: 01/01/2008&lt;br /&gt;
&lt;br /&gt;
End date: 15/07/2008&lt;br /&gt;
&lt;br /&gt;
=== Internet site(s) ===&lt;br /&gt;
&lt;br /&gt;
=== People involved ===&lt;br /&gt;
&lt;br /&gt;
==== Project head(s) ====&lt;br /&gt;
&lt;br /&gt;
Matteo Matteucci - matteucc (at) elet (dot) polimi (dot) it&lt;br /&gt;
&lt;br /&gt;
==== Other Politecnico di Milano people ====&lt;br /&gt;
&lt;br /&gt;
Davide Migliore - migliore (at) elet (dot) polimi (dot) it&lt;br /&gt;
&lt;br /&gt;
==== Students ====&lt;br /&gt;
&lt;br /&gt;
Manuel Fossati - manuel (dot) fossati (at) mail (dot) polimi (dot) it&lt;br /&gt;
&lt;br /&gt;
==== External personnel: ====&lt;br /&gt;
&lt;br /&gt;
=== Laboratory work and risk analysis ===&lt;br /&gt;
&lt;br /&gt;
Since laboratory work for this project is limited to software-related activities, there are no potential risks.&lt;br /&gt;
&lt;br /&gt;
== '''Part 2: project description''' ==&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Projects&amp;diff=3654</id>
		<title>Projects</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Projects&amp;diff=3654"/>
				<updated>2008-06-22T14:40:38Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''This page is a repository of links to the pages describing the '''projects''' we are currently working on at AIRLab. &lt;br /&gt;
See the list of our finished projects on the [[Finished Projects]] page.''&lt;br /&gt;
&lt;br /&gt;
== Ongoing projects ==&lt;br /&gt;
''by research area (areas are defined in the [[Main Page]]); for each project a name and a link to its AIRWiki page is given''&lt;br /&gt;
&lt;br /&gt;
==== [[Agents, Multiagent Systems, Agencies]] ====&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* [[Multiagent cooperation|Multiagent cooperating system]]&lt;br /&gt;
&lt;br /&gt;
* [[Planning in Ambient Intelligence scenarios| Planning in Ambient Intelligence scenarios]]&lt;br /&gt;
&lt;br /&gt;
==== [[BioSignal Analysis]] ====&lt;br /&gt;
----&lt;br /&gt;
====== [[Affective Computing]] ======&lt;br /&gt;
&lt;br /&gt;
* [[Relatioship between Cognition and Emotion in Rehabilitation Robotics]]&lt;br /&gt;
* [[Driving companions]]&lt;br /&gt;
* [[Emotion from Interaction]]&lt;br /&gt;
* [[Affective Devices]]&lt;br /&gt;
&lt;br /&gt;
====== [[Brain-Computer Interface]] ======&lt;br /&gt;
&lt;br /&gt;
* [[Online P300 and ErrP recognition with BCI2000]]&lt;br /&gt;
* [[BCI based on Motor Imagery]]&lt;br /&gt;
* [[Graphical user interface for an autonomous wheelchair]]&lt;br /&gt;
* [[Mu and beta rhythm-based BCI]]&lt;br /&gt;
&lt;br /&gt;
====== [[Automatic Detection Of Sleep Stages]] ======&lt;br /&gt;
&lt;br /&gt;
* [[Sleep Staging with HMM]]&lt;br /&gt;
&lt;br /&gt;
====== [[Analysis of the Olfactory Signal]] ======&lt;br /&gt;
&lt;br /&gt;
* [[Lung Cancer Detection by an Electronic Nose]]&lt;br /&gt;
* [[HE-KNOWS - An electronic nose]]&lt;br /&gt;
&lt;br /&gt;
==== [[Computer Vision and Image Analysis]] ====&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* [[Automated extraction of laser streaks and range profiles]]&lt;br /&gt;
&lt;br /&gt;
* [[Data collection for mutual calibration|Data collection for laser-rangefinder and camera calibration]]&lt;br /&gt;
&lt;br /&gt;
* [[Image retargeting by k-seam removal]]&lt;br /&gt;
&lt;br /&gt;
* [[Particle filter for object tracking]]&lt;br /&gt;
&lt;br /&gt;
* [[Template based paper like reconstruction when the edges are straight]]&lt;br /&gt;
&lt;br /&gt;
* [[Wii Remote headtracking and active projector]]&lt;br /&gt;
&lt;br /&gt;
* [[Vision module for the Milan Robocup Team]]&lt;br /&gt;
&lt;br /&gt;
* [[Long Exposure Images for Resource-constrained video surveillance]]&lt;br /&gt;
&lt;br /&gt;
* [[NonPhotorealistic rendering of speed lines]]&lt;br /&gt;
&lt;br /&gt;
* [[Restoration of blurred objects using cues from the alpha matte]]&lt;br /&gt;
&lt;br /&gt;
* [[Analyzing Traffic Speed From a Single Night Image - Light Streaks Detection]]&lt;br /&gt;
&lt;br /&gt;
* [[Plate detection algorithm]]&lt;br /&gt;
&lt;br /&gt;
==== [[Machine Learning]] ====&lt;br /&gt;
----&lt;br /&gt;
* [[Adaptive Reinforcement Learning Multiagent Coordination in Real-Time Computer Games|Adaptive Reinforcement Learning Multiagent Coordination in Real-Time Computer Games]]&lt;br /&gt;
&lt;br /&gt;
* [[B-Smart Behaviour Sequence Modeler and Recognition tool|B-Smart Behaviour Sequence Modeler and Recognition tool]]&lt;br /&gt;
&lt;br /&gt;
==== [[Ontologies and Semantic Web]] ====&lt;br /&gt;
----&lt;br /&gt;
* [[JOFS|JOFS, Java Owl File Storage]]&lt;br /&gt;
* [[FolksOnt|FolksOnt]]&lt;br /&gt;
* [[Extending a wiki with semantic templates]]&lt;br /&gt;
* [[GeoOntology|Geographic ontology for a semantic wiki]]&lt;br /&gt;
&lt;br /&gt;
==== [[Philosophy of Artificial Intelligence]] ====&lt;br /&gt;
----&lt;br /&gt;
==== [[Robotics]] ====&lt;br /&gt;
----&lt;br /&gt;
* [[LURCH - The autonomous wheelchair]]&lt;br /&gt;
&lt;br /&gt;
* [[Rawseeds|RAWSEEDS]]&lt;br /&gt;
&lt;br /&gt;
* [[Balancing robots: Tilty, TiltOne]]&lt;br /&gt;
&lt;br /&gt;
* [[ROBOWII ]]&lt;br /&gt;
&lt;br /&gt;
* [[PoliManus]]&lt;br /&gt;
&lt;br /&gt;
* [[ZOIDBERG - An autonomous bio-inspired RoboFish]]&lt;br /&gt;
&lt;br /&gt;
* [[Styx The 6 Whegs Robot]]&lt;br /&gt;
&lt;br /&gt;
* [[PolyGlove: a body-based haptic interface]]&lt;br /&gt;
&lt;br /&gt;
* [[ULISSE]]&lt;br /&gt;
&lt;br /&gt;
* [[PEKeB: a PiezoElectric KeyBoard]]&lt;br /&gt;
&lt;br /&gt;
* [[ Brake Padal Implementing on a Golf Cart ]]&lt;br /&gt;
&lt;br /&gt;
==== [[Soft Computing]] ====	&lt;br /&gt;
----			&lt;br /&gt;
	 		&lt;br /&gt;
== Note for students ==	&lt;br /&gt;
&lt;br /&gt;
If you are a student and there isn't a '''page describing your project''', this is because YOU have the task of creating it and populating it with (meaningful) content. If you are a student and there IS a page describing your project, you have the task to complete that page with (useful and comprehensive) information about your own contribution to the project. Be aware that the quality of your work (or lack of it) on the AIRWiki will be evaluated by the Teachers and will influence your grades.&lt;br /&gt;
&lt;br /&gt;
Instructions to add a new project or to add content to an existing project page are available at [[Projects - HOWTO]].&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Projects&amp;diff=3653</id>
		<title>Projects</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Projects&amp;diff=3653"/>
				<updated>2008-06-22T14:40:20Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''This page is a repository of links to the pages describing the '''projects''' we are currently working on at AIRLab. &lt;br /&gt;
See the list of our finished projects on the [[Finished Projects]] page.''&lt;br /&gt;
&lt;br /&gt;
== Ongoing projects ==&lt;br /&gt;
''by research area (areas are defined in the [[Main Page]]); for each project a name and a link to its AIRWiki page is given''&lt;br /&gt;
&lt;br /&gt;
==== [[Agents, Multiagent Systems, Agencies]] ====&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* [[Multiagent cooperation|Multiagent cooperating system]]&lt;br /&gt;
&lt;br /&gt;
* [[Planning in Ambient Intelligence scenarios| Planning in Ambient Intelligence scenarios]]&lt;br /&gt;
&lt;br /&gt;
==== [[BioSignal Analysis]] ====&lt;br /&gt;
----&lt;br /&gt;
====== [[Affective Computing]] ======&lt;br /&gt;
&lt;br /&gt;
* [[Relatioship between Cognition and Emotion in Rehabilitation Robotics]]&lt;br /&gt;
* [[Driving companions]]&lt;br /&gt;
* [[Emotion from Interaction]]&lt;br /&gt;
* [[Affective Devices]]&lt;br /&gt;
&lt;br /&gt;
====== [[Brain-Computer Interface]] ======&lt;br /&gt;
&lt;br /&gt;
* [[Online P300 and ErrP recognition with BCI2000]]&lt;br /&gt;
* [[BCI based on Motor Imagery]]&lt;br /&gt;
* [[Graphical user interface for an autonomous wheelchair]]&lt;br /&gt;
* [[Mu and beta rhythm-based BCI]]&lt;br /&gt;
&lt;br /&gt;
====== [[Automatic Detection Of Sleep Stages]] ======&lt;br /&gt;
&lt;br /&gt;
* [[Sleep Staging with HMM]]&lt;br /&gt;
&lt;br /&gt;
====== [[Analysis of the Olfactory Signal]] ======&lt;br /&gt;
&lt;br /&gt;
* [[Lung Cancer Detection by an Electronic Nose]]&lt;br /&gt;
* [[HE-KNOWS - An electronic nose]]&lt;br /&gt;
&lt;br /&gt;
==== [[Computer Vision and Image Analysis]] ====&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* [[Automated extraction of laser streaks and range profiles]]&lt;br /&gt;
&lt;br /&gt;
* [[Data collection for mutual calibration|Data collection for laser-rangefinder and camera calibration]]&lt;br /&gt;
&lt;br /&gt;
* [[Image retargeting by k-seam removal]]&lt;br /&gt;
&lt;br /&gt;
* [[Particle filter for object tracking]]&lt;br /&gt;
&lt;br /&gt;
* [[Template based paper like reconstruction when the edges are straight]]&lt;br /&gt;
&lt;br /&gt;
* [[Wii Remote headtracking and active projector]]&lt;br /&gt;
&lt;br /&gt;
* [[Vision module for the Milan Robocup Team]]&lt;br /&gt;
&lt;br /&gt;
* [[Long Exposure Images for Resource-constrained video surveillance]]&lt;br /&gt;
&lt;br /&gt;
* [[NonPhotorealistic rendering of speed lines]].&lt;br /&gt;
&lt;br /&gt;
* [[Restoration of blurred objects using cues from the alpha matte]]&lt;br /&gt;
&lt;br /&gt;
* [[Analyzing Traffic Speed From a Single Night Image - Light Streaks Detection]]&lt;br /&gt;
&lt;br /&gt;
* [[Plate detection algorithm]]&lt;br /&gt;
&lt;br /&gt;
==== [[Machine Learning]] ====&lt;br /&gt;
----&lt;br /&gt;
* [[Adaptive Reinforcement Learning Multiagent Coordination in Real-Time Computer Games|Adaptive Reinforcement Learning Multiagent Coordination in Real-Time Computer Games]]&lt;br /&gt;
&lt;br /&gt;
* [[B-Smart Behaviour Sequence Modeler and Recognition tool|B-Smart Behaviour Sequence Modeler and Recognition tool]]&lt;br /&gt;
&lt;br /&gt;
==== [[Ontologies and Semantic Web]] ====&lt;br /&gt;
----&lt;br /&gt;
* [[JOFS|JOFS, Java Owl File Storage]]&lt;br /&gt;
* [[FolksOnt|FolksOnt]]&lt;br /&gt;
* [[Extending a wiki with semantic templates]]&lt;br /&gt;
* [[GeoOntology|Geographic ontology for a semantic wiki]]&lt;br /&gt;
&lt;br /&gt;
==== [[Philosophy of Artificial Intelligence]] ====&lt;br /&gt;
----&lt;br /&gt;
==== [[Robotics]] ====&lt;br /&gt;
----&lt;br /&gt;
* [[LURCH - The autonomous wheelchair]]&lt;br /&gt;
&lt;br /&gt;
* [[Rawseeds|RAWSEEDS]]&lt;br /&gt;
&lt;br /&gt;
* [[Balancing robots: Tilty, TiltOne]]&lt;br /&gt;
&lt;br /&gt;
* [[ROBOWII ]]&lt;br /&gt;
&lt;br /&gt;
* [[PoliManus]]&lt;br /&gt;
&lt;br /&gt;
* [[ZOIDBERG - An autonomous bio-inspired RoboFish]]&lt;br /&gt;
&lt;br /&gt;
* [[Styx The 6 Whegs Robot]]&lt;br /&gt;
&lt;br /&gt;
* [[PolyGlove: a body-based haptic interface]]&lt;br /&gt;
&lt;br /&gt;
* [[ULISSE]]&lt;br /&gt;
&lt;br /&gt;
* [[PEKeB: a PiezoElectric KeyBoard]]&lt;br /&gt;
&lt;br /&gt;
* [[ Brake Padal Implementing on a Golf Cart ]]&lt;br /&gt;
&lt;br /&gt;
==== [[Soft Computing]] ====	&lt;br /&gt;
----			&lt;br /&gt;
	 		&lt;br /&gt;
== Note for students ==	&lt;br /&gt;
----	 	&lt;br /&gt;
If you are a student and there isn't a '''page describing your project''', this is because YOU have the task of creating it and populating it with (meaningful) content. If you are a student and there IS a page describing your project, you have the task to complete that page with (useful and comprehensive) information about your own contribution to the project. Be aware that the quality of your work (or lack of it) on the AIRWiki will be evaluated by the Teachers and will influence your grades	&lt;br /&gt;
Instructions to add a new project or to add content to an existing project page are available at [[Projects - HOWTO]].&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Projects&amp;diff=3652</id>
		<title>Projects</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Projects&amp;diff=3652"/>
				<updated>2008-06-22T14:39:07Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''This page is a repository of links to the pages describing the '''projects''' we are currently working on at AIRLab. &lt;br /&gt;
See the list of our finished projects on the [[Finished Projects]] page.''&lt;br /&gt;
&lt;br /&gt;
== Ongoing projects ==&lt;br /&gt;
''by research area (areas are defined in the [[Main Page]]); for each project a name and a link to its AIRWiki page is given''&lt;br /&gt;
&lt;br /&gt;
==== [[Agents, Multiagent Systems, Agencies]] ====&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* [[Multiagent cooperation|Multiagent cooperating system]]&lt;br /&gt;
&lt;br /&gt;
* [[Planning in Ambient Intelligence scenarios| Planning in Ambient Intelligence scenarios]]&lt;br /&gt;
&lt;br /&gt;
==== [[BioSignal Analysis]] ====&lt;br /&gt;
----&lt;br /&gt;
====== [[Affective Computing]] ======&lt;br /&gt;
&lt;br /&gt;
* [[Relatioship between Cognition and Emotion in Rehabilitation Robotics]]&lt;br /&gt;
* [[Driving companions]]&lt;br /&gt;
* [[Emotion from Interaction]]&lt;br /&gt;
* [[Affective Devices]]&lt;br /&gt;
&lt;br /&gt;
====== [[Brain-Computer Interface]] ======&lt;br /&gt;
&lt;br /&gt;
* [[Online P300 and ErrP recognition with BCI2000]]&lt;br /&gt;
* [[BCI based on Motor Imagery]]&lt;br /&gt;
* [[Graphical user interface for an autonomous wheelchair]]&lt;br /&gt;
* [[Mu and beta rhythm-based BCI]]&lt;br /&gt;
&lt;br /&gt;
====== [[Automatic Detection Of Sleep Stages]] ======&lt;br /&gt;
&lt;br /&gt;
* [[Sleep Staging with HMM]]&lt;br /&gt;
&lt;br /&gt;
====== [[Analysis of the Olfactory Signal]] ======&lt;br /&gt;
&lt;br /&gt;
* [[Lung Cancer Detection by an Electronic Nose]]&lt;br /&gt;
* [[HE-KNOWS - An electronic nose]]&lt;br /&gt;
&lt;br /&gt;
==== [[Computer Vision and Image Analysis]] ====&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* [[Automated extraction of laser streaks and range profiles]]&lt;br /&gt;
&lt;br /&gt;
* [[Data collection for mutual calibration|Data collection for laser-rangefinder and camera calibration]]&lt;br /&gt;
&lt;br /&gt;
* [[Image retargeting by k-seam removal]]&lt;br /&gt;
&lt;br /&gt;
* [[Particle filter for object tracking]]&lt;br /&gt;
&lt;br /&gt;
* [[Template based paper like reconstruction when the edges are straight]]&lt;br /&gt;
&lt;br /&gt;
* [[Wii Remote headtracking and active projector]]&lt;br /&gt;
&lt;br /&gt;
* [[Vision module for the Milan Robocup Team]]&lt;br /&gt;
&lt;br /&gt;
* [[Long Exposure Images for Resource-constrained video surveillance]]&lt;br /&gt;
&lt;br /&gt;
* [[NonPhotorealistic rendering of speed lines]].&lt;br /&gt;
&lt;br /&gt;
* [[Restoration of blurred objects using cues from the alpha matte]]&lt;br /&gt;
&lt;br /&gt;
* [[Analyzing Traffic Speed From a Single Night Image - Light Streaks Detection]]&lt;br /&gt;
&lt;br /&gt;
* [[Plate detection algorithm]]&lt;br /&gt;
&lt;br /&gt;
==== [[Machine Learning]] ====&lt;br /&gt;
----&lt;br /&gt;
* [[Adaptive Reinforcement Learning Multiagent Coordination in Real-Time Computer Games|Adaptive Reinforcement Learning Multiagent Coordination in Real-Time Computer Games]]&lt;br /&gt;
&lt;br /&gt;
* [[B-Smart Behaviour Sequence Modeler and Recognition tool|B-Smart Behaviour Sequence Modeler and Recognition tool]]&lt;br /&gt;
&lt;br /&gt;
==== [[Ontologies and Semantic Web]] ====&lt;br /&gt;
----&lt;br /&gt;
* [[JOFS|JOFS, Java Owl File Storage]]&lt;br /&gt;
* [[FolksOnt|FolksOnt]]&lt;br /&gt;
* [[Extending a wiki with semantic templates]]&lt;br /&gt;
* [[GeoOntology|Geographic ontology for a semantic wiki]]&lt;br /&gt;
&lt;br /&gt;
==== [[Philosophy of Artificial Intelligence]] ====&lt;br /&gt;
----&lt;br /&gt;
==== [[Robotics]] ====&lt;br /&gt;
----&lt;br /&gt;
* [[LURCH - The autonomous wheelchair]]&lt;br /&gt;
&lt;br /&gt;
* [[Rawseeds|RAWSEEDS]]&lt;br /&gt;
&lt;br /&gt;
* [[Balancing robots: Tilty, TiltOne]]&lt;br /&gt;
&lt;br /&gt;
* [[ROBOWII ]]&lt;br /&gt;
&lt;br /&gt;
* [[PoliManus]]&lt;br /&gt;
&lt;br /&gt;
* [[ZOIDBERG - An autonomous bio-inspired RoboFish]]&lt;br /&gt;
&lt;br /&gt;
* [[Styx The 6 Whegs Robot]]&lt;br /&gt;
&lt;br /&gt;
* [[PolyGlove: a body-based haptic interface]]&lt;br /&gt;
&lt;br /&gt;
* [[ULISSE]]&lt;br /&gt;
&lt;br /&gt;
* [[PEKeB: a PiezoElectric KeyBoard]]&lt;br /&gt;
&lt;br /&gt;
* [[ Brake Padal Implementing on a Golf Cart ]]&lt;br /&gt;
&lt;br /&gt;
==== [[Soft Computing]] ====	&lt;br /&gt;
----			&lt;br /&gt;
	 		&lt;br /&gt;
== Note for students ==		 	&lt;br /&gt;
If you are a student and there isn't a '''page describing your project''', this is because YOU have the task of creating it and populating it with (meaningful) content. If you are a student and there IS a page describing your project, you have the task to complete that page with (useful and comprehensive) information about your own contribution to the project. Be aware that the quality of your work (or lack of it) on the AIRWiki will be evaluated by the Teachers and will influence your grades	&lt;br /&gt;
Instructions to add a new project or to add content to an existing project page are available at [[Projects - HOWTO]].&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Projects&amp;diff=3651</id>
		<title>Projects</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Projects&amp;diff=3651"/>
				<updated>2008-06-22T14:38:38Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''This page is a repository of links to the pages describing the '''projects''' we are currently working on at AIRLab. &lt;br /&gt;
See the list of our finished projects on the [[Finished Projects]] page.''&lt;br /&gt;
&lt;br /&gt;
== Ongoing projects ==&lt;br /&gt;
''by research area (areas are defined in the [[Main Page]]); for each project a name and a link to its AIRWiki page is given''&lt;br /&gt;
&lt;br /&gt;
==== [[Agents, Multiagent Systems, Agencies]] ====&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* [[Multiagent cooperation|Multiagent cooperating system]]&lt;br /&gt;
&lt;br /&gt;
* [[Planning in Ambient Intelligence scenarios| Planning in Ambient Intelligence scenarios]]&lt;br /&gt;
&lt;br /&gt;
==== [[BioSignal Analysis]] ====&lt;br /&gt;
----&lt;br /&gt;
====== [[Affective Computing]] ======&lt;br /&gt;
&lt;br /&gt;
* [[Relatioship between Cognition and Emotion in Rehabilitation Robotics]]&lt;br /&gt;
* [[Driving companions]]&lt;br /&gt;
* [[Emotion from Interaction]]&lt;br /&gt;
* [[Affective Devices]]&lt;br /&gt;
&lt;br /&gt;
====== [[Brain-Computer Interface]] ======&lt;br /&gt;
&lt;br /&gt;
* [[Online P300 and ErrP recognition with BCI2000]]&lt;br /&gt;
* [[BCI based on Motor Imagery]]&lt;br /&gt;
* [[Graphical user interface for an autonomous wheelchair]]&lt;br /&gt;
* [[Mu and beta rhythm-based BCI]]&lt;br /&gt;
&lt;br /&gt;
====== [[Automatic Detection Of Sleep Stages]] ======&lt;br /&gt;
&lt;br /&gt;
* [[Sleep Staging with HMM]]&lt;br /&gt;
&lt;br /&gt;
====== [[Analysis of the Olfactory Signal]] ======&lt;br /&gt;
&lt;br /&gt;
* [[Lung Cancer Detection by an Electronic Nose]]&lt;br /&gt;
* [[HE-KNOWS - An electronic nose]]&lt;br /&gt;
&lt;br /&gt;
==== [[Computer Vision and Image Analysis]] ====&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* [[Automated extraction of laser streaks and range profiles]]&lt;br /&gt;
&lt;br /&gt;
* [[Data collection for mutual calibration|Data collection for laser-rangefinder and camera calibration]]&lt;br /&gt;
&lt;br /&gt;
* [[Image retargeting by k-seam removal]]&lt;br /&gt;
&lt;br /&gt;
* [[Particle filter for object tracking]]&lt;br /&gt;
&lt;br /&gt;
* [[Template based paper like reconstruction when the edges are straight]]&lt;br /&gt;
&lt;br /&gt;
* [[Wii Remote headtracking and active projector]]&lt;br /&gt;
&lt;br /&gt;
* [[Vision module for the Milan Robocup Team]]&lt;br /&gt;
&lt;br /&gt;
* [[Long Exposure Images for Resource-constrained video surveillance]]&lt;br /&gt;
&lt;br /&gt;
* [[NonPhotorealistic rendering of speed lines]].&lt;br /&gt;
&lt;br /&gt;
* [[Restoration of blurred objects using cues from the alpha matte]]&lt;br /&gt;
&lt;br /&gt;
* [[Analyzing Traffic Speed From a Single Night Image - Light Streaks Detection]]&lt;br /&gt;
&lt;br /&gt;
* [[Plate detection algorithm]]&lt;br /&gt;
&lt;br /&gt;
==== [[Machine Learning]] ====&lt;br /&gt;
----&lt;br /&gt;
* [[Adaptive Reinforcement Learning Multiagent Coordination in Real-Time Computer Games|Adaptive Reinforcement Learning Multiagent Coordination in Real-Time Computer Games]]&lt;br /&gt;
&lt;br /&gt;
* [[B-Smart Behaviour Sequence Modeler and Recognition tool|B-Smart Behaviour Sequence Modeler and Recognition tool]]&lt;br /&gt;
&lt;br /&gt;
==== [[Ontologies and Semantic Web]] ====&lt;br /&gt;
----&lt;br /&gt;
* [[JOFS|JOFS, Java Owl File Storage]]&lt;br /&gt;
* [[FolksOnt|FolksOnt]]&lt;br /&gt;
* [[Extending a wiki with semantic templates]]&lt;br /&gt;
* [[GeoOntology|Geographic ontology for a semantic wiki]]&lt;br /&gt;
&lt;br /&gt;
==== [[Philosophy of Artificial Intelligence]] ====&lt;br /&gt;
----&lt;br /&gt;
==== [[Robotics]] ====&lt;br /&gt;
----&lt;br /&gt;
* [[LURCH - The autonomous wheelchair]]&lt;br /&gt;
&lt;br /&gt;
* [[Rawseeds|RAWSEEDS]]&lt;br /&gt;
&lt;br /&gt;
* [[Balancing robots: Tilty, TiltOne]]&lt;br /&gt;
&lt;br /&gt;
* [[ROBOWII ]]&lt;br /&gt;
&lt;br /&gt;
* [[PoliManus]]&lt;br /&gt;
&lt;br /&gt;
* [[ZOIDBERG - An autonomous bio-inspired RoboFish]]&lt;br /&gt;
&lt;br /&gt;
* [[Styx The 6 Whegs Robot]]&lt;br /&gt;
&lt;br /&gt;
* [[PolyGlove: a body-based haptic interface]]&lt;br /&gt;
&lt;br /&gt;
* [[ULISSE]]&lt;br /&gt;
&lt;br /&gt;
* [[PEKeB: a PiezoElectric KeyBoard]]&lt;br /&gt;
&lt;br /&gt;
* [[ Brake Padal Implementing on a Golf Cart ]]&lt;br /&gt;
&lt;br /&gt;
==== [[Soft Computing]] ====	&lt;br /&gt;
	----			&lt;br /&gt;
	 		&lt;br /&gt;
== Note for students ==		 	&lt;br /&gt;
If you are a student and there isn't a '''page describing your project''', this is because YOU have the task of creating it and populating it with (meaningful) content. If you are a student and there IS a page describing your project, you have the task to complete that page with (useful and comprehensive) information about your own contribution to the project. Be aware that the quality of your work (or lack of it) on the AIRWiki will be evaluated by the Teachers and will influence your grades	&lt;br /&gt;
Instructions to add a new project or to add content to an existing project page are available at [[Projects - HOWTO]].&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Brake_Padal_Implementing_on_a_Golf_Cart&amp;diff=3650</id>
		<title>Brake Padal Implementing on a Golf Cart</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Brake_Padal_Implementing_on_a_Golf_Cart&amp;diff=3650"/>
				<updated>2008-06-22T14:36:13Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: /* Training */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;br /&gt;
==== [[ Brake Padal Implementing on a Golf Cart ]] ====&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
== Project name ==&lt;br /&gt;
Implementing a Brake Pedal actuator on a Golf Cart&lt;br /&gt;
&lt;br /&gt;
== Project short description ==&lt;br /&gt;
&lt;br /&gt;
This project is aimed at implementing a system to push the brake pedal of a golf cart. The commands about when and how to brake are received by a PIC, sent by a high-level command unit (notebook).&lt;br /&gt;
First step:  &lt;br /&gt;
Design an electro-mechanical system that supplies the right power (within a specific time), and&lt;br /&gt;
design a system that allows modulating the force that pushes the brake. &lt;br /&gt;
Second step: &lt;br /&gt;
Implement an electro-mechanical prototype.&lt;br /&gt;
Third step: &lt;br /&gt;
Implement the full system.&lt;br /&gt;
&lt;br /&gt;
== Dates ==&lt;br /&gt;
&lt;br /&gt;
Start date: 2008/04/15&lt;br /&gt;
&lt;br /&gt;
End date: 2010/12/31&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==People involved==&lt;br /&gt;
Davide Daloiso - [[User:DavideDaloiso]]&lt;br /&gt;
&lt;br /&gt;
== Note for students ==&lt;br /&gt;
&lt;br /&gt;
If you are a student and there isn't a '''page describing your project''', this is because YOU have the task of creating it and populating it with (meaningful) content. If you are a student and there IS a page describing your project, you have the task to complete that page with (useful and comprehensive) information about your own contribution to the project. Be aware that the quality of your work (or lack of it) on the AIRWiki will be evaluated by the Teachers and will influence your grades.&lt;br /&gt;
&lt;br /&gt;
Instructions to add a new project or to add content to an existing project page are available at [[Projects - HOWTO]].&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Projects&amp;diff=3649</id>
		<title>Projects</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Projects&amp;diff=3649"/>
				<updated>2008-06-22T14:36:01Z</updated>
		
		<summary type="html">&lt;p&gt;DavideMigliore: /* Robotics */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;''This page is a repository of links to the pages describing the '''projects''' we are currently working on at AIRLab. &lt;br /&gt;
See the list of our finished projects on the [[Finished Projects]] page.''&lt;br /&gt;
&lt;br /&gt;
== Ongoing projects ==&lt;br /&gt;
''by research area (areas are defined in the [[Main Page]]); for each project a name and a link to its AIRWiki page is given''&lt;br /&gt;
&lt;br /&gt;
==== [[Agents, Multiagent Systems, Agencies]] ====&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* [[Multiagent cooperation|Multiagent cooperating system]]&lt;br /&gt;
&lt;br /&gt;
* [[Planning in Ambient Intelligence scenarios| Planning in Ambient Intelligence scenarios]]&lt;br /&gt;
&lt;br /&gt;
==== [[BioSignal Analysis]] ====&lt;br /&gt;
----&lt;br /&gt;
====== [[Affective Computing]] ======&lt;br /&gt;
&lt;br /&gt;
* [[Relatioship between Cognition and Emotion in Rehabilitation Robotics]]&lt;br /&gt;
* [[Driving companions]]&lt;br /&gt;
* [[Emotion from Interaction]]&lt;br /&gt;
* [[Affective Devices]]&lt;br /&gt;
&lt;br /&gt;
====== [[Brain-Computer Interface]] ======&lt;br /&gt;
&lt;br /&gt;
* [[Online P300 and ErrP recognition with BCI2000]]&lt;br /&gt;
* [[BCI based on Motor Imagery]]&lt;br /&gt;
* [[Graphical user interface for an autonomous wheelchair]]&lt;br /&gt;
* [[Mu and beta rhythm-based BCI]]&lt;br /&gt;
&lt;br /&gt;
====== [[Automatic Detection Of Sleep Stages]] ======&lt;br /&gt;
&lt;br /&gt;
* [[Sleep Staging with HMM]]&lt;br /&gt;
&lt;br /&gt;
====== [[Analysis of the Olfactory Signal]] ======&lt;br /&gt;
&lt;br /&gt;
* [[Lung Cancer Detection by an Electronic Nose]]&lt;br /&gt;
* [[HE-KNOWS - An electronic nose]]&lt;br /&gt;
&lt;br /&gt;
==== [[Computer Vision and Image Analysis]] ====&lt;br /&gt;
----&lt;br /&gt;
&lt;br /&gt;
* [[Automated extraction of laser streaks and range profiles]]&lt;br /&gt;
&lt;br /&gt;
* [[Data collection for mutual calibration|Data collection for laser-rangefinder and camera calibration]]&lt;br /&gt;
&lt;br /&gt;
* [[Image retargeting by k-seam removal]]&lt;br /&gt;
&lt;br /&gt;
* [[Particle filter for object tracking]]&lt;br /&gt;
&lt;br /&gt;
* [[Template based paper like reconstruction when the edges are straight]]&lt;br /&gt;
&lt;br /&gt;
* [[Wii Remote headtracking and active projector]]&lt;br /&gt;
&lt;br /&gt;
* [[Vision module for the Milan Robocup Team]]&lt;br /&gt;
&lt;br /&gt;
* [[Long Exposure Images for Resource-constrained video surveillance]]&lt;br /&gt;
&lt;br /&gt;
* [[NonPhotorealistic rendering of speed lines]]&lt;br /&gt;
&lt;br /&gt;
* [[Restoration of blurred objects using cues from the alpha matte]]&lt;br /&gt;
&lt;br /&gt;
* [[Analyzing Traffic Speed From a Single Night Image - Light Streaks Detection]]&lt;br /&gt;
&lt;br /&gt;
* [[Plate detection algorithm]]&lt;br /&gt;
&lt;br /&gt;
==== [[Machine Learning]] ====&lt;br /&gt;
----&lt;br /&gt;
* [[Adaptive Reinforcement Learning Multiagent Coordination in Real-Time Computer Games|Adaptive Reinforcement Learning Multiagent Coordination in Real-Time Computer Games]]&lt;br /&gt;
&lt;br /&gt;
* [[B-Smart Behaviour Sequence Modeler and Recognition tool|B-Smart Behaviour Sequence Modeler and Recognition tool]]&lt;br /&gt;
&lt;br /&gt;
==== [[Ontologies and Semantic Web]] ====&lt;br /&gt;
----&lt;br /&gt;
* [[JOFS|JOFS, Java Owl File Storage]]&lt;br /&gt;
* [[FolksOnt|FolksOnt]]&lt;br /&gt;
* [[Extending a wiki with semantic templates]]&lt;br /&gt;
* [[GeoOntology|Geographic ontology for a semantic wiki]]&lt;br /&gt;
&lt;br /&gt;
==== [[Philosophy of Artificial Intelligence]] ====&lt;br /&gt;
----&lt;br /&gt;
==== [[Robotics]] ====&lt;br /&gt;
----&lt;br /&gt;
* [[LURCH - The autonomous wheelchair]]&lt;br /&gt;
&lt;br /&gt;
* [[Rawseeds|RAWSEEDS]]&lt;br /&gt;
&lt;br /&gt;
* [[Balancing robots: Tilty, TiltOne]]&lt;br /&gt;
&lt;br /&gt;
* [[ROBOWII ]]&lt;br /&gt;
&lt;br /&gt;
* [[PoliManus]]&lt;br /&gt;
&lt;br /&gt;
* [[ZOIDBERG - An autonomous bio-inspired RoboFish]]&lt;br /&gt;
&lt;br /&gt;
* [[Styx The 6 Whegs Robot]]&lt;br /&gt;
&lt;br /&gt;
* [[PolyGlove: a body-based haptic interface]]&lt;br /&gt;
&lt;br /&gt;
* [[ULISSE]]&lt;br /&gt;
&lt;br /&gt;
* [[PEKeB: a PiezoElectric KeyBoard]]&lt;br /&gt;
&lt;br /&gt;
* [[ Brake Padal Implementing on a Golf Cart ]]&lt;/div&gt;</summary>
		<author><name>DavideMigliore</name></author>	</entry>

	</feed>