<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://airwiki.deib.polimi.it/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=RossellaBlatt</id>
		<title>AIRWiki - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="https://airwiki.deib.polimi.it/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=RossellaBlatt"/>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php/Special:Contributions/RossellaBlatt"/>
		<updated>2026-04-10T22:16:18Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.25.6</generator>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=User:RossellaBlatt&amp;diff=6500</id>
		<title>User:RossellaBlatt</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=User:RossellaBlatt&amp;diff=6500"/>
				<updated>2009-05-20T15:12:49Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{SMWUser&lt;br /&gt;
|category=PhD&lt;br /&gt;
|firstname=Rossella&lt;br /&gt;
|lastname=Blatt&lt;br /&gt;
|email=blatt@elet.polimi.it&lt;br /&gt;
|advisor=MatteoMatteucci&lt;br /&gt;
|resarea = BioSignal Analysis&lt;br /&gt;
|projectpage = BCI based on Motor Imagery; Lung Cancer Detection by an Electronic Nose&lt;br /&gt;
|photo=FriendshipGraduating.jpg&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== Personal info ==&lt;br /&gt;
&lt;br /&gt;
I am a PhD student in Computer Science at the Politecnico di Milano, Italy, since January 2007 in the Artificial Intelligence field (PhD Cycle: XXII). &lt;br /&gt;
&lt;br /&gt;
My main research areas regard signal classification and recognition, with particular emphasis to the [[Lung_Cancer_Detection_by_an_Electronic_Nose|Olfactory Signal Processing]] and [[BCI_based_on_Motor_Imagery|Brain Computer Interfaces]] fields.&lt;br /&gt;
&lt;br /&gt;
In October 2006 I received a Msc in Telecommunications Engineering - Signal Processing from the Politecnico di Milano. The master thesis proposed a tool to diagnose lung cancer by the analysis of the olfactory signal acquired through an electronic nose.&lt;br /&gt;
I received a Bsc from the Politecnico di Milano in Telecommunications Engineering, in March 2004, with a thesis titled “Genomic Signal Processing: identification of exons in DNA sequences by an analysis in the frequency domain”. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
My Web Page in the department website: http://www.elet.polimi.it/people/blatt&lt;br /&gt;
&lt;br /&gt;
My personal Web Page: http://www.freewebs.com/rossellablatt&lt;br /&gt;
&lt;br /&gt;
If you want to contact me, my email address is:  [mailto:blatt@elet.polimi.it blatt@elet.polimi.it]&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=IIT-Lab&amp;diff=5776</id>
		<title>IIT-Lab</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=IIT-Lab&amp;diff=5776"/>
				<updated>2009-04-01T15:21:39Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is the IIT-Lab ==&lt;br /&gt;
&lt;br /&gt;
AIRLab-IITLab is dedicated to activities founded by the Italian Institute of Technology. &lt;br /&gt;
The lab hosts activities related to Brain-Computer Interfaces (BCI) and Affective Computing.&lt;br /&gt;
&lt;br /&gt;
=== Location ===&lt;br /&gt;
It is located in the Rimembranze di Lambrate building of the Department of Electronics and Information, Via Rimembranze di Lambrate, 14, Milan. &lt;br /&gt;
&lt;br /&gt;
=== Access Rules ===&lt;br /&gt;
The access to AIRLab-IITLab is reserved to registered users. If you are student and want to register, you have to fill the AIRLab registration form (to be signed by your tutor) and the security form. The key of the lab is provided to registered users by the doorkeeper at the main entrance of the Lambrate building. &lt;br /&gt;
&lt;br /&gt;
=== Booking ===&lt;br /&gt;
&lt;br /&gt;
Please book the instrument you want to use by adding an entry to the table; the booking of an instrument implies the booking of the room.  If you want to use a different instrument at the same time of an existing booking, please contact the other person involved and check that you can share the room; alternatively, you can ask the doorkeeper for an empty room in the building.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Please keep the table lines ordered by time (nearest bookings first); add new entries like this:&lt;br /&gt;
---CUT---&lt;br /&gt;
| Monday 13 March || 11:00-18:00 || [[User:DonaldDuck]] || ProComp&lt;br /&gt;
|- &lt;br /&gt;
| Friday 15 April || 9:30-13:00 || [[User:MickeyMouse]] || EEG&lt;br /&gt;
|- &lt;br /&gt;
---CUT---&lt;br /&gt;
Use abbreviations, if you like.&lt;br /&gt;
Please remove old entries.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
! Day !! Time !! Person !! Instrument&lt;br /&gt;
|-&lt;br /&gt;
| Every wednesday || 9:00-19:30 || [[User:RossellaBlatt]] || EEG&lt;br /&gt;
|-&lt;br /&gt;
| Every thursday (except April 2nd) || 9:00-19:30 || [[User:RossellaBlatt]] || EEG&lt;br /&gt;
|-&lt;br /&gt;
| Thursday 2 April || 9:30-14:00 || [[User:BernardoDalSeno]] || EEG&lt;br /&gt;
|-&lt;br /&gt;
| Friday 3 April || 14:00-17:00 || [[User:BernardoDalSeno]] || EEG&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* [[Brain-Computer Interface]] page on this Wiki&lt;br /&gt;
* [[Affective Computing]] page on this Wiki&lt;br /&gt;
* [http://www.airlab.elet.polimi.it/index.php/airlab/visitor_info/airlab_iitlab AIRLab - IITLab]&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Master_Level_Theses&amp;diff=5314</id>
		<title>Master Level Theses</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Master_Level_Theses&amp;diff=5314"/>
				<updated>2009-02-26T12:00:46Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: /* Analysis of the Olfactory Signal */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you can find proposals for master thesis (20 CFU for each student)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Agents, Multiagent Systems, Agencies ====--&amp;gt;&lt;br /&gt;
==== BioSignal Analysis ====&lt;br /&gt;
&lt;br /&gt;
===== Analysis of the Olfactory Signal =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Computational Intelligence techniques to analyse the olfactory signal acquired by an electronic nose for cancer diagnosis&lt;br /&gt;
|tutor=[[User:AndreaBonarini|Andrea Bonarini]] ([mailto:bonarini@elet.polimi.it email]), [[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucci@elet.polimi.it email]), [[User:RossellaBlatt|Rossella Blatt]] ([mailto:blatt@elet.polimi.it email])&lt;br /&gt;
|description= The electronic nose is an instrument able to detect and recognize odors, that is the volatile substances in the atmosphere or emitted by the analyzed substance. This device can react to a gas substance by providing signals that can be analyzed to classify the input. It is composed of a sensor array (MOS sensors, in our case) and a pattern classification system based on machine learning techniques. Each sensor reacts in a different way to the analyzed substance, providing multidimensional data that can be considered as a unique olfactory blueprint of the analyzed substance. We have already tested the use of the electronic nose as diagnostic tool for lung cancer; boosted from the very satisfactory results that we have achieved by these analysis, we want to investigate the possibility of diagnosing other types of cancer and to improve the current computation intelligence techniques.&lt;br /&gt;
The project is done in collaboration with the Istituto dei Tumori, Milano.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments: Matlab&lt;br /&gt;
&lt;br /&gt;
;Bibliography : BLATT R., BONARINI A, CALABRÒ E, DELLA TORRE M, MATTEUCCI M, PASTORINO U. (2008). Pattern Classification Techniques for Early Lung Cancer Diagnosis using an Electronic Nose. In: Frontiers in Artificial Intelligence and Applications. European Conference on Artificial Intelligence - Prestigious Applications of Intetelligent Systems. Patras, Greece. 21-15 luglio 2008. (vol. 178, pp. 693-697). ISBN/ISSN: 978-1-58603-891-5. IOS Press. [[Image:PAIS.pdf|Paper-PAIS2008]]&lt;br /&gt;
&lt;br /&gt;
|start=Anytime (a new acquisition phase will start in March)&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=Acquisition.jpg}}&lt;br /&gt;
&lt;br /&gt;
===== Sleep Staging =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Development of a computer-assisted CAP (Sleep cyclic alternating pattern) scoring method&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), Martin Mendez ([mailto:martin.mendez@polimi.it email]), Anna Maria Bianchi ([mailto:annamaria.bianchi@polimi.it email]), Mario Terzano (Ospedale di Parma)&lt;br /&gt;
|description=In 1985, Terzano describes for the first time the Cyclic Alternating Pattern [http://en.wikipedia.org/wiki/Cyclical_alternating_pattern] during sleep and, nowadays, CAP is widely accepted by the medical community as basic analysis of sleep. The CAP evaluation is of fundamental importance since it represents the mechanism developed by the brain evolution to monitor the inner and outer world and to assure the survival during sleep. However, visual detection of CAP in polisomnography (i.e., the standard procedure) is a slow and time-consuming process. This limiting factor generates the necessity of new computer-assisted scoring methods for fast CAP evaluation. This thesis deals with the development of a Decision Support System for CAP scoring based on features extraction at multi-system level (by statistical and signal analysis) and Pattern Recognition or Machine Learning approaches. This may allow the automatic detection of CAP sleep and could be integrated, through reinforcement learning techniques, with the corrections given by physicians.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, C/C++&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: Mario  Terzano, Liborio Parrino. ''Atlas, rules, and recording techniques for the scoring of cyclic alternating pattern (CAP) in human sleep'', Sleep Medicine 2 (2001) 537–553. [http://www.sciencedirect.com/science?_ob=ArticleURL&amp;amp;_udi=B6W6N-44DY2B4-8&amp;amp;_user=2620285&amp;amp;_coverDate=11%2F30%2F2001&amp;amp;_rdoc=1&amp;amp;_fmt=&amp;amp;_orig=search&amp;amp;_sort=d&amp;amp;view=c&amp;amp;_acct=C000058180&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=2620285&amp;amp;md5=aa61a060d005f23f6afed5c1fc2f1126]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=CAP_Sleep_Staging.jpg}}&lt;br /&gt;
&lt;br /&gt;
===== Brain-Computer Interface =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Recognition of the user's focusing on the stimulation matrix&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=A [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300]-based [[Brain-Computer_Interface|BCI]] stimulates the user continuously, and the detection of a P300 designates the choice of the user. When the user is not paying attention to the interface, false positives are likely. The objective of this work is to avoid this problem; the analysis of the electroencephalogram (EEG) over the visual cortex (and possibly an analysis of P300s or of other biosignals) should tell when the user is looking at the interface.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, [http://www.bci2000.org/ BCI2000], C++&lt;br /&gt;
:EEG system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: E. Donchin, K.M. Spencer, R. Wijesinghe. ''The Mental Prosthesis: Assessing the Speed of a P300-Based Brain-Computer Interface'' [http://www.cs.cmu.edu/~tanja/BCI/P300Speed_2000.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=B_p300_speller.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Creation of new EEG training by introduction of noise&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=A [[Brain-Computer Interface|BCI]] must be trained on the individual user in order to be effective.  This training phase require recording data in long sessions, which is time consuming and boring for the user.  The aim of this project is to develop algorithm to create new training EEG (electroencephalography) data from existing ones, so as to speed up the training phase.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
:Knowledge of C++ may be useful&lt;br /&gt;
:EEG system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: J.R. Wolpaw et al. ''Brain-computer interfaces for communication and control'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;journal=13882457&amp;amp;issue=v113i0006&amp;amp;article=767_bifcac&amp;amp;form=pdf&amp;amp;file=file.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=Bci_arch.png}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Real-time removal of ocular artifact from EEG&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=In a [[Brain-Computer Interface|BCI]] based on electroencephalogram (EEG), one of the most important sources of noise is related to ocular movements.  Algorithms have been devised to cancel the effect of such artifacts.  The project consists in the in the implementation in real time of an existing algorithm (or one newly developed) in order to improve the performance of a BCI.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, [http://www.bci2000.org/ BCI2000], C++&lt;br /&gt;
:EEG-system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: J.R. Wolpaw et al. ''Brain-computer interfaces for communication and control'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;journal=13882457&amp;amp;issue=v113i0006&amp;amp;article=767_bifcac&amp;amp;form=pdf&amp;amp;file=file.pdf]&lt;br /&gt;
: R.J. Croff, R.J. Barry. ''Removal of ocular artifact from the EEG: a review'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;issn=09877053&amp;amp;volume=30&amp;amp;issue=1&amp;amp;firstpage=5&amp;amp;form=html]&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=10-20&lt;br /&gt;
|image=B_bci.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Aperiodic visual stimulation in a VEP-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=[http://en.wikipedia.org/wiki/Evoked_potential#Visual_evoked_potential Visual-evoked potentials] (VEPs) are a possible way to drive the a [[Brain-Computer Interface|BCI]]. This projects aims at maximizing the discrimination between different stimuli.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, [http://www.bci2000.org/ BCI2000], C++&lt;br /&gt;
:EEG system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: J.R. Wolpaw et al. ''Brain-computer interfaces for communication and control'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;journal=13882457&amp;amp;issue=v113i0006&amp;amp;article=767_bifcac&amp;amp;form=pdf&amp;amp;file=file.pdf]&lt;br /&gt;
&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=Bci_arch.png}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Driving an autonomous wheelchair with a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=This project pulls together different Airlab projects with the aim to drive an autonomous wheelchair ([[LURCH - The autonomous wheelchair|LURCH]]) with a [[Brain-Computer Interface|BCI]], through the development of key software modules.  The work will be validated with live experiments.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, C, [http://www.bci2000.org/ BCI2000], Matlab&lt;br /&gt;
:Linux&lt;br /&gt;
:EEG system&lt;br /&gt;
:Lurch wheelchair&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: R. Blatt et al. ''Brain Control of a Smart Wheelchair'' [http://www.booksonline.iospress.com/Content/View.aspx?piid=9401]&lt;br /&gt;
|start=November 2008&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=LURCH_wheelchair.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Online automatic tuning of the number of repetitions in a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=In a [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300]-based [[Brain-Computer_Interface|BCI]], (visual) stimuli are presented to the user, and the intention of the user is recognized when a P300 potential is recognized in response of the desired stimulus.  In order to improve accuracy, many stimulation rounds are usually performed before making a decision.  The exact number of repetitions depends on the user and the goodness of the classifier, but it is usually fixed a-priori.  The aim of this project is to adapt the number of repetitions to changing conditions, so as to achieve the maximum accuracy with the minimum time.&lt;br /&gt;
The work will be validated with live experiments.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, [http://www.bci2000.org/ BCI2000], Matlab&lt;br /&gt;
:EEG system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: E. Donchin, K.M. Spencer, R. Wijesinghe. ''The Mental Prosthesis: Assessing the Speed of a P300-Based Brain-Computer Interface'' [http://www.cs.cmu.edu/~tanja/BCI/P300Speed_2000.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=B_p300_speller.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Computer Vision and Image Analysis ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== E-Science ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Machine Learning ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reinforcement Learning in Poker&lt;br /&gt;
|tutor=Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=In this years, Artificial Intelligence research has shifted its attention from fully observable environments such as Chess to more challenging partially observable ones such as Poker.&lt;br /&gt;
&lt;br /&gt;
Up to this moment research in this kind of environments, which can be formalized as Partially Observable Stochastic Games, has been more from a game theoretic point of view, thus focusing on the pursue of optimality and equilibrium, with no attention to payoff maximization, which may be more interesting in many real-world contexts.&lt;br /&gt;
&lt;br /&gt;
On the other hand Reinforcement Learning techniques demonstrated to be successful in solving both fully observable problems, single and multi-agent, and single-agent partially observable ones, while lacking application to the partially observable multi-agent framework.&lt;br /&gt;
&lt;br /&gt;
This research aims at studying the solution of Partially Observable Stochastic Games, analyzing the possibility to combine the Opponent Modeling concept with the well proven Reinforcement Learning solution techniques to solve problems in this framework, adopting Poker as testbed.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20-40&lt;br /&gt;
|image=PokerPRLT.png}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= EyeBot&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it), Alessandro Giusti (giusti-AT-elet-DOT-polimi-DOT-it), and Pierluigi Taddei (taddei-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal bechmark for machine learning techniques. We already organized two successfull competitions based on TORCS where competitors have been asked to develop a controller using their preferred machine learning techniques. So far, the controller developed for TORCS used as input only information extracted directly from the state of the game. The goal of this project is to extend the existing controller API (see [http://cig.dei.polimi.it/ here]) to use the visual information (e.g. the screenshots of the game) as input to the controllers. A successfull project will include both the development of the API and some basic imaga preprocessing to extract information from the images.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=TORCS2.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= SmarTrack&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The generation of customized game content for each player is an attractive direction to improve the game experience in the next-generation computer games. In this scenario, Machine Learning could play an important role to provide automatically such customized game content.&lt;br /&gt;
The goal of this project is to apply machine learning techniques for the generation of customized tracks in&lt;br /&gt;
[http://torcs.sourceforge.net/ TORCS], a state-of-the-art open source racing simulator. The project include different activities: the automatic generation of tracks, the section of relevant features to characterize a track and the analysis of an interest measure.  &lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=TORCS3.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Automatic generation of domain ontologies&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:AndreaBonarini|Andrea Bonarini]] ([mailto:bonarini%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description= This thesis to be developed together with [http://www.noustat.it/ Noustat S.r.l.], who are developing research activities directed toward the optimization of knowledge management services, in collaboration with another company operating in this field. This project is aimed at removing the ontology building bottleneck, long and expensive activity that usually requires the direct collaboration of a domain expert. The possibility of automatic building the ontology, starting from a set of textual documents related to a specific domain, is expected to improve the ability to provide the knowledge management service, both by reducing the time-to-application, and by increasing the number of domains that can be covered. For this project, unsupervised learning methods will be applied in sequence, exploiting the topological properties of the ultra-metric spaces that emerge from the taxonomic structure of the concepts present in the texts, and associative methods will extend the concept network to lateral, non-hierarchical relationships.&lt;br /&gt;
&lt;br /&gt;
|start=before November 30th&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=}}&lt;br /&gt;
&lt;br /&gt;
==== Affective Computing ====&lt;br /&gt;
&lt;br /&gt;
==== Ontologies and Semantic Web ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Automatic generation of domain ontologies&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:AndreaBonarini|Andrea Bonarini]] ([mailto:bonarini%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description= This thesis to be developed together with [http://www.noustat.it/ Noustat S.r.l.], who are developing research activities directed toward the optimization of knowledge management services, in collaboration with another company operating in this field. This project is aimed at removing the ontology building bottleneck, long and expensive activity that usually requires the direct collaboration of a domain expert. The possibility of automatic building the ontology, starting from a set of textual documents related to a specific domain, is expected to improve the ability to provide the knowledge management service, both by reducing the time-to-application, and by increasing the number of domains that can be covered. For this project, unsupervised learning methods will be applied in sequence, exploiting the topological properties of the ultra-metric spaces that emerge from the taxonomic structure of the concepts present in the texts, and associative methods will extend the concept network to lateral, non-hierarchical relationships.&lt;br /&gt;
&lt;br /&gt;
|start=before November 30th&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=OntologyFromText.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Philosophy of Artificial Intelligence ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Robotics ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robot games&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive game with robots using commercial devices such as the WII Mote (see the [http://airwiki.elet.polimi.it/mediawiki/index.php/Robogames Robogames page])  &lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Design and implementation of the game on one of the available robots and extension of the robot functionalities&lt;br /&gt;
* Design and implementation of the game and a new suitable robot&lt;br /&gt;
* Evaluation of the game with users (in collaboration with [http://www.elet.polimi.it/people/garzotto Franca Garzotto])&lt;br /&gt;
&lt;br /&gt;
These projects allow to experiment with real mobile robots and real interaction devices.&lt;br /&gt;
&lt;br /&gt;
Parts of these projects can be considered as course projects. These projects can also be extended to cover course projects.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=7.5-20&lt;br /&gt;
|image=Robowii_robot.jpg}}&lt;br /&gt;
&amp;lt;!--==== Soft Computing ====--&amp;gt;&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=File:Acquisition.JPG&amp;diff=5313</id>
		<title>File:Acquisition.JPG</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=File:Acquisition.JPG&amp;diff=5313"/>
				<updated>2009-02-26T11:59:07Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: uploaded a new version of &amp;quot;Image:Acquisition.JPG&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=File:Acquisition.JPG&amp;diff=5312</id>
		<title>File:Acquisition.JPG</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=File:Acquisition.JPG&amp;diff=5312"/>
				<updated>2009-02-26T11:03:37Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=File:EnoseAcquisition.JPG&amp;diff=5311</id>
		<title>File:EnoseAcquisition.JPG</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=File:EnoseAcquisition.JPG&amp;diff=5311"/>
				<updated>2009-02-26T11:01:00Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=File:SchemaABlocchiNaso.jpg&amp;diff=5310</id>
		<title>File:SchemaABlocchiNaso.jpg</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=File:SchemaABlocchiNaso.jpg&amp;diff=5310"/>
				<updated>2009-02-26T10:57:23Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: uploaded a new version of &amp;quot;Image:SchemaABlocchiNaso.jpg&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Basic functioning of an electronic nose&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=File:SchemaABlocchiNaso.jpg&amp;diff=5309</id>
		<title>File:SchemaABlocchiNaso.jpg</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=File:SchemaABlocchiNaso.jpg&amp;diff=5309"/>
				<updated>2009-02-26T10:56:37Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: uploaded a new version of &amp;quot;Image:SchemaABlocchiNaso.jpg&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Basic functioning of an electronic nose&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=File:SchemaABlocchiNaso.jpg&amp;diff=5308</id>
		<title>File:SchemaABlocchiNaso.jpg</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=File:SchemaABlocchiNaso.jpg&amp;diff=5308"/>
				<updated>2009-02-26T10:55:03Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: uploaded a new version of &amp;quot;Image:SchemaABlocchiNaso.jpg&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Basic functioning of an electronic nose&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Master_Level_Theses&amp;diff=5307</id>
		<title>Master Level Theses</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Master_Level_Theses&amp;diff=5307"/>
				<updated>2009-02-26T10:54:14Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: /* BioSignal Analysis */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you can find proposals for master thesis (20 CFU for each student)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Agents, Multiagent Systems, Agencies ====--&amp;gt;&lt;br /&gt;
==== BioSignal Analysis ====&lt;br /&gt;
&lt;br /&gt;
===== Analysis of the Olfactory Signal =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Computational Intelligence techniques to analyse the olfactory signal acquired by an electronic nose for cancer diagnosis&lt;br /&gt;
|tutor=[[User:AndreaBonarini|Andrea Bonarini]] ([mailto:bonarini@elet.polimi.it email]), [[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucci@elet.polimi.it email]), [[User:RossellaBlatt|Rossella Blatt]] ([mailto:blatt@elet.polimi.it email])&lt;br /&gt;
|description= The electronic nose is an instrument able to detect and recognize odors, that is the volatile substances in the atmosphere or emitted by the analyzed substance. This device can react to a gas substance by providing signals that can be analyzed to classify the input. It is composed of a sensor array (MOS sensors, in our case) and a pattern classification system based on machine learning techniques. Each sensor reacts in a different way to the analyzed substance, providing multidimensional data that can be considered as a unique olfactory blueprint of the analyzed substance. We have already tested the use of the electronic nose as diagnostic tool for lung cancer; boosted from the very satisfactory results that we have achieved by these analysis, we want to investigate the possibility of diagnosing other types of cancer and to improve the current computation intelligence techniques.&lt;br /&gt;
The project is done in collaboration with the Istituto dei Tumori, Milano.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments: Matlab&lt;br /&gt;
&lt;br /&gt;
;Bibliography : BLATT R., BONARINI A, CALABRÒ E, DELLA TORRE M, MATTEUCCI M, PASTORINO U. (2008). Pattern Classification Techniques for Early Lung Cancer Diagnosis using an Electronic Nose. In: Frontiers in Artificial Intelligence and Applications. European Conference on Artificial Intelligence - Prestigious Applications of Intetelligent Systems. Patras, Greece. 21-15 luglio 2008. (vol. 178, pp. 693-697). ISBN/ISSN: 978-1-58603-891-5. IOS Press. [[Image:PAIS.pdf|Paper-PAIS2008]]&lt;br /&gt;
&lt;br /&gt;
|start=Anytime (a new acquisition phase will start in March)&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=SchemaABlocchiNaso.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===== Sleep Staging =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Development of a computer-assisted CAP (Sleep cyclic alternating pattern) scoring method&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), Martin Mendez ([mailto:martin.mendez@polimi.it email]), Anna Maria Bianchi ([mailto:annamaria.bianchi@polimi.it email]), Mario Terzano (Ospedale di Parma)&lt;br /&gt;
|description=In 1985, Terzano describes for the first time the Cyclic Alternating Pattern [http://en.wikipedia.org/wiki/Cyclical_alternating_pattern] during sleep and, nowadays, CAP is widely accepted by the medical community as basic analysis of sleep. The CAP evaluation is of fundamental importance since it represents the mechanism developed by the brain evolution to monitor the inner and outer world and to assure the survival during sleep. However, visual detection of CAP in polisomnography (i.e., the standard procedure) is a slow and time-consuming process. This limiting factor generates the necessity of new computer-assisted scoring methods for fast CAP evaluation. This thesis deals with the development of a Decision Support System for CAP scoring based on features extraction at multi-system level (by statistical and signal analysis) and Pattern Recognition or Machine Learning approaches. This may allow the automatic detection of CAP sleep and could be integrated, through reinforcement learning techniques, with the corrections given by physicians.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, C/C++&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: Mario  Terzano, Liborio Parrino. ''Atlas, rules, and recording techniques for the scoring of cyclic alternating pattern (CAP) in human sleep'', Sleep Medicine 2 (2001) 537–553. [http://www.sciencedirect.com/science?_ob=ArticleURL&amp;amp;_udi=B6W6N-44DY2B4-8&amp;amp;_user=2620285&amp;amp;_coverDate=11%2F30%2F2001&amp;amp;_rdoc=1&amp;amp;_fmt=&amp;amp;_orig=search&amp;amp;_sort=d&amp;amp;view=c&amp;amp;_acct=C000058180&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=2620285&amp;amp;md5=aa61a060d005f23f6afed5c1fc2f1126]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=CAP_Sleep_Staging.jpg}}&lt;br /&gt;
&lt;br /&gt;
===== Brain-Computer Interface =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Recognition of the user's focusing on the stimulation matrix&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=A [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300]-based [[Brain-Computer_Interface|BCI]] stimulates the user continuously, and the detection of a P300 designates the choice of the user. When the user is not paying attention to the interface, false positives are likely. The objective of this work is to avoid this problem; the analysis of the electroencephalogram (EEG) over the visual cortex (and possibly an analysis of P300s or of other biosignals) should tell when the user is looking at the interface.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, [http://www.bci2000.org/ BCI2000], C++&lt;br /&gt;
:EEG system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: E. Donchin, K.M. Spencer, R. Wijesinghe. ''The Mental Prosthesis: Assessing the Speed of a P300-Based Brain-Computer Interface'' [http://www.cs.cmu.edu/~tanja/BCI/P300Speed_2000.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=B_p300_speller.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Creation of new EEG training by introduction of noise&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=A [[Brain-Computer Interface|BCI]] must be trained on the individual user in order to be effective.  This training phase require recording data in long sessions, which is time consuming and boring for the user.  The aim of this project is to develop algorithm to create new training EEG (electroencephalography) data from existing ones, so as to speed up the training phase.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
:Knowledge of C++ may be useful&lt;br /&gt;
:EEG system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: J.R. Wolpaw et al. ''Brain-computer interfaces for communication and control'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;journal=13882457&amp;amp;issue=v113i0006&amp;amp;article=767_bifcac&amp;amp;form=pdf&amp;amp;file=file.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=Bci_arch.png}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Real-time removal of ocular artifact from EEG&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=In a [[Brain-Computer Interface|BCI]] based on electroencephalogram (EEG), one of the most important sources of noise is related to ocular movements.  Algorithms have been devised to cancel the effect of such artifacts.  The project consists in the in the implementation in real time of an existing algorithm (or one newly developed) in order to improve the performance of a BCI.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, [http://www.bci2000.org/ BCI2000], C++&lt;br /&gt;
:EEG-system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: J.R. Wolpaw et al. ''Brain-computer interfaces for communication and control'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;journal=13882457&amp;amp;issue=v113i0006&amp;amp;article=767_bifcac&amp;amp;form=pdf&amp;amp;file=file.pdf]&lt;br /&gt;
: R.J. Croff, R.J. Barry. ''Removal of ocular artifact from the EEG: a review'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;issn=09877053&amp;amp;volume=30&amp;amp;issue=1&amp;amp;firstpage=5&amp;amp;form=html]&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=10-20&lt;br /&gt;
|image=B_bci.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Aperiodic visual stimulation in a VEP-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=[http://en.wikipedia.org/wiki/Evoked_potential#Visual_evoked_potential Visual-evoked potentials] (VEPs) are a possible way to drive the a [[Brain-Computer Interface|BCI]]. This projects aims at maximizing the discrimination between different stimuli.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, [http://www.bci2000.org/ BCI2000], C++&lt;br /&gt;
:EEG system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: J.R. Wolpaw et al. ''Brain-computer interfaces for communication and control'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;journal=13882457&amp;amp;issue=v113i0006&amp;amp;article=767_bifcac&amp;amp;form=pdf&amp;amp;file=file.pdf]&lt;br /&gt;
&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=Bci_arch.png}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Driving an autonomous wheelchair with a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=This project pulls together different Airlab projects with the aim to drive an autonomous wheelchair ([[LURCH - The autonomous wheelchair|LURCH]]) with a [[Brain-Computer Interface|BCI]], through the development of key software modules.  The work will be validated with live experiments.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, C, [http://www.bci2000.org/ BCI2000], Matlab&lt;br /&gt;
:Linux&lt;br /&gt;
:EEG system&lt;br /&gt;
:Lurch wheelchair&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: R. Blatt et al. ''Brain Control of a Smart Wheelchair'' [http://www.booksonline.iospress.com/Content/View.aspx?piid=9401]&lt;br /&gt;
|start=November 2008&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=LURCH_wheelchair.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Online automatic tuning of the number of repetitions in a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=In a [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300]-based [[Brain-Computer_Interface|BCI]], (visual) stimuli are presented to the user, and the intention of the user is recognized when a P300 potential is recognized in response of the desired stimulus.  In order to improve accuracy, many stimulation rounds are usually performed before making a decision.  The exact number of repetitions depends on the user and the goodness of the classifier, but it is usually fixed a-priori.  The aim of this project is to adapt the number of repetitions to changing conditions, so as to achieve the maximum accuracy with the minimum time.&lt;br /&gt;
The work will be validated with live experiments.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, [http://www.bci2000.org/ BCI2000], Matlab&lt;br /&gt;
:EEG system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: E. Donchin, K.M. Spencer, R. Wijesinghe. ''The Mental Prosthesis: Assessing the Speed of a P300-Based Brain-Computer Interface'' [http://www.cs.cmu.edu/~tanja/BCI/P300Speed_2000.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=B_p300_speller.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Computer Vision and Image Analysis ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== E-Science ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Machine Learning ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reinforcement Learning in Poker&lt;br /&gt;
|tutor=Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=In this years, Artificial Intelligence research has shifted its attention from fully observable environments such as Chess to more challenging partially observable ones such as Poker.&lt;br /&gt;
&lt;br /&gt;
Up to this moment research in this kind of environments, which can be formalized as Partially Observable Stochastic Games, has been more from a game theoretic point of view, thus focusing on the pursue of optimality and equilibrium, with no attention to payoff maximization, which may be more interesting in many real-world contexts.&lt;br /&gt;
&lt;br /&gt;
On the other hand Reinforcement Learning techniques demonstrated to be successful in solving both fully observable problems, single and multi-agent, and single-agent partially observable ones, while lacking application to the partially observable multi-agent framework.&lt;br /&gt;
&lt;br /&gt;
This research aims at studying the solution of Partially Observable Stochastic Games, analyzing the possibility to combine the Opponent Modeling concept with the well proven Reinforcement Learning solution techniques to solve problems in this framework, adopting Poker as testbed.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20-40&lt;br /&gt;
|image=PokerPRLT.png}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= EyeBot&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it), Alessandro Giusti (giusti-AT-elet-DOT-polimi-DOT-it), and Pierluigi Taddei (taddei-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal bechmark for machine learning techniques. We already organized two successfull competitions based on TORCS where competitors have been asked to develop a controller using their preferred machine learning techniques. So far, the controller developed for TORCS used as input only information extracted directly from the state of the game. The goal of this project is to extend the existing controller API (see [http://cig.dei.polimi.it/ here]) to use the visual information (e.g. the screenshots of the game) as input to the controllers. A successfull project will include both the development of the API and some basic imaga preprocessing to extract information from the images.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=TORCS2.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= SmarTrack&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The generation of customized game content for each player is an attractive direction to improve the game experience in the next-generation computer games. In this scenario, Machine Learning could play an important role to provide automatically such customized game content.&lt;br /&gt;
The goal of this project is to apply machine learning techniques for the generation of customized tracks in&lt;br /&gt;
[http://torcs.sourceforge.net/ TORCS], a state-of-the-art open source racing simulator. The project include different activities: the automatic generation of tracks, the section of relevant features to characterize a track and the analysis of an interest measure.  &lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=TORCS3.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Automatic generation of domain ontologies&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:AndreaBonarini|Andrea Bonarini]] ([mailto:bonarini%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description= This thesis to be developed together with [http://www.noustat.it/ Noustat S.r.l.], who are developing research activities directed toward the optimization of knowledge management services, in collaboration with another company operating in this field. This project is aimed at removing the ontology building bottleneck, long and expensive activity that usually requires the direct collaboration of a domain expert. The possibility of automatic building the ontology, starting from a set of textual documents related to a specific domain, is expected to improve the ability to provide the knowledge management service, both by reducing the time-to-application, and by increasing the number of domains that can be covered. For this project, unsupervised learning methods will be applied in sequence, exploiting the topological properties of the ultra-metric spaces that emerge from the taxonomic structure of the concepts present in the texts, and associative methods will extend the concept network to lateral, non-hierarchical relationships.&lt;br /&gt;
&lt;br /&gt;
|start=before November 30th&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=}}&lt;br /&gt;
&lt;br /&gt;
==== Affective Computing ====&lt;br /&gt;
&lt;br /&gt;
==== Ontologies and Semantic Web ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Automatic generation of domain ontologies&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:AndreaBonarini|Andrea Bonarini]] ([mailto:bonarini%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description= This thesis to be developed together with [http://www.noustat.it/ Noustat S.r.l.], who are developing research activities directed toward the optimization of knowledge management services, in collaboration with another company operating in this field. This project is aimed at removing the ontology building bottleneck, long and expensive activity that usually requires the direct collaboration of a domain expert. The possibility of automatic building the ontology, starting from a set of textual documents related to a specific domain, is expected to improve the ability to provide the knowledge management service, both by reducing the time-to-application, and by increasing the number of domains that can be covered. For this project, unsupervised learning methods will be applied in sequence, exploiting the topological properties of the ultra-metric spaces that emerge from the taxonomic structure of the concepts present in the texts, and associative methods will extend the concept network to lateral, non-hierarchical relationships.&lt;br /&gt;
&lt;br /&gt;
|start=before November 30th&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=OntologyFromText.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Philosophy of Artificial Intelligence ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Robotics ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robot games&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive game with robots using commercial devices such as the WII Mote (see the [http://airwiki.elet.polimi.it/mediawiki/index.php/Robogames Robogames page])  &lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Design and implementation of the game on one of the available robots and extension of the robot functionalities&lt;br /&gt;
* Design and implementation of the game and a new suitable robot&lt;br /&gt;
* Evaluation of the game with users (in collaboration with [http://www.elet.polimi.it/people/garzotto Franca Garzotto])&lt;br /&gt;
&lt;br /&gt;
These projects allow to experiment with real mobile robots and real interaction devices.&lt;br /&gt;
&lt;br /&gt;
Parts of these projects can be considered as course projects. These projects can also be extended to cover course projects.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=7.5-20&lt;br /&gt;
|image=Robowii_robot.jpg}}&lt;br /&gt;
&amp;lt;!--==== Soft Computing ====--&amp;gt;&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=File:SchemaABlocchiNaso.jpg&amp;diff=5306</id>
		<title>File:SchemaABlocchiNaso.jpg</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=File:SchemaABlocchiNaso.jpg&amp;diff=5306"/>
				<updated>2009-02-26T10:37:48Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: uploaded a new version of &amp;quot;Image:SchemaABlocchiNaso.jpg&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Basic functioning of an electronic nose&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=File:SchemaABlocchiNaso.jpg&amp;diff=5305</id>
		<title>File:SchemaABlocchiNaso.jpg</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=File:SchemaABlocchiNaso.jpg&amp;diff=5305"/>
				<updated>2009-02-26T10:36:27Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: uploaded a new version of &amp;quot;Image:SchemaABlocchiNaso.jpg&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Basic functioning of an electronic nose&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=File:SchemaABlocchiNaso.jpg&amp;diff=5304</id>
		<title>File:SchemaABlocchiNaso.jpg</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=File:SchemaABlocchiNaso.jpg&amp;diff=5304"/>
				<updated>2009-02-26T10:36:04Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: uploaded a new version of &amp;quot;Image:SchemaABlocchiNaso.jpg&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Basic functioning of an electronic nose&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=File:SchemaABlocchiNaso.jpg&amp;diff=5303</id>
		<title>File:SchemaABlocchiNaso.jpg</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=File:SchemaABlocchiNaso.jpg&amp;diff=5303"/>
				<updated>2009-02-26T10:27:56Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: Basic functioning of an electronic nose&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Basic functioning of an electronic nose&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=File:SchemaABlocchiNaso.png&amp;diff=5302</id>
		<title>File:SchemaABlocchiNaso.png</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=File:SchemaABlocchiNaso.png&amp;diff=5302"/>
				<updated>2009-02-26T10:25:48Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: Block scheme of the main functioning of an electronic nose&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Block scheme of the main functioning of an electronic nose&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Master_Level_Theses&amp;diff=5301</id>
		<title>Master Level Theses</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Master_Level_Theses&amp;diff=5301"/>
				<updated>2009-02-26T10:20:06Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: /* BioSignal Analysis */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you can find proposals for master thesis (20 CFU for each student)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Agents, Multiagent Systems, Agencies ====--&amp;gt;&lt;br /&gt;
==== BioSignal Analysis ====&lt;br /&gt;
&lt;br /&gt;
===== Analysis of the Olfactory Signal =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Computational Intelligence techniques to analyse the olfactory signal acquired by an electronic nose for cancer diagnosis&lt;br /&gt;
|tutor=[[User:AndreaBonarini|Andrea Bonarini]] ([mailto:bonarini@elet.polimi.it email]), [[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucci@elet.polimi.it email]), [[User:RossellaBlatt|Rossella Blatt]] ([mailto:blatt@elet.polimi.it email])&lt;br /&gt;
|description= The electronic nose is an instrument able to detect and recognize odors, that is the volatile substances in the atmosphere or emitted by the analyzed substance. This device can react to a gas substance by providing signals that can be analyzed to classify the input. It is composed of a sensor array (MOS sensors, in our case) and a pattern classification system based on machine learning techniques. Each sensor reacts in a different way to the analyzed substance, providing multidimensional data that can be considered as a unique olfactory blueprint of the analyzed substance. We have already tested the use of the electronic nose as diagnostic tool for lung cancer; boosted from the very satisfactory results that we have achieved by these analysis, we want to investigate the possibility of diagnosing other types of cancer and to improve the current computation intelligence techniques.&lt;br /&gt;
The project is done in collaboration with the Istituto dei Tumori, Milano.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments: Matlab&lt;br /&gt;
&lt;br /&gt;
;Bibliography : BLATT R., BONARINI A, CALABRÒ E, DELLA TORRE M, MATTEUCCI M, PASTORINO U. (2008). Pattern Classification Techniques for Early Lung Cancer Diagnosis using an Electronic Nose. In: Frontiers in Artificial Intelligence and Applications. European Conference on Artificial Intelligence - Prestigious Applications of Intetelligent Systems. Patras, Greece. 21-15 luglio 2008. (vol. 178, pp. 693-697). ISBN/ISSN: 978-1-58603-891-5. IOS Press. [[Image:PAIS.pdf|Paper-PAIS2008]]&lt;br /&gt;
&lt;br /&gt;
|start=Anytime (a new acquisition phase will start in March)&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=Enose.jpg}}&lt;br /&gt;
&lt;br /&gt;
===== Sleep Staging =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Development of a computer-assisted CAP (Sleep cyclic alternating pattern) scoring method&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), Martin Mendez ([mailto:martin.mendez@polimi.it email]), Anna Maria Bianchi ([mailto:annamaria.bianchi@polimi.it email]), Mario Terzano (Ospedale di Parma)&lt;br /&gt;
|description=In 1985, Terzano describes for the first time the Cyclic Alternating Pattern [http://en.wikipedia.org/wiki/Cyclical_alternating_pattern] during sleep and, nowadays, CAP is widely accepted by the medical community as basic analysis of sleep. The CAP evaluation is of fundamental importance since it represents the mechanism developed by the brain evolution to monitor the inner and outer world and to assure the survival during sleep. However, visual detection of CAP in polisomnography (i.e., the standard procedure) is a slow and time-consuming process. This limiting factor generates the necessity of new computer-assisted scoring methods for fast CAP evaluation. This thesis deals with the development of a Decision Support System for CAP scoring based on features extraction at multi-system level (by statistical and signal analysis) and Pattern Recognition or Machine Learning approaches. This may allow the automatic detection of CAP sleep and could be integrated, through reinforcement learning techniques, with the corrections given by physicians.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, C/C++&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: Mario  Terzano, Liborio Parrino. ''Atlas, rules, and recording techniques for the scoring of cyclic alternating pattern (CAP) in human sleep'', Sleep Medicine 2 (2001) 537–553. [http://www.sciencedirect.com/science?_ob=ArticleURL&amp;amp;_udi=B6W6N-44DY2B4-8&amp;amp;_user=2620285&amp;amp;_coverDate=11%2F30%2F2001&amp;amp;_rdoc=1&amp;amp;_fmt=&amp;amp;_orig=search&amp;amp;_sort=d&amp;amp;view=c&amp;amp;_acct=C000058180&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=2620285&amp;amp;md5=aa61a060d005f23f6afed5c1fc2f1126]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=CAP_Sleep_Staging.jpg}}&lt;br /&gt;
&lt;br /&gt;
===== Brain-Computer Interface =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Recognition of the user's focusing on the stimulation matrix&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=A [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300]-based [[Brain-Computer_Interface|BCI]] stimulates the user continuously, and the detection of a P300 designates the choice of the user. When the user is not paying attention to the interface, false positives are likely. The objective of this work is to avoid this problem; the analysis of the electroencephalogram (EEG) over the visual cortex (and possibly an analysis of P300s or of other biosignals) should tell when the user is looking at the interface.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, [http://www.bci2000.org/ BCI2000], C++&lt;br /&gt;
:EEG system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: E. Donchin, K.M. Spencer, R. Wijesinghe. ''The Mental Prosthesis: Assessing the Speed of a P300-Based Brain-Computer Interface'' [http://www.cs.cmu.edu/~tanja/BCI/P300Speed_2000.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=B_p300_speller.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Creation of new EEG training by introduction of noise&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=A [[Brain-Computer Interface|BCI]] must be trained on the individual user in order to be effective.  This training phase require recording data in long sessions, which is time consuming and boring for the user.  The aim of this project is to develop algorithm to create new training EEG (electroencephalography) data from existing ones, so as to speed up the training phase.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
:Knowledge of C++ may be useful&lt;br /&gt;
:EEG system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: J.R. Wolpaw et al. ''Brain-computer interfaces for communication and control'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;journal=13882457&amp;amp;issue=v113i0006&amp;amp;article=767_bifcac&amp;amp;form=pdf&amp;amp;file=file.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=Bci_arch.png}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Real-time removal of ocular artifact from EEG&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=In a [[Brain-Computer Interface|BCI]] based on electroencephalogram (EEG), one of the most important sources of noise is related to ocular movements.  Algorithms have been devised to cancel the effect of such artifacts.  The project consists in the in the implementation in real time of an existing algorithm (or one newly developed) in order to improve the performance of a BCI.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, [http://www.bci2000.org/ BCI2000], C++&lt;br /&gt;
:EEG-system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: J.R. Wolpaw et al. ''Brain-computer interfaces for communication and control'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;journal=13882457&amp;amp;issue=v113i0006&amp;amp;article=767_bifcac&amp;amp;form=pdf&amp;amp;file=file.pdf]&lt;br /&gt;
: R.J. Croff, R.J. Barry. ''Removal of ocular artifact from the EEG: a review'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;issn=09877053&amp;amp;volume=30&amp;amp;issue=1&amp;amp;firstpage=5&amp;amp;form=html]&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=10-20&lt;br /&gt;
|image=B_bci.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Aperiodic visual stimulation in a VEP-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=[http://en.wikipedia.org/wiki/Evoked_potential#Visual_evoked_potential Visual-evoked potentials] (VEPs) are a possible way to drive the a [[Brain-Computer Interface|BCI]]. This projects aims at maximizing the discrimination between different stimuli.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, [http://www.bci2000.org/ BCI2000], C++&lt;br /&gt;
:EEG system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: J.R. Wolpaw et al. ''Brain-computer interfaces for communication and control'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;journal=13882457&amp;amp;issue=v113i0006&amp;amp;article=767_bifcac&amp;amp;form=pdf&amp;amp;file=file.pdf]&lt;br /&gt;
&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=Bci_arch.png}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Driving an autonomous wheelchair with a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=This project pulls together different Airlab projects with the aim to drive an autonomous wheelchair ([[LURCH - The autonomous wheelchair|LURCH]]) with a [[Brain-Computer Interface|BCI]], through the development of key software modules.  The work will be validated with live experiments.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, C, [http://www.bci2000.org/ BCI2000], Matlab&lt;br /&gt;
:Linux&lt;br /&gt;
:EEG system&lt;br /&gt;
:Lurch wheelchair&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: R. Blatt et al. ''Brain Control of a Smart Wheelchair'' [http://www.booksonline.iospress.com/Content/View.aspx?piid=9401]&lt;br /&gt;
|start=November 2008&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=LURCH_wheelchair.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Online automatic tuning of the number of repetitions in a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=In a [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300]-based [[Brain-Computer_Interface|BCI]], (visual) stimuli are presented to the user, and the intention of the user is recognized when a P300 potential is recognized in response of the desired stimulus.  In order to improve accuracy, many stimulation rounds are usually performed before making a decision.  The exact number of repetitions depends on the user and the goodness of the classifier, but it is usually fixed a-priori.  The aim of this project is to adapt the number of repetitions to changing conditions, so as to achieve the maximum accuracy with the minimum time.&lt;br /&gt;
The work will be validated with live experiments.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, [http://www.bci2000.org/ BCI2000], Matlab&lt;br /&gt;
:EEG system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: E. Donchin, K.M. Spencer, R. Wijesinghe. ''The Mental Prosthesis: Assessing the Speed of a P300-Based Brain-Computer Interface'' [http://www.cs.cmu.edu/~tanja/BCI/P300Speed_2000.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=B_p300_speller.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Computer Vision and Image Analysis ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== E-Science ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Machine Learning ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reinforcement Learning in Poker&lt;br /&gt;
|tutor=Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=In this years, Artificial Intelligence research has shifted its attention from fully observable environments such as Chess to more challenging partially observable ones such as Poker.&lt;br /&gt;
&lt;br /&gt;
Up to this moment research in this kind of environments, which can be formalized as Partially Observable Stochastic Games, has been more from a game theoretic point of view, thus focusing on the pursue of optimality and equilibrium, with no attention to payoff maximization, which may be more interesting in many real-world contexts.&lt;br /&gt;
&lt;br /&gt;
On the other hand Reinforcement Learning techniques demonstrated to be successful in solving both fully observable problems, single and multi-agent, and single-agent partially observable ones, while lacking application to the partially observable multi-agent framework.&lt;br /&gt;
&lt;br /&gt;
This research aims at studying the solution of Partially Observable Stochastic Games, analyzing the possibility to combine the Opponent Modeling concept with the well proven Reinforcement Learning solution techniques to solve problems in this framework, adopting Poker as testbed.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20-40&lt;br /&gt;
|image=PokerPRLT.png}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= EyeBot&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it), Alessandro Giusti (giusti-AT-elet-DOT-polimi-DOT-it), and Pierluigi Taddei (taddei-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal bechmark for machine learning techniques. We already organized two successfull competitions based on TORCS where competitors have been asked to develop a controller using their preferred machine learning techniques. So far, the controller developed for TORCS used as input only information extracted directly from the state of the game. The goal of this project is to extend the existing controller API (see [http://cig.dei.polimi.it/ here]) to use the visual information (e.g. the screenshots of the game) as input to the controllers. A successfull project will include both the development of the API and some basic imaga preprocessing to extract information from the images.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=TORCS2.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= SmarTrack&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The generation of customized game content for each player is an attractive direction to improve the game experience in the next-generation computer games. In this scenario, Machine Learning could play an important role to provide automatically such customized game content.&lt;br /&gt;
The goal of this project is to apply machine learning techniques for the generation of customized tracks in&lt;br /&gt;
[http://torcs.sourceforge.net/ TORCS], a state-of-the-art open source racing simulator. The project include different activities: the automatic generation of tracks, the section of relevant features to characterize a track and the analysis of an interest measure.  &lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=TORCS3.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Automatic generation of domain ontologies&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:AndreaBonarini|Andrea Bonarini]] ([mailto:bonarini%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description= This thesis to be developed together with [http://www.noustat.it/ Noustat S.r.l.], who are developing research activities directed toward the optimization of knowledge management services, in collaboration with another company operating in this field. This project is aimed at removing the ontology building bottleneck, long and expensive activity that usually requires the direct collaboration of a domain expert. The possibility of automatic building the ontology, starting from a set of textual documents related to a specific domain, is expected to improve the ability to provide the knowledge management service, both by reducing the time-to-application, and by increasing the number of domains that can be covered. For this project, unsupervised learning methods will be applied in sequence, exploiting the topological properties of the ultra-metric spaces that emerge from the taxonomic structure of the concepts present in the texts, and associative methods will extend the concept network to lateral, non-hierarchical relationships.&lt;br /&gt;
&lt;br /&gt;
|start=before November 30th&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=}}&lt;br /&gt;
&lt;br /&gt;
==== Affective Computing ====&lt;br /&gt;
&lt;br /&gt;
==== Ontologies and Semantic Web ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Automatic generation of domain ontologies&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:AndreaBonarini|Andrea Bonarini]] ([mailto:bonarini%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description= This thesis to be developed together with [http://www.noustat.it/ Noustat S.r.l.], who are developing research activities directed toward the optimization of knowledge management services, in collaboration with another company operating in this field. This project is aimed at removing the ontology building bottleneck, long and expensive activity that usually requires the direct collaboration of a domain expert. The possibility of automatic building the ontology, starting from a set of textual documents related to a specific domain, is expected to improve the ability to provide the knowledge management service, both by reducing the time-to-application, and by increasing the number of domains that can be covered. For this project, unsupervised learning methods will be applied in sequence, exploiting the topological properties of the ultra-metric spaces that emerge from the taxonomic structure of the concepts present in the texts, and associative methods will extend the concept network to lateral, non-hierarchical relationships.&lt;br /&gt;
&lt;br /&gt;
|start=before November 30th&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=OntologyFromText.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Philosophy of Artificial Intelligence ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Robotics ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robot games&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive game with robots using commercial devices such as the WII Mote (see the [http://airwiki.elet.polimi.it/mediawiki/index.php/Robogames Robogames page])  &lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Design and implementation of the game on one of the available robots and extension of the robot functionalities&lt;br /&gt;
* Design and implementation of the game and a new suitable robot&lt;br /&gt;
* Evaluation of the game with users (in collaboration with [http://www.elet.polimi.it/people/garzotto Franca Garzotto])&lt;br /&gt;
&lt;br /&gt;
These projects allow to experiment with real mobile robots and real interaction devices.&lt;br /&gt;
&lt;br /&gt;
Parts of these projects can be considered as course projects. These projects can also be extended to cover course projects.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=7.5-20&lt;br /&gt;
|image=Robowii_robot.jpg}}&lt;br /&gt;
&amp;lt;!--==== Soft Computing ====--&amp;gt;&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Analysis_of_the_Olfactory_Signal&amp;diff=5300</id>
		<title>Analysis of the Olfactory Signal</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Analysis_of_the_Olfactory_Signal&amp;diff=5300"/>
				<updated>2009-02-26T10:15:31Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The electronic nose is an instrument able to detect and recognize odors, that is the volatile substances in the atmosphere or emitted by the analyzed substance. This device can react to a gas substance by providing signals that can be analyzed to classify the input. It is composed of a sensor array (MOS sensors, in our case) and a pattern classification process based on machine learning techniques. Each sensor reacts in a different way to the analyzed substance, providing multidimensional data that can be considered as a unique olfactory blueprint of the analyzed substance. In our work, we used an array composed of six Metal Oxide Semiconductor (MOS) sensors.&lt;br /&gt;
&lt;br /&gt;
Nowadays the research on olfactory systems has become very lively, most of all because of the multitude of applications in which it has been successfully used, as the environmental field, the medical one, security (detection of explosives or drug) and so on. The olfactory signal, as well as other signals perceived through human senses, transports much more information than human beings are able to perceive; an electronic nose is an instrument &lt;br /&gt;
that allows to acquire this kind of signal. An electronic nose is composed by an array of electronic chemical sensors with partial speciﬁcity and an appropriate pattern recognition system able to recognize simple or complex odors. &lt;br /&gt;
&lt;br /&gt;
At the moment research on the analysi of the olfactory signal at the AIRLab is focusing on medical applications: clinicians have indeed always considered odor as a fundamental information for the diagnosis, according to the fundamental principle of clinical chemistry, namely the fact that every pathology changes people chemical composition, modifying the concentration of some chemicals in the human body. For this reason an electronic nose could be used to automatically analyze substances produced and emitted from the human body ﬁnding, in a rapid and non invasive way, several diseases. &lt;br /&gt;
 &lt;br /&gt;
An electronic nose consists in three principal components: &lt;br /&gt;
1) Gas Acquisition System &lt;br /&gt;
2) Pre-processing and Dimensionality Reduction &lt;br /&gt;
3) Classiﬁcation Algorithm &lt;br /&gt;
&lt;br /&gt;
The Olfactory Signal Analysis project is in the [[BioSignal_Analysis]] area.&lt;br /&gt;
&lt;br /&gt;
== Ongoing projects ==&lt;br /&gt;
&lt;br /&gt;
* [[Lung Cancer Detection by an Electronic Nose]] (Master thesis, Claudio Trameri, Mauro Verdirosa)&lt;br /&gt;
&lt;br /&gt;
== New projects ==&lt;br /&gt;
There are various proposal for students interested in projects/thesis in the field of the olfactory signal analysis:&lt;br /&gt;
&lt;br /&gt;
*[[Master Level Theses#Analysis of the Olfactory Signal|Master Level Theses]]&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Analysis_of_the_Olfactory_Signal&amp;diff=5299</id>
		<title>Analysis of the Olfactory Signal</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Analysis_of_the_Olfactory_Signal&amp;diff=5299"/>
				<updated>2009-02-26T10:04:39Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;The electronic nose is an instrument able to detect and recognize odors, that is the volatile substances in the atmosphere or emitted by the analyzed substance. This device can react to a gas substance by providing signals that can be analyzed to classify the input. It is composed of a sensor array (MOS sensors, in our case) and a pattern classification process based on machine learning techniques. Each sensor reacts in a different way to the analyzed substance, providing multidimensional data that can be considered as a unique olfactory blueprint of the analyzed substance. In our work, we used an array composed of six Metal Oxide Semiconductor (MOS) sensors.&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Master_Level_Theses&amp;diff=5298</id>
		<title>Master Level Theses</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Master_Level_Theses&amp;diff=5298"/>
				<updated>2009-02-26T10:02:08Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: /* BioSignal Analysis */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you can find proposals for master thesis (20 CFU for each student)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Agents, Multiagent Systems, Agencies ====--&amp;gt;&lt;br /&gt;
==== BioSignal Analysis ====&lt;br /&gt;
&lt;br /&gt;
===== Analysis of the Olfactory Signal =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Computational Intelligence techniques to analyse the olfactory signal acquired by an electronic nose for cancer diagnosis&lt;br /&gt;
|tutor=[[User:AndreaBonarini|Andrea Bonarini]] ([mailto:bonarini@elet.polimi.it email]), [[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucci@elet.polimi.it email]), [[User:RossellaBlatt|Rossella Blatt]] ([mailto:blatt@elet.polimi.it email])&lt;br /&gt;
|description= The electronic nose is an instrument able to detect and recognize odors, that is the volatile substances in the atmosphere or emitted by the analyzed substance. This device can react to a gas substance by providing signals that can be analyzed to classify the input. It is composed of a sensor array (MOS sensors, in our case) and a pattern classification system based on machine learning techniques. Each sensor reacts in a different way to the analyzed substance, providing multidimensional data that can be considered as a unique olfactory blueprint of the analyzed substance. We have already tested the use of the electronic nose as diagnostic tool for lung cancer; boosted from the very satisfactory results that we have achieved by these analysis, we want to investigate the possibility of diagnosing other types of cancer and to improve the current computation intelligence techniques.&lt;br /&gt;
The project is done in collaboration with the Istituto dei Tumori, Milano.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments: Matlab&lt;br /&gt;
&lt;br /&gt;
;Bibliography : BLATT R., BONARINI A, CALABRÒ E, DELLA TORRE M, MATTEUCCI M, PASTORINO U. (2008). Pattern Classification Techniques for Early Lung Cancer Diagnosis using an Electronic Nose. In: Frontiers in Artificial Intelligence and Applications. European Conference on Artificial Intelligence - Prestigious Applications of Intetelligent Systems. Patras, Greece. 21-15 luglio 2008. (vol. 178, pp. 693-697). ISBN/ISSN: 978-1-58603-891-5. IOS Press. [http://www.booksonline.iospress.nl/Content/View.aspx?piid=10054]&lt;br /&gt;
&lt;br /&gt;
|start=Anytime (a new acquisition phase will start in March)&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=Enose.jpg}}&lt;br /&gt;
&lt;br /&gt;
===== Sleep Staging =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Development of a computer-assisted CAP (Sleep cyclic alternating pattern) scoring method&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), Martin Mendez ([mailto:martin.mendez@polimi.it email]), Anna Maria Bianchi ([mailto:annamaria.bianchi@polimi.it email]), Mario Terzano (Ospedale di Parma)&lt;br /&gt;
|description=In 1985, Terzano describes for the first time the Cyclic Alternating Pattern [http://en.wikipedia.org/wiki/Cyclical_alternating_pattern] during sleep and, nowadays, CAP is widely accepted by the medical community as basic analysis of sleep. The CAP evaluation is of fundamental importance since it represents the mechanism developed by the brain evolution to monitor the inner and outer world and to assure the survival during sleep. However, visual detection of CAP in polisomnography (i.e., the standard procedure) is a slow and time-consuming process. This limiting factor generates the necessity of new computer-assisted scoring methods for fast CAP evaluation. This thesis deals with the development of a Decision Support System for CAP scoring based on features extraction at multi-system level (by statistical and signal analysis) and Pattern Recognition or Machine Learning approaches. This may allow the automatic detection of CAP sleep and could be integrated, through reinforcement learning techniques, with the corrections given by physicians.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, C/C++&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: Mario  Terzano, Liborio Parrino. ''Atlas, rules, and recording techniques for the scoring of cyclic alternating pattern (CAP) in human sleep'', Sleep Medicine 2 (2001) 537–553. [http://www.sciencedirect.com/science?_ob=ArticleURL&amp;amp;_udi=B6W6N-44DY2B4-8&amp;amp;_user=2620285&amp;amp;_coverDate=11%2F30%2F2001&amp;amp;_rdoc=1&amp;amp;_fmt=&amp;amp;_orig=search&amp;amp;_sort=d&amp;amp;view=c&amp;amp;_acct=C000058180&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=2620285&amp;amp;md5=aa61a060d005f23f6afed5c1fc2f1126]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=CAP_Sleep_Staging.jpg}}&lt;br /&gt;
&lt;br /&gt;
===== Brain-Computer Interface =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Recognition of the user's focusing on the stimulation matrix&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=A [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300]-based [[Brain-Computer_Interface|BCI]] stimulates the user continuously, and the detection of a P300 designates the choice of the user. When the user is not paying attention to the interface, false positives are likely. The objective of this work is to avoid this problem; the analysis of the electroencephalogram (EEG) over the visual cortex (and possibly an analysis of P300s or of other biosignals) should tell when the user is looking at the interface.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, [http://www.bci2000.org/ BCI2000], C++&lt;br /&gt;
:EEG system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: E. Donchin, K.M. Spencer, R. Wijesinghe. ''The Mental Prosthesis: Assessing the Speed of a P300-Based Brain-Computer Interface'' [http://www.cs.cmu.edu/~tanja/BCI/P300Speed_2000.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=B_p300_speller.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Creation of new EEG training by introduction of noise&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=A [[Brain-Computer Interface|BCI]] must be trained on the individual user in order to be effective.  This training phase require recording data in long sessions, which is time consuming and boring for the user.  The aim of this project is to develop algorithm to create new training EEG (electroencephalography) data from existing ones, so as to speed up the training phase.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
:Knowledge of C++ may be useful&lt;br /&gt;
:EEG system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: J.R. Wolpaw et al. ''Brain-computer interfaces for communication and control'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;journal=13882457&amp;amp;issue=v113i0006&amp;amp;article=767_bifcac&amp;amp;form=pdf&amp;amp;file=file.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=Bci_arch.png}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Real-time removal of ocular artifact from EEG&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=In a [[Brain-Computer Interface|BCI]] based on electroencephalogram (EEG), one of the most important sources of noise is related to ocular movements.  Algorithms have been devised to cancel the effect of such artifacts.  The project consists in the in the implementation in real time of an existing algorithm (or one newly developed) in order to improve the performance of a BCI.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, [http://www.bci2000.org/ BCI2000], C++&lt;br /&gt;
:EEG-system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: J.R. Wolpaw et al. ''Brain-computer interfaces for communication and control'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;journal=13882457&amp;amp;issue=v113i0006&amp;amp;article=767_bifcac&amp;amp;form=pdf&amp;amp;file=file.pdf]&lt;br /&gt;
: R.J. Croff, R.J. Barry. ''Removal of ocular artifact from the EEG: a review'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;issn=09877053&amp;amp;volume=30&amp;amp;issue=1&amp;amp;firstpage=5&amp;amp;form=html]&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=10-20&lt;br /&gt;
|image=B_bci.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Aperiodic visual stimulation in a VEP-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=[http://en.wikipedia.org/wiki/Evoked_potential#Visual_evoked_potential Visual-evoked potentials] (VEPs) are a possible way to drive the a [[Brain-Computer Interface|BCI]]. This projects aims at maximizing the discrimination between different stimuli.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, [http://www.bci2000.org/ BCI2000], C++&lt;br /&gt;
:EEG system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: J.R. Wolpaw et al. ''Brain-computer interfaces for communication and control'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;journal=13882457&amp;amp;issue=v113i0006&amp;amp;article=767_bifcac&amp;amp;form=pdf&amp;amp;file=file.pdf]&lt;br /&gt;
&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=Bci_arch.png}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Driving an autonomous wheelchair with a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=This project pulls together different Airlab projects with the aim to drive an autonomous wheelchair ([[LURCH - The autonomous wheelchair|LURCH]]) with a [[Brain-Computer Interface|BCI]], through the development of key software modules.  The work will be validated with live experiments.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, C, [http://www.bci2000.org/ BCI2000], Matlab&lt;br /&gt;
:Linux&lt;br /&gt;
:EEG system&lt;br /&gt;
:Lurch wheelchair&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: R. Blatt et al. ''Brain Control of a Smart Wheelchair'' [http://www.booksonline.iospress.com/Content/View.aspx?piid=9401]&lt;br /&gt;
|start=November 2008&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=LURCH_wheelchair.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Online automatic tuning of the number of repetitions in a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=In a [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300]-based [[Brain-Computer_Interface|BCI]], (visual) stimuli are presented to the user, and the intention of the user is recognized when a P300 potential is recognized in response of the desired stimulus.  In order to improve accuracy, many stimulation rounds are usually performed before making a decision.  The exact number of repetitions depends on the user and the goodness of the classifier, but it is usually fixed a-priori.  The aim of this project is to adapt the number of repetitions to changing conditions, so as to achieve the maximum accuracy with the minimum time.&lt;br /&gt;
The work will be validated with live experiments.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, [http://www.bci2000.org/ BCI2000], Matlab&lt;br /&gt;
:EEG system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: E. Donchin, K.M. Spencer, R. Wijesinghe. ''The Mental Prosthesis: Assessing the Speed of a P300-Based Brain-Computer Interface'' [http://www.cs.cmu.edu/~tanja/BCI/P300Speed_2000.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=B_p300_speller.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Computer Vision and Image Analysis ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== E-Science ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Machine Learning ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reinforcement Learning in Poker&lt;br /&gt;
|tutor=Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=In this years, Artificial Intelligence research has shifted its attention from fully observable environments such as Chess to more challenging partially observable ones such as Poker.&lt;br /&gt;
&lt;br /&gt;
Up to this moment research in this kind of environments, which can be formalized as Partially Observable Stochastic Games, has been more from a game theoretic point of view, thus focusing on the pursue of optimality and equilibrium, with no attention to payoff maximization, which may be more interesting in many real-world contexts.&lt;br /&gt;
&lt;br /&gt;
On the other hand Reinforcement Learning techniques demonstrated to be successful in solving both fully observable problems, single and multi-agent, and single-agent partially observable ones, while lacking application to the partially observable multi-agent framework.&lt;br /&gt;
&lt;br /&gt;
This research aims at studying the solution of Partially Observable Stochastic Games, analyzing the possibility to combine the Opponent Modeling concept with the well proven Reinforcement Learning solution techniques to solve problems in this framework, adopting Poker as testbed.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20-40&lt;br /&gt;
|image=PokerPRLT.png}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= EyeBot&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it), Alessandro Giusti (giusti-AT-elet-DOT-polimi-DOT-it), and Pierluigi Taddei (taddei-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal bechmark for machine learning techniques. We already organized two successfull competitions based on TORCS where competitors have been asked to develop a controller using their preferred machine learning techniques. So far, the controller developed for TORCS used as input only information extracted directly from the state of the game. The goal of this project is to extend the existing controller API (see [http://cig.dei.polimi.it/ here]) to use the visual information (e.g. the screenshots of the game) as input to the controllers. A successfull project will include both the development of the API and some basic imaga preprocessing to extract information from the images.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=TORCS2.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= SmarTrack&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The generation of customized game content for each player is an attractive direction to improve the game experience in the next-generation computer games. In this scenario, Machine Learning could play an important role to provide automatically such customized game content.&lt;br /&gt;
The goal of this project is to apply machine learning techniques for the generation of customized tracks in&lt;br /&gt;
[http://torcs.sourceforge.net/ TORCS], a state-of-the-art open source racing simulator. The project include different activities: the automatic generation of tracks, the section of relevant features to characterize a track and the analysis of an interest measure.  &lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=TORCS3.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Automatic generation of domain ontologies&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:AndreaBonarini|Andrea Bonarini]] ([mailto:bonarini%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description= This thesis to be developed together with [http://www.noustat.it/ Noustat S.r.l.], who are developing research activities directed toward the optimization of knowledge management services, in collaboration with another company operating in this field. This project is aimed at removing the ontology building bottleneck, long and expensive activity that usually requires the direct collaboration of a domain expert. The possibility of automatic building the ontology, starting from a set of textual documents related to a specific domain, is expected to improve the ability to provide the knowledge management service, both by reducing the time-to-application, and by increasing the number of domains that can be covered. For this project, unsupervised learning methods will be applied in sequence, exploiting the topological properties of the ultra-metric spaces that emerge from the taxonomic structure of the concepts present in the texts, and associative methods will extend the concept network to lateral, non-hierarchical relationships.&lt;br /&gt;
&lt;br /&gt;
|start=before November 30th&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=}}&lt;br /&gt;
&lt;br /&gt;
==== Affective Computing ====&lt;br /&gt;
&lt;br /&gt;
==== Ontologies and Semantic Web ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Automatic generation of domain ontologies&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:AndreaBonarini|Andrea Bonarini]] ([mailto:bonarini%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description= This thesis to be developed together with [http://www.noustat.it/ Noustat S.r.l.], who are developing research activities directed toward the optimization of knowledge management services, in collaboration with another company operating in this field. This project is aimed at removing the ontology building bottleneck, long and expensive activity that usually requires the direct collaboration of a domain expert. The possibility of automatic building the ontology, starting from a set of textual documents related to a specific domain, is expected to improve the ability to provide the knowledge management service, both by reducing the time-to-application, and by increasing the number of domains that can be covered. For this project, unsupervised learning methods will be applied in sequence, exploiting the topological properties of the ultra-metric spaces that emerge from the taxonomic structure of the concepts present in the texts, and associative methods will extend the concept network to lateral, non-hierarchical relationships.&lt;br /&gt;
&lt;br /&gt;
|start=before November 30th&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=OntologyFromText.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Philosophy of Artificial Intelligence ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Robotics ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robot games&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive game with robots using commercial devices such as the WII Mote (see the [http://airwiki.elet.polimi.it/mediawiki/index.php/Robogames Robogames page])  &lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Design and implementation of the game on one of the available robots and extension of the robot functionalities&lt;br /&gt;
* Design and implementation of the game and a new suitable robot&lt;br /&gt;
* Evaluation of the game with users (in collaboration with [http://www.elet.polimi.it/people/garzotto Franca Garzotto])&lt;br /&gt;
&lt;br /&gt;
These projects allow to experiment with real mobile robots and real interaction devices.&lt;br /&gt;
&lt;br /&gt;
Parts of these projects can be considered as course projects. These projects can also be extended to cover course projects.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=7.5-20&lt;br /&gt;
|image=Robowii_robot.jpg}}&lt;br /&gt;
&amp;lt;!--==== Soft Computing ====--&amp;gt;&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Master_Level_Theses&amp;diff=5297</id>
		<title>Master Level Theses</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Master_Level_Theses&amp;diff=5297"/>
				<updated>2009-02-26T10:01:37Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: /* BioSignal Analysis */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you can find proposals for master thesis (20 CFU for each student)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Agents, Multiagent Systems, Agencies ====--&amp;gt;&lt;br /&gt;
==== BioSignal Analysis ====&lt;br /&gt;
&lt;br /&gt;
===== Analysis of the Olfactory Signal =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Computational Intelligence techniques to analyse the olfactory signal acquired by an electronic nose for cancer diagnosis&lt;br /&gt;
|tutor=[[User:AndreaBonarini|Andrea Bonarini]] ([mailto:bonarini@elet.polimi.it] email), [[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucci@elet.polimi.it email]), [[User:RossellaBlatt|Rossella Blatt]] ([mailto:blatt@elet.polimi.it email])&lt;br /&gt;
|description= The electronic nose is an instrument able to detect and recognize odors, that is the volatile substances in the atmosphere or emitted by the analyzed substance. This device can react to a gas substance by providing signals that can be analyzed to classify the input. It is composed of a sensor array (MOS sensors, in our case) and a pattern classification system based on machine learning techniques. Each sensor reacts in a different way to the analyzed substance, providing multidimensional data that can be considered as a unique olfactory blueprint of the analyzed substance. We have already tested the use of the electronic nose as diagnostic tool for lung cancer; boosted from the very satisfactory results that we have achieved by these analysis, we want to investigate the possibility of diagnosing other types of cancer and to improve the current computation intelligence techniques.&lt;br /&gt;
The project is done in collaboration with the Istituto dei Tumori, Milano.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments: Matlab&lt;br /&gt;
&lt;br /&gt;
;Bibliography : BLATT R., BONARINI A, CALABRÒ E, DELLA TORRE M, MATTEUCCI M, PASTORINO U. (2008). Pattern Classification Techniques for Early Lung Cancer Diagnosis using an Electronic Nose. In: Frontiers in Artificial Intelligence and Applications. European Conference on Artificial Intelligence - Prestigious Applications of Intetelligent Systems. Patras, Greece. 21-15 luglio 2008. (vol. 178, pp. 693-697). ISBN/ISSN: 978-1-58603-891-5. IOS Press. [http://www.booksonline.iospress.nl/Content/View.aspx?piid=10054]&lt;br /&gt;
&lt;br /&gt;
|start=Anytime (a new acquisition phase will start in March)&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=Enose.jpg}}&lt;br /&gt;
&lt;br /&gt;
===== Sleep Staging =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Development of a computer-assisted CAP (Sleep cyclic alternating pattern) scoring method&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), Martin Mendez ([mailto:martin.mendez@polimi.it email]), Anna Maria Bianchi ([mailto:annamaria.bianchi@polimi.it email]), Mario Terzano (Ospedale di Parma)&lt;br /&gt;
|description=In 1985, Terzano describes for the first time the Cyclic Alternating Pattern [http://en.wikipedia.org/wiki/Cyclical_alternating_pattern] during sleep and, nowadays, CAP is widely accepted by the medical community as basic analysis of sleep. The CAP evaluation is of fundamental importance since it represents the mechanism developed by the brain evolution to monitor the inner and outer world and to assure the survival during sleep. However, visual detection of CAP in polisomnography (i.e., the standard procedure) is a slow and time-consuming process. This limiting factor generates the necessity of new computer-assisted scoring methods for fast CAP evaluation. This thesis deals with the development of a Decision Support System for CAP scoring based on features extraction at multi-system level (by statistical and signal analysis) and Pattern Recognition or Machine Learning approaches. This may allow the automatic detection of CAP sleep and could be integrated, through reinforcement learning techniques, with the corrections given by physicians.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, C/C++&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: Mario  Terzano, Liborio Parrino. ''Atlas, rules, and recording techniques for the scoring of cyclic alternating pattern (CAP) in human sleep'', Sleep Medicine 2 (2001) 537–553. [http://www.sciencedirect.com/science?_ob=ArticleURL&amp;amp;_udi=B6W6N-44DY2B4-8&amp;amp;_user=2620285&amp;amp;_coverDate=11%2F30%2F2001&amp;amp;_rdoc=1&amp;amp;_fmt=&amp;amp;_orig=search&amp;amp;_sort=d&amp;amp;view=c&amp;amp;_acct=C000058180&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=2620285&amp;amp;md5=aa61a060d005f23f6afed5c1fc2f1126]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=CAP_Sleep_Staging.jpg}}&lt;br /&gt;
&lt;br /&gt;
===== Brain-Computer Interface =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Recognition of the user's focusing on the stimulation matrix&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=A [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300]-based [[Brain-Computer_Interface|BCI]] stimulates the user continuously, and the detection of a P300 designates the choice of the user. When the user is not paying attention to the interface, false positives are likely. The objective of this work is to avoid this problem; the analysis of the electroencephalogram (EEG) over the visual cortex (and possibly an analysis of P300s or of other biosignals) should tell when the user is looking at the interface.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, [http://www.bci2000.org/ BCI2000], C++&lt;br /&gt;
:EEG system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: E. Donchin, K.M. Spencer, R. Wijesinghe. ''The Mental Prosthesis: Assessing the Speed of a P300-Based Brain-Computer Interface'' [http://www.cs.cmu.edu/~tanja/BCI/P300Speed_2000.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=B_p300_speller.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Creation of new EEG training by introduction of noise&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=A [[Brain-Computer Interface|BCI]] must be trained on the individual user in order to be effective.  This training phase require recording data in long sessions, which is time consuming and boring for the user.  The aim of this project is to develop algorithm to create new training EEG (electroencephalography) data from existing ones, so as to speed up the training phase.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
:Knowledge of C++ may be useful&lt;br /&gt;
:EEG system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: J.R. Wolpaw et al. ''Brain-computer interfaces for communication and control'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;journal=13882457&amp;amp;issue=v113i0006&amp;amp;article=767_bifcac&amp;amp;form=pdf&amp;amp;file=file.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=Bci_arch.png}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Real-time removal of ocular artifact from EEG&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=In a [[Brain-Computer Interface|BCI]] based on electroencephalogram (EEG), one of the most important sources of noise is related to ocular movements.  Algorithms have been devised to cancel the effect of such artifacts.  The project consists in the in the implementation in real time of an existing algorithm (or one newly developed) in order to improve the performance of a BCI.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, [http://www.bci2000.org/ BCI2000], C++&lt;br /&gt;
:EEG-system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: J.R. Wolpaw et al. ''Brain-computer interfaces for communication and control'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;journal=13882457&amp;amp;issue=v113i0006&amp;amp;article=767_bifcac&amp;amp;form=pdf&amp;amp;file=file.pdf]&lt;br /&gt;
: R.J. Croff, R.J. Barry. ''Removal of ocular artifact from the EEG: a review'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;issn=09877053&amp;amp;volume=30&amp;amp;issue=1&amp;amp;firstpage=5&amp;amp;form=html]&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=10-20&lt;br /&gt;
|image=B_bci.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Aperiodic visual stimulation in a VEP-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=[http://en.wikipedia.org/wiki/Evoked_potential#Visual_evoked_potential Visual-evoked potentials] (VEPs) are a possible way to drive the a [[Brain-Computer Interface|BCI]]. This projects aims at maximizing the discrimination between different stimuli.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, [http://www.bci2000.org/ BCI2000], C++&lt;br /&gt;
:EEG system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: J.R. Wolpaw et al. ''Brain-computer interfaces for communication and control'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;journal=13882457&amp;amp;issue=v113i0006&amp;amp;article=767_bifcac&amp;amp;form=pdf&amp;amp;file=file.pdf]&lt;br /&gt;
&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=Bci_arch.png}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Driving an autonomous wheelchair with a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=This project pulls together different Airlab projects with the aim to drive an autonomous wheelchair ([[LURCH - The autonomous wheelchair|LURCH]]) with a [[Brain-Computer Interface|BCI]], through the development of key software modules.  The work will be validated with live experiments.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, C, [http://www.bci2000.org/ BCI2000], Matlab&lt;br /&gt;
:Linux&lt;br /&gt;
:EEG system&lt;br /&gt;
:Lurch wheelchair&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: R. Blatt et al. ''Brain Control of a Smart Wheelchair'' [http://www.booksonline.iospress.com/Content/View.aspx?piid=9401]&lt;br /&gt;
|start=November 2008&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=LURCH_wheelchair.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Online automatic tuning of the number of repetitions in a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=In a [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300]-based [[Brain-Computer_Interface|BCI]], (visual) stimuli are presented to the user, and the intention of the user is recognized when a P300 potential is recognized in response of the desired stimulus.  In order to improve accuracy, many stimulation rounds are usually performed before making a decision.  The exact number of repetitions depends on the user and the goodness of the classifier, but it is usually fixed a-priori.  The aim of this project is to adapt the number of repetitions to changing conditions, so as to achieve the maximum accuracy with the minimum time.&lt;br /&gt;
The work will be validated with live experiments.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, [http://www.bci2000.org/ BCI2000], Matlab&lt;br /&gt;
:EEG system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: E. Donchin, K.M. Spencer, R. Wijesinghe. ''The Mental Prosthesis: Assessing the Speed of a P300-Based Brain-Computer Interface'' [http://www.cs.cmu.edu/~tanja/BCI/P300Speed_2000.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=B_p300_speller.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Computer Vision and Image Analysis ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== E-Science ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Machine Learning ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reinforcement Learning in Poker&lt;br /&gt;
|tutor=Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=In this years, Artificial Intelligence research has shifted its attention from fully observable environments such as Chess to more challenging partially observable ones such as Poker.&lt;br /&gt;
&lt;br /&gt;
Up to this moment research in this kind of environments, which can be formalized as Partially Observable Stochastic Games, has been more from a game theoretic point of view, thus focusing on the pursue of optimality and equilibrium, with no attention to payoff maximization, which may be more interesting in many real-world contexts.&lt;br /&gt;
&lt;br /&gt;
On the other hand Reinforcement Learning techniques demonstrated to be successful in solving both fully observable problems, single and multi-agent, and single-agent partially observable ones, while lacking application to the partially observable multi-agent framework.&lt;br /&gt;
&lt;br /&gt;
This research aims at studying the solution of Partially Observable Stochastic Games, analyzing the possibility to combine the Opponent Modeling concept with the well proven Reinforcement Learning solution techniques to solve problems in this framework, adopting Poker as testbed.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20-40&lt;br /&gt;
|image=PokerPRLT.png}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= EyeBot&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it), Alessandro Giusti (giusti-AT-elet-DOT-polimi-DOT-it), and Pierluigi Taddei (taddei-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal bechmark for machine learning techniques. We already organized two successfull competitions based on TORCS where competitors have been asked to develop a controller using their preferred machine learning techniques. So far, the controller developed for TORCS used as input only information extracted directly from the state of the game. The goal of this project is to extend the existing controller API (see [http://cig.dei.polimi.it/ here]) to use the visual information (e.g. the screenshots of the game) as input to the controllers. A successfull project will include both the development of the API and some basic imaga preprocessing to extract information from the images.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=TORCS2.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= SmarTrack&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The generation of customized game content for each player is an attractive direction to improve the game experience in the next-generation computer games. In this scenario, Machine Learning could play an important role to provide automatically such customized game content.&lt;br /&gt;
The goal of this project is to apply machine learning techniques for the generation of customized tracks in&lt;br /&gt;
[http://torcs.sourceforge.net/ TORCS], a state-of-the-art open source racing simulator. The project include different activities: the automatic generation of tracks, the section of relevant features to characterize a track and the analysis of an interest measure.  &lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=TORCS3.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Automatic generation of domain ontologies&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:AndreaBonarini|Andrea Bonarini]] ([mailto:bonarini%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description= This thesis to be developed together with [http://www.noustat.it/ Noustat S.r.l.], who are developing research activities directed toward the optimization of knowledge management services, in collaboration with another company operating in this field. This project is aimed at removing the ontology building bottleneck, long and expensive activity that usually requires the direct collaboration of a domain expert. The possibility of automatic building the ontology, starting from a set of textual documents related to a specific domain, is expected to improve the ability to provide the knowledge management service, both by reducing the time-to-application, and by increasing the number of domains that can be covered. For this project, unsupervised learning methods will be applied in sequence, exploiting the topological properties of the ultra-metric spaces that emerge from the taxonomic structure of the concepts present in the texts, and associative methods will extend the concept network to lateral, non-hierarchical relationships.&lt;br /&gt;
&lt;br /&gt;
|start=before November 30th&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=}}&lt;br /&gt;
&lt;br /&gt;
==== Affective Computing ====&lt;br /&gt;
&lt;br /&gt;
==== Ontologies and Semantic Web ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Automatic generation of domain ontologies&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:AndreaBonarini|Andrea Bonarini]] ([mailto:bonarini%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description= This thesis to be developed together with [http://www.noustat.it/ Noustat S.r.l.], who are developing research activities directed toward the optimization of knowledge management services, in collaboration with another company operating in this field. This project is aimed at removing the ontology building bottleneck, long and expensive activity that usually requires the direct collaboration of a domain expert. The possibility of automatic building the ontology, starting from a set of textual documents related to a specific domain, is expected to improve the ability to provide the knowledge management service, both by reducing the time-to-application, and by increasing the number of domains that can be covered. For this project, unsupervised learning methods will be applied in sequence, exploiting the topological properties of the ultra-metric spaces that emerge from the taxonomic structure of the concepts present in the texts, and associative methods will extend the concept network to lateral, non-hierarchical relationships.&lt;br /&gt;
&lt;br /&gt;
|start=before November 30th&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=OntologyFromText.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Philosophy of Artificial Intelligence ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Robotics ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robot games&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive game with robots using commercial devices such as the WII Mote (see the [http://airwiki.elet.polimi.it/mediawiki/index.php/Robogames Robogames page])  &lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Design and implementation of the game on one of the available robots and extension of the robot functionalities&lt;br /&gt;
* Design and implementation of the game and a new suitable robot&lt;br /&gt;
* Evaluation of the game with users (in collaboration with [http://www.elet.polimi.it/people/garzotto Franca Garzotto])&lt;br /&gt;
&lt;br /&gt;
These projects allow to experiment with real mobile robots and real interaction devices.&lt;br /&gt;
&lt;br /&gt;
Parts of these projects can be considered as course projects. These projects can also be extended to cover course projects.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=7.5-20&lt;br /&gt;
|image=Robowii_robot.jpg}}&lt;br /&gt;
&amp;lt;!--==== Soft Computing ====--&amp;gt;&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Master_Level_Theses&amp;diff=5296</id>
		<title>Master Level Theses</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Master_Level_Theses&amp;diff=5296"/>
				<updated>2009-02-26T10:01:01Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: /* BioSignal Analysis */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you can find proposals for master thesis (20 CFU for each student)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Agents, Multiagent Systems, Agencies ====--&amp;gt;&lt;br /&gt;
==== BioSignal Analysis ====&lt;br /&gt;
&lt;br /&gt;
===== Analysis of the Olfactory Signal =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Computational Intelligence techniques to analyse the olfactory signal acquired by an electronic nose for cancer diagnosis&lt;br /&gt;
|tutor=[[User:AndreaBonarini|Andrea Bonarini]] ([mailto:bonarini@elet.polimi.it]), [[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucci@elet.polimi.it email]), [[User:RossellaBlatt|Rossella Blatt email]] ([mailto:blatt@elet.polimi.it email])&lt;br /&gt;
|description= The electronic nose is an instrument able to detect and recognize odors, that is the volatile substances in the atmosphere or emitted by the analyzed substance. This device can react to a gas substance by providing signals that can be analyzed to classify the input. It is composed of a sensor array (MOS sensors, in our case) and a pattern classification system based on machine learning techniques. Each sensor reacts in a different way to the analyzed substance, providing multidimensional data that can be considered as a unique olfactory blueprint of the analyzed substance. We have already tested the use of the electronic nose as diagnostic tool for lung cancer; boosted from the very satisfactory results that we have achieved by these analysis, we want to investigate the possibility of diagnosing other types of cancer and to improve the current computation intelligence techniques.&lt;br /&gt;
The project is done in collaboration with the Istituto dei Tumori, Milano.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments: Matlab&lt;br /&gt;
&lt;br /&gt;
;Bibliography : BLATT R., BONARINI A, CALABRÒ E, DELLA TORRE M, MATTEUCCI M, PASTORINO U. (2008). Pattern Classification Techniques for Early Lung Cancer Diagnosis using an Electronic Nose. In: Frontiers in Artificial Intelligence and Applications. European Conference on Artificial Intelligence - Prestigious Applications of Intetelligent Systems. Patras, Greece. 21-15 luglio 2008. (vol. 178, pp. 693-697). ISBN/ISSN: 978-1-58603-891-5. IOS Press. [http://www.booksonline.iospress.nl/Content/View.aspx?piid=10054]&lt;br /&gt;
&lt;br /&gt;
|start=Anytime (a new acquisition phase will start in March)&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=Enose.jpg}}&lt;br /&gt;
&lt;br /&gt;
===== Sleep Staging =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Development of a computer-assisted CAP (Sleep cyclic alternating pattern) scoring method&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), Martin Mendez ([mailto:martin.mendez@polimi.it email]), Anna Maria Bianchi ([mailto:annamaria.bianchi@polimi.it email]), Mario Terzano (Ospedale di Parma)&lt;br /&gt;
|description=In 1985, Terzano describes for the first time the Cyclic Alternating Pattern [http://en.wikipedia.org/wiki/Cyclical_alternating_pattern] during sleep and, nowadays, CAP is widely accepted by the medical community as basic analysis of sleep. The CAP evaluation is of fundamental importance since it represents the mechanism developed by the brain evolution to monitor the inner and outer world and to assure the survival during sleep. However, visual detection of CAP in polisomnography (i.e., the standard procedure) is a slow and time-consuming process. This limiting factor generates the necessity of new computer-assisted scoring methods for fast CAP evaluation. This thesis deals with the development of a Decision Support System for CAP scoring based on features extraction at multi-system level (by statistical and signal analysis) and Pattern Recognition or Machine Learning approaches. This may allow the automatic detection of CAP sleep and could be integrated, through reinforcement learning techniques, with the corrections given by physicians.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, C/C++&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: Mario  Terzano, Liborio Parrino. ''Atlas, rules, and recording techniques for the scoring of cyclic alternating pattern (CAP) in human sleep'', Sleep Medicine 2 (2001) 537–553. [http://www.sciencedirect.com/science?_ob=ArticleURL&amp;amp;_udi=B6W6N-44DY2B4-8&amp;amp;_user=2620285&amp;amp;_coverDate=11%2F30%2F2001&amp;amp;_rdoc=1&amp;amp;_fmt=&amp;amp;_orig=search&amp;amp;_sort=d&amp;amp;view=c&amp;amp;_acct=C000058180&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=2620285&amp;amp;md5=aa61a060d005f23f6afed5c1fc2f1126]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=CAP_Sleep_Staging.jpg}}&lt;br /&gt;
&lt;br /&gt;
===== Brain-Computer Interface =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Recognition of the user's focusing on the stimulation matrix&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=A [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300]-based [[Brain-Computer_Interface|BCI]] stimulates the user continuously, and the detection of a P300 designates the choice of the user. When the user is not paying attention to the interface, false positives are likely. The objective of this work is to avoid this problem; the analysis of the electroencephalogram (EEG) over the visual cortex (and possibly an analysis of P300s or of other biosignals) should tell when the user is looking at the interface.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, [http://www.bci2000.org/ BCI2000], C++&lt;br /&gt;
:EEG system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: E. Donchin, K.M. Spencer, R. Wijesinghe. ''The Mental Prosthesis: Assessing the Speed of a P300-Based Brain-Computer Interface'' [http://www.cs.cmu.edu/~tanja/BCI/P300Speed_2000.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=B_p300_speller.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Creation of new EEG training by introduction of noise&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=A [[Brain-Computer Interface|BCI]] must be trained on the individual user in order to be effective.  This training phase require recording data in long sessions, which is time consuming and boring for the user.  The aim of this project is to develop algorithm to create new training EEG (electroencephalography) data from existing ones, so as to speed up the training phase.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
:Knowledge of C++ may be useful&lt;br /&gt;
:EEG system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: J.R. Wolpaw et al. ''Brain-computer interfaces for communication and control'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;journal=13882457&amp;amp;issue=v113i0006&amp;amp;article=767_bifcac&amp;amp;form=pdf&amp;amp;file=file.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=Bci_arch.png}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Real-time removal of ocular artifact from EEG&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=In a [[Brain-Computer Interface|BCI]] based on electroencephalogram (EEG), one of the most important sources of noise is related to ocular movements.  Algorithms have been devised to cancel the effect of such artifacts.  The project consists in the in the implementation in real time of an existing algorithm (or one newly developed) in order to improve the performance of a BCI.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, [http://www.bci2000.org/ BCI2000], C++&lt;br /&gt;
:EEG-system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: J.R. Wolpaw et al. ''Brain-computer interfaces for communication and control'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;journal=13882457&amp;amp;issue=v113i0006&amp;amp;article=767_bifcac&amp;amp;form=pdf&amp;amp;file=file.pdf]&lt;br /&gt;
: R.J. Croff, R.J. Barry. ''Removal of ocular artifact from the EEG: a review'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;issn=09877053&amp;amp;volume=30&amp;amp;issue=1&amp;amp;firstpage=5&amp;amp;form=html]&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=10-20&lt;br /&gt;
|image=B_bci.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Aperiodic visual stimulation in a VEP-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=[http://en.wikipedia.org/wiki/Evoked_potential#Visual_evoked_potential Visual-evoked potentials] (VEPs) are a possible way to drive the a [[Brain-Computer Interface|BCI]]. This projects aims at maximizing the discrimination between different stimuli.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, [http://www.bci2000.org/ BCI2000], C++&lt;br /&gt;
:EEG system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: J.R. Wolpaw et al. ''Brain-computer interfaces for communication and control'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;journal=13882457&amp;amp;issue=v113i0006&amp;amp;article=767_bifcac&amp;amp;form=pdf&amp;amp;file=file.pdf]&lt;br /&gt;
&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=Bci_arch.png}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Driving an autonomous wheelchair with a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=This project pulls together different Airlab projects with the aim to drive an autonomous wheelchair ([[LURCH - The autonomous wheelchair|LURCH]]) with a [[Brain-Computer Interface|BCI]], through the development of key software modules.  The work will be validated with live experiments.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, C, [http://www.bci2000.org/ BCI2000], Matlab&lt;br /&gt;
:Linux&lt;br /&gt;
:EEG system&lt;br /&gt;
:Lurch wheelchair&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: R. Blatt et al. ''Brain Control of a Smart Wheelchair'' [http://www.booksonline.iospress.com/Content/View.aspx?piid=9401]&lt;br /&gt;
|start=November 2008&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=LURCH_wheelchair.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Online automatic tuning of the number of repetitions in a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=In a [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300]-based [[Brain-Computer_Interface|BCI]], (visual) stimuli are presented to the user, and the intention of the user is recognized when a P300 potential is recognized in response of the desired stimulus.  In order to improve accuracy, many stimulation rounds are usually performed before making a decision.  The exact number of repetitions depends on the user and the goodness of the classifier, but it is usually fixed a-priori.  The aim of this project is to adapt the number of repetitions to changing conditions, so as to achieve the maximum accuracy with the minimum time.&lt;br /&gt;
The work will be validated with live experiments.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, [http://www.bci2000.org/ BCI2000], Matlab&lt;br /&gt;
:EEG system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: E. Donchin, K.M. Spencer, R. Wijesinghe. ''The Mental Prosthesis: Assessing the Speed of a P300-Based Brain-Computer Interface'' [http://www.cs.cmu.edu/~tanja/BCI/P300Speed_2000.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=B_p300_speller.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Computer Vision and Image Analysis ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== E-Science ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Machine Learning ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reinforcement Learning in Poker&lt;br /&gt;
|tutor=Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=In this years, Artificial Intelligence research has shifted its attention from fully observable environments such as Chess to more challenging partially observable ones such as Poker.&lt;br /&gt;
&lt;br /&gt;
Up to this moment research in this kind of environments, which can be formalized as Partially Observable Stochastic Games, has been more from a game theoretic point of view, thus focusing on the pursue of optimality and equilibrium, with no attention to payoff maximization, which may be more interesting in many real-world contexts.&lt;br /&gt;
&lt;br /&gt;
On the other hand Reinforcement Learning techniques demonstrated to be successful in solving both fully observable problems, single and multi-agent, and single-agent partially observable ones, while lacking application to the partially observable multi-agent framework.&lt;br /&gt;
&lt;br /&gt;
This research aims at studying the solution of Partially Observable Stochastic Games, analyzing the possibility to combine the Opponent Modeling concept with the well proven Reinforcement Learning solution techniques to solve problems in this framework, adopting Poker as testbed.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20-40&lt;br /&gt;
|image=PokerPRLT.png}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= EyeBot&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it), Alessandro Giusti (giusti-AT-elet-DOT-polimi-DOT-it), and Pierluigi Taddei (taddei-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal bechmark for machine learning techniques. We already organized two successfull competitions based on TORCS where competitors have been asked to develop a controller using their preferred machine learning techniques. So far, the controller developed for TORCS used as input only information extracted directly from the state of the game. The goal of this project is to extend the existing controller API (see [http://cig.dei.polimi.it/ here]) to use the visual information (e.g. the screenshots of the game) as input to the controllers. A successfull project will include both the development of the API and some basic imaga preprocessing to extract information from the images.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=TORCS2.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= SmarTrack&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The generation of customized game content for each player is an attractive direction to improve the game experience in the next-generation computer games. In this scenario, Machine Learning could play an important role to provide automatically such customized game content.&lt;br /&gt;
The goal of this project is to apply machine learning techniques for the generation of customized tracks in&lt;br /&gt;
[http://torcs.sourceforge.net/ TORCS], a state-of-the-art open source racing simulator. The project include different activities: the automatic generation of tracks, the section of relevant features to characterize a track and the analysis of an interest measure.  &lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=TORCS3.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Automatic generation of domain ontologies&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:AndreaBonarini|Andrea Bonarini]] ([mailto:bonarini%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description= This thesis to be developed together with [http://www.noustat.it/ Noustat S.r.l.], who are developing research activities directed toward the optimization of knowledge management services, in collaboration with another company operating in this field. This project is aimed at removing the ontology building bottleneck, long and expensive activity that usually requires the direct collaboration of a domain expert. The possibility of automatic building the ontology, starting from a set of textual documents related to a specific domain, is expected to improve the ability to provide the knowledge management service, both by reducing the time-to-application, and by increasing the number of domains that can be covered. For this project, unsupervised learning methods will be applied in sequence, exploiting the topological properties of the ultra-metric spaces that emerge from the taxonomic structure of the concepts present in the texts, and associative methods will extend the concept network to lateral, non-hierarchical relationships.&lt;br /&gt;
&lt;br /&gt;
|start=before November 30th&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=}}&lt;br /&gt;
&lt;br /&gt;
==== Affective Computing ====&lt;br /&gt;
&lt;br /&gt;
==== Ontologies and Semantic Web ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Automatic generation of domain ontologies&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:AndreaBonarini|Andrea Bonarini]] ([mailto:bonarini%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description= This thesis to be developed together with [http://www.noustat.it/ Noustat S.r.l.], who are developing research activities directed toward the optimization of knowledge management services, in collaboration with another company operating in this field. This project is aimed at removing the ontology building bottleneck, long and expensive activity that usually requires the direct collaboration of a domain expert. The possibility of automatic building the ontology, starting from a set of textual documents related to a specific domain, is expected to improve the ability to provide the knowledge management service, both by reducing the time-to-application, and by increasing the number of domains that can be covered. For this project, unsupervised learning methods will be applied in sequence, exploiting the topological properties of the ultra-metric spaces that emerge from the taxonomic structure of the concepts present in the texts, and associative methods will extend the concept network to lateral, non-hierarchical relationships.&lt;br /&gt;
&lt;br /&gt;
|start=before November 30th&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=OntologyFromText.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Philosophy of Artificial Intelligence ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Robotics ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robot games&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive game with robots using commercial devices such as the WII Mote (see the [http://airwiki.elet.polimi.it/mediawiki/index.php/Robogames Robogames page])  &lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Design and implementation of the game on one of the available robots and extension of the robot functionalities&lt;br /&gt;
* Design and implementation of the game and a new suitable robot&lt;br /&gt;
* Evaluation of the game with users (in collaboration with [http://www.elet.polimi.it/people/garzotto Franca Garzotto])&lt;br /&gt;
&lt;br /&gt;
These projects allow to experiment with real mobile robots and real interaction devices.&lt;br /&gt;
&lt;br /&gt;
Parts of these projects can be considered as course projects. These projects can also be extended to cover course projects.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=7.5-20&lt;br /&gt;
|image=Robowii_robot.jpg}}&lt;br /&gt;
&amp;lt;!--==== Soft Computing ====--&amp;gt;&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Master_Level_Theses&amp;diff=5295</id>
		<title>Master Level Theses</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Master_Level_Theses&amp;diff=5295"/>
				<updated>2009-02-26T10:00:11Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: /* Analysis of the Olfactory Signal */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you can find proposals for master thesis (20 CFU for each student)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Agents, Multiagent Systems, Agencies ====--&amp;gt;&lt;br /&gt;
==== BioSignal Analysis ====&lt;br /&gt;
&lt;br /&gt;
===== Analysis of the Olfactory Signal =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Computational Intelligence techniques to analyse the olfactory signal acquired by an electronic nose for cancer diagnosis&lt;br /&gt;
|tutor=[[User:AndreaBonarini|Andrea Bonarini]] ([mailto:bonarini@elet.polimi.it]), [[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucci@elet.polimi.it email]), [[User:RossellaBlatt|Rossella Blatt email]] ([mailto:blatt@elet.polimi.it email])&lt;br /&gt;
|description= The electronic nose is an instrument able to detect and recognize odors, that is the volatile substances in the atmosphere or emitted by the analyzed substance. This device can react to a gas substance by providing signals that can be analyzed to classify the input. It is composed of a sensor array (MOS sensors, in our case) and a pattern classification system based on machine learning techniques. Each sensor reacts in a different way to the analyzed substance, providing multidimensional data that can be considered as a unique olfactory blueprint of the analyzed substance. We have already tested the use of the electronic nose as diagnostic tool for lung cancer; boosted from the very satisfactory results that we have achieved by these analysis, we want to investigate the possibility of diagnosing other types of cancer and to improve the current computation intelligence techniques.&lt;br /&gt;
The project is done in collaboration with the Istituto dei Tumori, Milano.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments: Matlab&lt;br /&gt;
&lt;br /&gt;
;Bibliography : BLATT R., BONARINI A, CALABRÒ E, DELLA TORRE M, MATTEUCCI M, PASTORINO U. (2008). Pattern Classification Techniques for Early Lung Cancer Diagnosis using an Electronic Nose. In: Frontiers in Artificial Intelligence and Applications. European Conference on Artificial Intelligence - Prestigious Applications of Intetelligent Systems. Patras, Greece. 21-15 luglio 2008. (vol. 178, pp. 693-697). ISBN/ISSN: 978-1-58603-891-5. IOS Press. [http://www.booksonline.iospress.nl/Content/View.aspx?piid=10054]&lt;br /&gt;
&lt;br /&gt;
|start=Anytime (a new acquisition phase will start in March)&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|}}&lt;br /&gt;
&lt;br /&gt;
===== Sleep Staging =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Development of a computer-assisted CAP (Sleep cyclic alternating pattern) scoring method&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), Martin Mendez ([mailto:martin.mendez@polimi.it email]), Anna Maria Bianchi ([mailto:annamaria.bianchi@polimi.it email]), Mario Terzano (Ospedale di Parma)&lt;br /&gt;
|description=In 1985, Terzano describes for the first time the Cyclic Alternating Pattern [http://en.wikipedia.org/wiki/Cyclical_alternating_pattern] during sleep and, nowadays, CAP is widely accepted by the medical community as basic analysis of sleep. The CAP evaluation is of fundamental importance since it represents the mechanism developed by the brain evolution to monitor the inner and outer world and to assure the survival during sleep. However, visual detection of CAP in polisomnography (i.e., the standard procedure) is a slow and time-consuming process. This limiting factor generates the necessity of new computer-assisted scoring methods for fast CAP evaluation. This thesis deals with the development of a Decision Support System for CAP scoring based on features extraction at multi-system level (by statistical and signal analysis) and Pattern Recognition or Machine Learning approaches. This may allow the automatic detection of CAP sleep and could be integrated, through reinforcement learning techniques, with the corrections given by physicians.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, C/C++&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: Mario  Terzano, Liborio Parrino. ''Atlas, rules, and recording techniques for the scoring of cyclic alternating pattern (CAP) in human sleep'', Sleep Medicine 2 (2001) 537–553. [http://www.sciencedirect.com/science?_ob=ArticleURL&amp;amp;_udi=B6W6N-44DY2B4-8&amp;amp;_user=2620285&amp;amp;_coverDate=11%2F30%2F2001&amp;amp;_rdoc=1&amp;amp;_fmt=&amp;amp;_orig=search&amp;amp;_sort=d&amp;amp;view=c&amp;amp;_acct=C000058180&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=2620285&amp;amp;md5=aa61a060d005f23f6afed5c1fc2f1126]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=CAP_Sleep_Staging.jpg}}&lt;br /&gt;
&lt;br /&gt;
===== Brain-Computer Interface =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Recognition of the user's focusing on the stimulation matrix&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=A [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300]-based [[Brain-Computer_Interface|BCI]] stimulates the user continuously, and the detection of a P300 designates the choice of the user. When the user is not paying attention to the interface, false positives are likely. The objective of this work is to avoid this problem; the analysis of the electroencephalogram (EEG) over the visual cortex (and possibly an analysis of P300s or of other biosignals) should tell when the user is looking at the interface.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, [http://www.bci2000.org/ BCI2000], C++&lt;br /&gt;
:EEG system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: E. Donchin, K.M. Spencer, R. Wijesinghe. ''The Mental Prosthesis: Assessing the Speed of a P300-Based Brain-Computer Interface'' [http://www.cs.cmu.edu/~tanja/BCI/P300Speed_2000.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=B_p300_speller.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Creation of new EEG training by introduction of noise&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=A [[Brain-Computer Interface|BCI]] must be trained on the individual user in order to be effective.  This training phase require recording data in long sessions, which is time consuming and boring for the user.  The aim of this project is to develop algorithm to create new training EEG (electroencephalography) data from existing ones, so as to speed up the training phase.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
:Knowledge of C++ may be useful&lt;br /&gt;
:EEG system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: J.R. Wolpaw et al. ''Brain-computer interfaces for communication and control'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;journal=13882457&amp;amp;issue=v113i0006&amp;amp;article=767_bifcac&amp;amp;form=pdf&amp;amp;file=file.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=Bci_arch.png}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Real-time removal of ocular artifact from EEG&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=In a [[Brain-Computer Interface|BCI]] based on electroencephalogram (EEG), one of the most important sources of noise is related to ocular movements.  Algorithms have been devised to cancel the effect of such artifacts.  The project consists in the in the implementation in real time of an existing algorithm (or one newly developed) in order to improve the performance of a BCI.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, [http://www.bci2000.org/ BCI2000], C++&lt;br /&gt;
:EEG-system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: J.R. Wolpaw et al. ''Brain-computer interfaces for communication and control'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;journal=13882457&amp;amp;issue=v113i0006&amp;amp;article=767_bifcac&amp;amp;form=pdf&amp;amp;file=file.pdf]&lt;br /&gt;
: R.J. Croff, R.J. Barry. ''Removal of ocular artifact from the EEG: a review'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;issn=09877053&amp;amp;volume=30&amp;amp;issue=1&amp;amp;firstpage=5&amp;amp;form=html]&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=10-20&lt;br /&gt;
|image=B_bci.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Aperiodic visual stimulation in a VEP-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=[http://en.wikipedia.org/wiki/Evoked_potential#Visual_evoked_potential Visual-evoked potentials] (VEPs) are a possible way to drive the a [[Brain-Computer Interface|BCI]]. This projects aims at maximizing the discrimination between different stimuli.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, [http://www.bci2000.org/ BCI2000], C++&lt;br /&gt;
:EEG system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: J.R. Wolpaw et al. ''Brain-computer interfaces for communication and control'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;journal=13882457&amp;amp;issue=v113i0006&amp;amp;article=767_bifcac&amp;amp;form=pdf&amp;amp;file=file.pdf]&lt;br /&gt;
&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=Bci_arch.png}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Driving an autonomous wheelchair with a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=This project pulls together different Airlab projects with the aim to drive an autonomous wheelchair ([[LURCH - The autonomous wheelchair|LURCH]]) with a [[Brain-Computer Interface|BCI]], through the development of key software modules.  The work will be validated with live experiments.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, C, [http://www.bci2000.org/ BCI2000], Matlab&lt;br /&gt;
:Linux&lt;br /&gt;
:EEG system&lt;br /&gt;
:Lurch wheelchair&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: R. Blatt et al. ''Brain Control of a Smart Wheelchair'' [http://www.booksonline.iospress.com/Content/View.aspx?piid=9401]&lt;br /&gt;
|start=November 2008&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=LURCH_wheelchair.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Online automatic tuning of the number of repetitions in a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=In a [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300]-based [[Brain-Computer_Interface|BCI]], (visual) stimuli are presented to the user, and the intention of the user is recognized when a P300 potential is recognized in response of the desired stimulus.  In order to improve accuracy, many stimulation rounds are usually performed before making a decision.  The exact number of repetitions depends on the user and the goodness of the classifier, but it is usually fixed a-priori.  The aim of this project is to adapt the number of repetitions to changing conditions, so as to achieve the maximum accuracy with the minimum time.&lt;br /&gt;
The work will be validated with live experiments.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, [http://www.bci2000.org/ BCI2000], Matlab&lt;br /&gt;
:EEG system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: E. Donchin, K.M. Spencer, R. Wijesinghe. ''The Mental Prosthesis: Assessing the Speed of a P300-Based Brain-Computer Interface'' [http://www.cs.cmu.edu/~tanja/BCI/P300Speed_2000.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=B_p300_speller.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Computer Vision and Image Analysis ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== E-Science ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Machine Learning ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reinforcement Learning in Poker&lt;br /&gt;
|tutor=Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=In this years, Artificial Intelligence research has shifted its attention from fully observable environments such as Chess to more challenging partially observable ones such as Poker.&lt;br /&gt;
&lt;br /&gt;
Up to this moment research in this kind of environments, which can be formalized as Partially Observable Stochastic Games, has been more from a game theoretic point of view, thus focusing on the pursue of optimality and equilibrium, with no attention to payoff maximization, which may be more interesting in many real-world contexts.&lt;br /&gt;
&lt;br /&gt;
On the other hand Reinforcement Learning techniques demonstrated to be successful in solving both fully observable problems, single and multi-agent, and single-agent partially observable ones, while lacking application to the partially observable multi-agent framework.&lt;br /&gt;
&lt;br /&gt;
This research aims at studying the solution of Partially Observable Stochastic Games, analyzing the possibility to combine the Opponent Modeling concept with the well proven Reinforcement Learning solution techniques to solve problems in this framework, adopting Poker as testbed.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20-40&lt;br /&gt;
|image=PokerPRLT.png}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= EyeBot&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it), Alessandro Giusti (giusti-AT-elet-DOT-polimi-DOT-it), and Pierluigi Taddei (taddei-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal bechmark for machine learning techniques. We already organized two successfull competitions based on TORCS where competitors have been asked to develop a controller using their preferred machine learning techniques. So far, the controller developed for TORCS used as input only information extracted directly from the state of the game. The goal of this project is to extend the existing controller API (see [http://cig.dei.polimi.it/ here]) to use the visual information (e.g. the screenshots of the game) as input to the controllers. A successfull project will include both the development of the API and some basic imaga preprocessing to extract information from the images.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=TORCS2.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= SmarTrack&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The generation of customized game content for each player is an attractive direction to improve the game experience in the next-generation computer games. In this scenario, Machine Learning could play an important role to provide automatically such customized game content.&lt;br /&gt;
The goal of this project is to apply machine learning techniques for the generation of customized tracks in&lt;br /&gt;
[http://torcs.sourceforge.net/ TORCS], a state-of-the-art open source racing simulator. The project include different activities: the automatic generation of tracks, the section of relevant features to characterize a track and the analysis of an interest measure.  &lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=TORCS3.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Automatic generation of domain ontologies&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:AndreaBonarini|Andrea Bonarini]] ([mailto:bonarini%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description= This thesis to be developed together with [http://www.noustat.it/ Noustat S.r.l.], who are developing research activities directed toward the optimization of knowledge management services, in collaboration with another company operating in this field. This project is aimed at removing the ontology building bottleneck, long and expensive activity that usually requires the direct collaboration of a domain expert. The possibility of automatic building the ontology, starting from a set of textual documents related to a specific domain, is expected to improve the ability to provide the knowledge management service, both by reducing the time-to-application, and by increasing the number of domains that can be covered. For this project, unsupervised learning methods will be applied in sequence, exploiting the topological properties of the ultra-metric spaces that emerge from the taxonomic structure of the concepts present in the texts, and associative methods will extend the concept network to lateral, non-hierarchical relationships.&lt;br /&gt;
&lt;br /&gt;
|start=before November 30th&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=}}&lt;br /&gt;
&lt;br /&gt;
==== Affective Computing ====&lt;br /&gt;
&lt;br /&gt;
==== Ontologies and Semantic Web ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Automatic generation of domain ontologies&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:AndreaBonarini|Andrea Bonarini]] ([mailto:bonarini%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description= This thesis to be developed together with [http://www.noustat.it/ Noustat S.r.l.], who are developing research activities directed toward the optimization of knowledge management services, in collaboration with another company operating in this field. This project is aimed at removing the ontology building bottleneck, long and expensive activity that usually requires the direct collaboration of a domain expert. The possibility of automatic building the ontology, starting from a set of textual documents related to a specific domain, is expected to improve the ability to provide the knowledge management service, both by reducing the time-to-application, and by increasing the number of domains that can be covered. For this project, unsupervised learning methods will be applied in sequence, exploiting the topological properties of the ultra-metric spaces that emerge from the taxonomic structure of the concepts present in the texts, and associative methods will extend the concept network to lateral, non-hierarchical relationships.&lt;br /&gt;
&lt;br /&gt;
|start=before November 30th&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=OntologyFromText.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Philosophy of Artificial Intelligence ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Robotics ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robot games&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive game with robots using commercial devices such as the WII Mote (see the [http://airwiki.elet.polimi.it/mediawiki/index.php/Robogames Robogames page])  &lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Design and implementation of the game on one of the available robots and extension of the robot functionalities&lt;br /&gt;
* Design and implementation of the game and a new suitable robot&lt;br /&gt;
* Evaluation of the game with users (in collaboration with [http://www.elet.polimi.it/people/garzotto Franca Garzotto])&lt;br /&gt;
&lt;br /&gt;
These projects allow to experiment with real mobile robots and real interaction devices.&lt;br /&gt;
&lt;br /&gt;
Parts of these projects can be considered as course projects. These projects can also be extended to cover course projects.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=7.5-20&lt;br /&gt;
|image=Robowii_robot.jpg}}&lt;br /&gt;
&amp;lt;!--==== Soft Computing ====--&amp;gt;&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Master_Level_Theses&amp;diff=5294</id>
		<title>Master Level Theses</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Master_Level_Theses&amp;diff=5294"/>
				<updated>2009-02-26T09:38:54Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: /* BioSignal Analysis */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Here you can find proposals for master thesis (20 CFU for each student)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Agents, Multiagent Systems, Agencies ====--&amp;gt;&lt;br /&gt;
==== BioSignal Analysis ====&lt;br /&gt;
&lt;br /&gt;
===== Analysis of the Olfactory Signal =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Computational Intelligence techniques to analyse the olfactory signal acquired by an electronic nose for cancer diagnosis&lt;br /&gt;
|tutor=[[User:AndreaBonarini|Andrea Bonarini]] ([mailto:bonarini@elet.polimi.it]), [[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucci@elet.polimi.it email]), [[User:RossellaBlatt|Rossella Blatt email]] ([mailto:blatt@elet.polimi.it email])&lt;br /&gt;
|description= The electronic nose is an instrument able to detect and recognize odors, that is the volatile substances in the atmosphere or emitted by the analyzed substance. This device can react to a gas substance by providing signals that can be analyzed to classify the input. It is composed of a sensor array (MOS sensors, in our case) and a pattern classification system based on machine learning techniques. Each sensor reacts in a different way to the analyzed substance, providing multidimensional data that can be considered as a unique olfactory blueprint of the analyzed substance. We have already tested the use of the electronic nose as diagnostic tool for lung cancer; boosted from the very satisfactory results that we have achieved by these analysis, we want to investigate the possibility of diagnosing other types of cancer and to improve the current computation intelligence techniques.&lt;br /&gt;
The project is done in collaboration with the Istituto dei Tumori, Milano.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: BLATT R., BONARINI A, CALABRÒ E, DELLA TORRE M, MATTEUCCI M, PASTORINO U. (2008). Pattern Classification Techniques for Early Lung Cancer Diagnosis using an Electronic Nose. In: Frontiers in Artificial Intelligence and Applications. European Conference on Artificial Intelligence - Prestigious Applications of Intetelligent Systems. Patras, Greece. 21-15 luglio 2008. (vol. 178, pp. 693-697). ISBN/ISSN: 978-1-58603-891-5. IOS Press. [http://www.booksonline.iospress.nl/Content/View.aspx?piid=10054]&lt;br /&gt;
&lt;br /&gt;
|start=Anytime (a new acquisition phase will start in March)&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===== Sleep Staging =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Development of a computer-assisted CAP (Sleep cyclic alternating pattern) scoring method&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), Martin Mendez ([mailto:martin.mendez@polimi.it email]), Anna Maria Bianchi ([mailto:annamaria.bianchi@polimi.it email]), Mario Terzano (Ospedale di Parma)&lt;br /&gt;
|description=In 1985, Terzano describes for the first time the Cyclic Alternating Pattern [http://en.wikipedia.org/wiki/Cyclical_alternating_pattern] during sleep and, nowadays, CAP is widely accepted by the medical community as basic analysis of sleep. The CAP evaluation is of fundamental importance since it represents the mechanism developed by the brain evolution to monitor the inner and outer world and to assure the survival during sleep. However, visual detection of CAP in polisomnography (i.e., the standard procedure) is a slow and time-consuming process. This limiting factor generates the necessity of new computer-assisted scoring methods for fast CAP evaluation. This thesis deals with the development of a Decision Support System for CAP scoring based on features extraction at multi-system level (by statistical and signal analysis) and Pattern Recognition or Machine Learning approaches. This may allow the automatic detection of CAP sleep and could be integrated, through reinforcement learning techniques, with the corrections given by physicians.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, C/C++&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: Mario  Terzano, Liborio Parrino. ''Atlas, rules, and recording techniques for the scoring of cyclic alternating pattern (CAP) in human sleep'', Sleep Medicine 2 (2001) 537–553. [http://www.sciencedirect.com/science?_ob=ArticleURL&amp;amp;_udi=B6W6N-44DY2B4-8&amp;amp;_user=2620285&amp;amp;_coverDate=11%2F30%2F2001&amp;amp;_rdoc=1&amp;amp;_fmt=&amp;amp;_orig=search&amp;amp;_sort=d&amp;amp;view=c&amp;amp;_acct=C000058180&amp;amp;_version=1&amp;amp;_urlVersion=0&amp;amp;_userid=2620285&amp;amp;md5=aa61a060d005f23f6afed5c1fc2f1126]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=CAP_Sleep_Staging.jpg}}&lt;br /&gt;
&lt;br /&gt;
===== Brain-Computer Interface =====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Recognition of the user's focusing on the stimulation matrix&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=A [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300]-based [[Brain-Computer_Interface|BCI]] stimulates the user continuously, and the detection of a P300 designates the choice of the user. When the user is not paying attention to the interface, false positives are likely. The objective of this work is to avoid this problem; the analysis of the electroencephalogram (EEG) over the visual cortex (and possibly an analysis of P300s or of other biosignals) should tell when the user is looking at the interface.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, [http://www.bci2000.org/ BCI2000], C++&lt;br /&gt;
:EEG system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: E. Donchin, K.M. Spencer, R. Wijesinghe. ''The Mental Prosthesis: Assessing the Speed of a P300-Based Brain-Computer Interface'' [http://www.cs.cmu.edu/~tanja/BCI/P300Speed_2000.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=B_p300_speller.jpg}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Creation of new EEG training by introduction of noise&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=A [[Brain-Computer Interface|BCI]] must be trained on the individual user in order to be effective.  This training phase require recording data in long sessions, which is time consuming and boring for the user.  The aim of this project is to develop algorithm to create new training EEG (electroencephalography) data from existing ones, so as to speed up the training phase.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, [http://www.bci2000.org/ BCI2000]&lt;br /&gt;
:Knowledge of C++ may be useful&lt;br /&gt;
:EEG system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: J.R. Wolpaw et al. ''Brain-computer interfaces for communication and control'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;journal=13882457&amp;amp;issue=v113i0006&amp;amp;article=767_bifcac&amp;amp;form=pdf&amp;amp;file=file.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=Bci_arch.png}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Real-time removal of ocular artifact from EEG&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=In a [[Brain-Computer Interface|BCI]] based on electroencephalogram (EEG), one of the most important sources of noise is related to ocular movements.  Algorithms have been devised to cancel the effect of such artifacts.  The project consists in the in the implementation in real time of an existing algorithm (or one newly developed) in order to improve the performance of a BCI.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, [http://www.bci2000.org/ BCI2000], C++&lt;br /&gt;
:EEG-system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: J.R. Wolpaw et al. ''Brain-computer interfaces for communication and control'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;journal=13882457&amp;amp;issue=v113i0006&amp;amp;article=767_bifcac&amp;amp;form=pdf&amp;amp;file=file.pdf]&lt;br /&gt;
: R.J. Croff, R.J. Barry. ''Removal of ocular artifact from the EEG: a review'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;issn=09877053&amp;amp;volume=30&amp;amp;issue=1&amp;amp;firstpage=5&amp;amp;form=html]&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=10-20&lt;br /&gt;
|image=B_bci.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Aperiodic visual stimulation in a VEP-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=[http://en.wikipedia.org/wiki/Evoked_potential#Visual_evoked_potential Visual-evoked potentials] (VEPs) are a possible way to drive the a [[Brain-Computer Interface|BCI]]. This projects aims at maximizing the discrimination between different stimuli.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:Matlab, [http://www.bci2000.org/ BCI2000], C++&lt;br /&gt;
:EEG system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: J.R. Wolpaw et al. ''Brain-computer interfaces for communication and control'' [http://scienceserver.cilea.it/cgi-bin/sciserv.pl?collection=journals&amp;amp;journal=13882457&amp;amp;issue=v113i0006&amp;amp;article=767_bifcac&amp;amp;form=pdf&amp;amp;file=file.pdf]&lt;br /&gt;
&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=Bci_arch.png}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Driving an autonomous wheelchair with a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=This project pulls together different Airlab projects with the aim to drive an autonomous wheelchair ([[LURCH - The autonomous wheelchair|LURCH]]) with a [[Brain-Computer Interface|BCI]], through the development of key software modules.  The work will be validated with live experiments.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, C, [http://www.bci2000.org/ BCI2000], Matlab&lt;br /&gt;
:Linux&lt;br /&gt;
:EEG system&lt;br /&gt;
:Lurch wheelchair&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: R. Blatt et al. ''Brain Control of a Smart Wheelchair'' [http://www.booksonline.iospress.com/Content/View.aspx?piid=9401]&lt;br /&gt;
|start=November 2008&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=LURCH_wheelchair.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Online automatic tuning of the number of repetitions in a P300-based BCI&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:BernardoDalSeno|Bernardo Dal Seno]] ([mailto:dalseno%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description=In a [http://en.wikipedia.org/wiki/P300_(Neuroscience) P300]-based [[Brain-Computer_Interface|BCI]], (visual) stimuli are presented to the user, and the intention of the user is recognized when a P300 potential is recognized in response of the desired stimulus.  In order to improve accuracy, many stimulation rounds are usually performed before making a decision.  The exact number of repetitions depends on the user and the goodness of the classifier, but it is usually fixed a-priori.  The aim of this project is to adapt the number of repetitions to changing conditions, so as to achieve the maximum accuracy with the minimum time.&lt;br /&gt;
The work will be validated with live experiments.&lt;br /&gt;
&lt;br /&gt;
;Tools and instruments&lt;br /&gt;
:C++, [http://www.bci2000.org/ BCI2000], Matlab&lt;br /&gt;
:EEG system&lt;br /&gt;
&lt;br /&gt;
;Bibliography&lt;br /&gt;
: E. Donchin, K.M. Spencer, R. Wijesinghe. ''The Mental Prosthesis: Assessing the Speed of a P300-Based Brain-Computer Interface'' [http://www.cs.cmu.edu/~tanja/BCI/P300Speed_2000.pdf]&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1&lt;br /&gt;
|cfu=5-20&lt;br /&gt;
|image=B_p300_speller.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Computer Vision and Image Analysis ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== E-Science ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Machine Learning ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Reinforcement Learning in Poker&lt;br /&gt;
|tutor=Marcello Restelli (restelli-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=In this years, Artificial Intelligence research has shifted its attention from fully observable environments such as Chess to more challenging partially observable ones such as Poker.&lt;br /&gt;
&lt;br /&gt;
Up to this moment research in this kind of environments, which can be formalized as Partially Observable Stochastic Games, has been more from a game theoretic point of view, thus focusing on the pursue of optimality and equilibrium, with no attention to payoff maximization, which may be more interesting in many real-world contexts.&lt;br /&gt;
&lt;br /&gt;
On the other hand Reinforcement Learning techniques demonstrated to be successful in solving both fully observable problems, single and multi-agent, and single-agent partially observable ones, while lacking application to the partially observable multi-agent framework.&lt;br /&gt;
&lt;br /&gt;
This research aims at studying the solution of Partially Observable Stochastic Games, analyzing the possibility to combine the Opponent Modeling concept with the well proven Reinforcement Learning solution techniques to solve problems in this framework, adopting Poker as testbed.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20-40&lt;br /&gt;
|image=PokerPRLT.png}}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= EyeBot&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it), Alessandro Giusti (giusti-AT-elet-DOT-polimi-DOT-it), and Pierluigi Taddei (taddei-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=[http://torcs.sourceforge.net/ TORCS] is a state-of-the-art open source racing simulator that represents an ideal bechmark for machine learning techniques. We already organized two successfull competitions based on TORCS where competitors have been asked to develop a controller using their preferred machine learning techniques. So far, the controller developed for TORCS used as input only information extracted directly from the state of the game. The goal of this project is to extend the existing controller API (see [http://cig.dei.polimi.it/ here]) to use the visual information (e.g. the screenshots of the game) as input to the controllers. A successfull project will include both the development of the API and some basic imaga preprocessing to extract information from the images.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=TORCS2.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= SmarTrack&lt;br /&gt;
|tutor= Daniele Loiacono (loiacono-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The generation of customized game content for each player is an attractive direction to improve the game experience in the next-generation computer games. In this scenario, Machine Learning could play an important role to provide automatically such customized game content.&lt;br /&gt;
The goal of this project is to apply machine learning techniques for the generation of customized tracks in&lt;br /&gt;
[http://torcs.sourceforge.net/ TORCS], a state-of-the-art open source racing simulator. The project include different activities: the automatic generation of tracks, the section of relevant features to characterize a track and the analysis of an interest measure.  &lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1 to 2 &lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=TORCS3.jpg}}&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Automatic generation of domain ontologies&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:AndreaBonarini|Andrea Bonarini]] ([mailto:bonarini%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description= This thesis to be developed together with [http://www.noustat.it/ Noustat S.r.l.], who are developing research activities directed toward the optimization of knowledge management services, in collaboration with another company operating in this field. This project is aimed at removing the ontology building bottleneck, long and expensive activity that usually requires the direct collaboration of a domain expert. The possibility of automatic building the ontology, starting from a set of textual documents related to a specific domain, is expected to improve the ability to provide the knowledge management service, both by reducing the time-to-application, and by increasing the number of domains that can be covered. For this project, unsupervised learning methods will be applied in sequence, exploiting the topological properties of the ultra-metric spaces that emerge from the taxonomic structure of the concepts present in the texts, and associative methods will extend the concept network to lateral, non-hierarchical relationships.&lt;br /&gt;
&lt;br /&gt;
|start=before November 30th&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=}}&lt;br /&gt;
&lt;br /&gt;
==== Affective Computing ====&lt;br /&gt;
&lt;br /&gt;
==== Ontologies and Semantic Web ====&lt;br /&gt;
&lt;br /&gt;
{{Project template&lt;br /&gt;
|title=Automatic generation of domain ontologies&lt;br /&gt;
|tutor=[[User:MatteoMatteucci|Matteo Matteucci]] ([mailto:matteucc%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email]), [[User:AndreaBonarini|Andrea Bonarini]] ([mailto:bonarini%40%65%6c%65%74%2e%70%6f%6c%69%6d%69%2e%69%74 email])&lt;br /&gt;
|description= This thesis to be developed together with [http://www.noustat.it/ Noustat S.r.l.], who are developing research activities directed toward the optimization of knowledge management services, in collaboration with another company operating in this field. This project is aimed at removing the ontology building bottleneck, long and expensive activity that usually requires the direct collaboration of a domain expert. The possibility of automatic building the ontology, starting from a set of textual documents related to a specific domain, is expected to improve the ability to provide the knowledge management service, both by reducing the time-to-application, and by increasing the number of domains that can be covered. For this project, unsupervised learning methods will be applied in sequence, exploiting the topological properties of the ultra-metric spaces that emerge from the taxonomic structure of the concepts present in the texts, and associative methods will extend the concept network to lateral, non-hierarchical relationships.&lt;br /&gt;
&lt;br /&gt;
|start=before November 30th&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=20&lt;br /&gt;
|image=OntologyFromText.jpg}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!--==== Philosophy of Artificial Intelligence ====--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==== Robotics ====&lt;br /&gt;
{{Project template&lt;br /&gt;
|title= Robot games&lt;br /&gt;
|tutor= Andrea Bonarini (bonarini-AT-elet-DOT-polimi-DOT-it)&lt;br /&gt;
|description=The goal of this activity is to develop an interactive game with robots using commercial devices such as the WII Mote (see the [http://airwiki.elet.polimi.it/mediawiki/index.php/Robogames Robogames page])  &lt;br /&gt;
Projects are available in different areas:&lt;br /&gt;
* Design and implementation of the game on one of the available robots and extension of the robot functionalities&lt;br /&gt;
* Design and implementation of the game and a new suitable robot&lt;br /&gt;
* Evaluation of the game with users (in collaboration with [http://www.elet.polimi.it/people/garzotto Franca Garzotto])&lt;br /&gt;
&lt;br /&gt;
These projects allow to experiment with real mobile robots and real interaction devices.&lt;br /&gt;
&lt;br /&gt;
Parts of these projects can be considered as course projects. These projects can also be extended to cover course projects.&lt;br /&gt;
|start=Anytime&lt;br /&gt;
|number=1-2&lt;br /&gt;
|cfu=7.5-20&lt;br /&gt;
|image=Robowii_robot.jpg}}&lt;br /&gt;
&amp;lt;!--==== Soft Computing ====--&amp;gt;&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=IIT-Lab&amp;diff=5293</id>
		<title>IIT-Lab</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=IIT-Lab&amp;diff=5293"/>
				<updated>2009-02-25T07:33:12Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: /* Booking */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is the IIT-Lab ==&lt;br /&gt;
&lt;br /&gt;
AIRLab-IITLab is dedicated to activities founded by the Italian Institute of Technology. &lt;br /&gt;
The lab hosts activities related to Brain-Computer Interfaces (BCI) and Affective Computing.&lt;br /&gt;
&lt;br /&gt;
=== Location ===&lt;br /&gt;
It is located in the Rimembranze di Lambrate building of the Department of Electronics and Information, Via Rimembranze di Lambrate, 14, Milan. &lt;br /&gt;
&lt;br /&gt;
=== Access Rules ===&lt;br /&gt;
The access to AIRLab-IITLab is reserved to registered users. If you are student and want to register, you have to fill the AIRLab registration form (to be signed by your tutor) and the security form. The key of the lab is provided to registered users by the doorkeeper at the main entrance of the Lambrate building. &lt;br /&gt;
&lt;br /&gt;
=== Booking ===&lt;br /&gt;
&lt;br /&gt;
Please book the instrument you want to use by adding an entry to the table; the booking of an instrument implies the booking of the room.  If you want to use a different instrument at the same time of an existing booking, please contact the other person involved and check that you can share the room; alternatively, you can ask the doorkeeper for an empty room in the building.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Please keep the table lines ordered by time (nearest bookings first); add new entries like this:&lt;br /&gt;
---CUT---&lt;br /&gt;
| Monday 13 March || 11:00-18:00 || [[User:DonaldDuck]] || ProComp&lt;br /&gt;
|- &lt;br /&gt;
| Friday 15 April || 9:30-13:00 || [[User:MickeyMouse]] || EEG&lt;br /&gt;
|- &lt;br /&gt;
---CUT---&lt;br /&gt;
Use abbreviations, if you like.&lt;br /&gt;
Please remove old entries.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
! Day !! Time !! Person !! Instrument&lt;br /&gt;
|-&lt;br /&gt;
| Every wednesday || 9:00-19:30 || [[User:RossellaBlatt]] || EEG&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* [[Brain-Computer Interface]] page on this Wiki&lt;br /&gt;
* [[Affective Computing]] page on this Wiki&lt;br /&gt;
* [http://www.airlab.elet.polimi.it/index.php/airlab/visitor_info/airlab_iitlab AIRLab - IITLab]&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=IIT-Lab&amp;diff=5292</id>
		<title>IIT-Lab</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=IIT-Lab&amp;diff=5292"/>
				<updated>2009-02-25T07:32:59Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: /* Booking */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is the IIT-Lab ==&lt;br /&gt;
&lt;br /&gt;
AIRLab-IITLab is dedicated to activities founded by the Italian Institute of Technology. &lt;br /&gt;
The lab hosts activities related to Brain-Computer Interfaces (BCI) and Affective Computing.&lt;br /&gt;
&lt;br /&gt;
=== Location ===&lt;br /&gt;
It is located in the Rimembranze di Lambrate building of the Department of Electronics and Information, Via Rimembranze di Lambrate, 14, Milan. &lt;br /&gt;
&lt;br /&gt;
=== Access Rules ===&lt;br /&gt;
The access to AIRLab-IITLab is reserved to registered users. If you are student and want to register, you have to fill the AIRLab registration form (to be signed by your tutor) and the security form. The key of the lab is provided to registered users by the doorkeeper at the main entrance of the Lambrate building. &lt;br /&gt;
&lt;br /&gt;
=== Booking ===&lt;br /&gt;
&lt;br /&gt;
Please book the instrument you want to use by adding an entry to the table; the booking of an instrument implies the booking of the room.  If you want to use a different instrument at the same time of an existing booking, please contact the other person involved and check that you can share the room; alternatively, you can ask the doorkeeper for an empty room in the building.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Please keep the table lines ordered by time (nearest bookings first); add new entries like this:&lt;br /&gt;
---CUT---&lt;br /&gt;
| Monday 13 March || 11:00-18:00 || [[User:DonaldDuck]] || ProComp&lt;br /&gt;
|- &lt;br /&gt;
| Friday 15 April || 9:30-13:00 || [[User:MickeyMouse]] || EEG&lt;br /&gt;
|- &lt;br /&gt;
---CUT---&lt;br /&gt;
Use abbreviations, if you like.&lt;br /&gt;
Please remove old entries.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
! Day !! Time !! Person !! Instrument&lt;br /&gt;
|-&lt;br /&gt;
| Every wednesday || 9:00-19:30 || [[User:RossellaBlatt]] || EEG&lt;br /&gt;
|-&lt;br /&gt;
|&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* [[Brain-Computer Interface]] page on this Wiki&lt;br /&gt;
* [[Affective Computing]] page on this Wiki&lt;br /&gt;
* [http://www.airlab.elet.polimi.it/index.php/airlab/visitor_info/airlab_iitlab AIRLab - IITLab]&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=IIT-Lab&amp;diff=5291</id>
		<title>IIT-Lab</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=IIT-Lab&amp;diff=5291"/>
				<updated>2009-02-25T07:32:08Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: /* Booking */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is the IIT-Lab ==&lt;br /&gt;
&lt;br /&gt;
AIRLab-IITLab is dedicated to activities founded by the Italian Institute of Technology. &lt;br /&gt;
The lab hosts activities related to Brain-Computer Interfaces (BCI) and Affective Computing.&lt;br /&gt;
&lt;br /&gt;
=== Location ===&lt;br /&gt;
It is located in the Rimembranze di Lambrate building of the Department of Electronics and Information, Via Rimembranze di Lambrate, 14, Milan. &lt;br /&gt;
&lt;br /&gt;
=== Access Rules ===&lt;br /&gt;
The access to AIRLab-IITLab is reserved to registered users. If you are student and want to register, you have to fill the AIRLab registration form (to be signed by your tutor) and the security form. The key of the lab is provided to registered users by the doorkeeper at the main entrance of the Lambrate building. &lt;br /&gt;
&lt;br /&gt;
=== Booking ===&lt;br /&gt;
&lt;br /&gt;
Please book the instrument you want to use by adding an entry to the table; the booking of an instrument implies the booking of the room.  If you want to use a different instrument at the same time of an existing booking, please contact the other person involved and check that you can share the room; alternatively, you can ask the doorkeeper for an empty room in the building.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Please keep the table lines ordered by time (nearest bookings first); add new entries like this:&lt;br /&gt;
---CUT---&lt;br /&gt;
| Monday 13 March || 11:00-18:00 || [[User:DonaldDuck]] || ProComp&lt;br /&gt;
|- &lt;br /&gt;
| Friday 15 April || 9:30-13:00 || [[User:MickeyMouse]] || EEG&lt;br /&gt;
|- &lt;br /&gt;
---CUT---&lt;br /&gt;
Use abbreviations, if you like.&lt;br /&gt;
Please remove old entries.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
! Day !! Time !! Person !! Instrument&lt;br /&gt;
|&lt;br /&gt;
//Every wednesday || 9:00-19:30 || [[User:RossellaBlatt]] || EEG&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* [[Brain-Computer Interface]] page on this Wiki&lt;br /&gt;
* [[Affective Computing]] page on this Wiki&lt;br /&gt;
* [http://www.airlab.elet.polimi.it/index.php/airlab/visitor_info/airlab_iitlab AIRLab - IITLab]&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=IIT-Lab&amp;diff=5290</id>
		<title>IIT-Lab</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=IIT-Lab&amp;diff=5290"/>
				<updated>2009-02-25T07:31:55Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: /* Booking */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is the IIT-Lab ==&lt;br /&gt;
&lt;br /&gt;
AIRLab-IITLab is dedicated to activities founded by the Italian Institute of Technology. &lt;br /&gt;
The lab hosts activities related to Brain-Computer Interfaces (BCI) and Affective Computing.&lt;br /&gt;
&lt;br /&gt;
=== Location ===&lt;br /&gt;
It is located in the Rimembranze di Lambrate building of the Department of Electronics and Information, Via Rimembranze di Lambrate, 14, Milan. &lt;br /&gt;
&lt;br /&gt;
=== Access Rules ===&lt;br /&gt;
The access to AIRLab-IITLab is reserved to registered users. If you are student and want to register, you have to fill the AIRLab registration form (to be signed by your tutor) and the security form. The key of the lab is provided to registered users by the doorkeeper at the main entrance of the Lambrate building. &lt;br /&gt;
&lt;br /&gt;
=== Booking ===&lt;br /&gt;
&lt;br /&gt;
Please book the instrument you want to use by adding an entry to the table; the booking of an instrument implies the booking of the room.  If you want to use a different instrument at the same time of an existing booking, please contact the other person involved and check that you can share the room; alternatively, you can ask the doorkeeper for an empty room in the building.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Please keep the table lines ordered by time (nearest bookings first); add new entries like this:&lt;br /&gt;
---CUT---&lt;br /&gt;
| Monday 13 March || 11:00-18:00 || [[User:DonaldDuck]] || ProComp&lt;br /&gt;
|- &lt;br /&gt;
| Friday 15 April || 9:30-13:00 || [[User:MickeyMouse]] || EEG&lt;br /&gt;
|- &lt;br /&gt;
---CUT---&lt;br /&gt;
Use abbreviations, if you like.&lt;br /&gt;
Please remove old entries.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
! Day !! Time !! Person !! Instrument&lt;br /&gt;
|&lt;br /&gt;
-Every wednesday || 9:00-19:30 || [[User:RossellaBlatt]] || EEG&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* [[Brain-Computer Interface]] page on this Wiki&lt;br /&gt;
* [[Affective Computing]] page on this Wiki&lt;br /&gt;
* [http://www.airlab.elet.polimi.it/index.php/airlab/visitor_info/airlab_iitlab AIRLab - IITLab]&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=IIT-Lab&amp;diff=5289</id>
		<title>IIT-Lab</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=IIT-Lab&amp;diff=5289"/>
				<updated>2009-02-25T07:31:48Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: /* Booking */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is the IIT-Lab ==&lt;br /&gt;
&lt;br /&gt;
AIRLab-IITLab is dedicated to activities founded by the Italian Institute of Technology. &lt;br /&gt;
The lab hosts activities related to Brain-Computer Interfaces (BCI) and Affective Computing.&lt;br /&gt;
&lt;br /&gt;
=== Location ===&lt;br /&gt;
It is located in the Rimembranze di Lambrate building of the Department of Electronics and Information, Via Rimembranze di Lambrate, 14, Milan. &lt;br /&gt;
&lt;br /&gt;
=== Access Rules ===&lt;br /&gt;
The access to AIRLab-IITLab is reserved to registered users. If you are student and want to register, you have to fill the AIRLab registration form (to be signed by your tutor) and the security form. The key of the lab is provided to registered users by the doorkeeper at the main entrance of the Lambrate building. &lt;br /&gt;
&lt;br /&gt;
=== Booking ===&lt;br /&gt;
&lt;br /&gt;
Please book the instrument you want to use by adding an entry to the table; the booking of an instrument implies the booking of the room.  If you want to use a different instrument at the same time of an existing booking, please contact the other person involved and check that you can share the room; alternatively, you can ask the doorkeeper for an empty room in the building.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Please keep the table lines ordered by time (nearest bookings first); add new entries like this:&lt;br /&gt;
---CUT---&lt;br /&gt;
| Monday 13 March || 11:00-18:00 || [[User:DonaldDuck]] || ProComp&lt;br /&gt;
|- &lt;br /&gt;
| Friday 15 April || 9:30-13:00 || [[User:MickeyMouse]] || EEG&lt;br /&gt;
|- &lt;br /&gt;
---CUT---&lt;br /&gt;
Use abbreviations, if you like.&lt;br /&gt;
Please remove old entries.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
! Day !! Time !! Person !! Instrument&lt;br /&gt;
|&lt;br /&gt;
- Every wednesday || 9:00-19:30 || [[User:RossellaBlatt]] || EEG&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* [[Brain-Computer Interface]] page on this Wiki&lt;br /&gt;
* [[Affective Computing]] page on this Wiki&lt;br /&gt;
* [http://www.airlab.elet.polimi.it/index.php/airlab/visitor_info/airlab_iitlab AIRLab - IITLab]&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=IIT-Lab&amp;diff=5288</id>
		<title>IIT-Lab</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=IIT-Lab&amp;diff=5288"/>
				<updated>2009-02-25T07:31:38Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: /* Booking */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is the IIT-Lab ==&lt;br /&gt;
&lt;br /&gt;
AIRLab-IITLab is dedicated to activities founded by the Italian Institute of Technology. &lt;br /&gt;
The lab hosts activities related to Brain-Computer Interfaces (BCI) and Affective Computing.&lt;br /&gt;
&lt;br /&gt;
=== Location ===&lt;br /&gt;
It is located in the Rimembranze di Lambrate building of the Department of Electronics and Information, Via Rimembranze di Lambrate, 14, Milan. &lt;br /&gt;
&lt;br /&gt;
=== Access Rules ===&lt;br /&gt;
The access to AIRLab-IITLab is reserved to registered users. If you are student and want to register, you have to fill the AIRLab registration form (to be signed by your tutor) and the security form. The key of the lab is provided to registered users by the doorkeeper at the main entrance of the Lambrate building. &lt;br /&gt;
&lt;br /&gt;
=== Booking ===&lt;br /&gt;
&lt;br /&gt;
Please book the instrument you want to use by adding an entry to the table; the booking of an instrument implies the booking of the room.  If you want to use a different instrument at the same time of an existing booking, please contact the other person involved and check that you can share the room; alternatively, you can ask the doorkeeper for an empty room in the building.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Please keep the table lines ordered by time (nearest bookings first); add new entries like this:&lt;br /&gt;
---CUT---&lt;br /&gt;
| Monday 13 March || 11:00-18:00 || [[User:DonaldDuck]] || ProComp&lt;br /&gt;
|- &lt;br /&gt;
| Friday 15 April || 9:30-13:00 || [[User:MickeyMouse]] || EEG&lt;br /&gt;
|- &lt;br /&gt;
---CUT---&lt;br /&gt;
Use abbreviations, if you like.&lt;br /&gt;
Please remove old entries.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
! Day !! Time !! Person !! Instrument&lt;br /&gt;
|&lt;br /&gt;
Every wednesday || 9:00-19:30 || [[User:RossellaBlatt]] || EEG&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* [[Brain-Computer Interface]] page on this Wiki&lt;br /&gt;
* [[Affective Computing]] page on this Wiki&lt;br /&gt;
* [http://www.airlab.elet.polimi.it/index.php/airlab/visitor_info/airlab_iitlab AIRLab - IITLab]&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=IIT-Lab&amp;diff=5287</id>
		<title>IIT-Lab</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=IIT-Lab&amp;diff=5287"/>
				<updated>2009-02-25T07:31:23Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: /* Booking */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is the IIT-Lab ==&lt;br /&gt;
&lt;br /&gt;
AIRLab-IITLab is dedicated to activities founded by the Italian Institute of Technology. &lt;br /&gt;
The lab hosts activities related to Brain-Computer Interfaces (BCI) and Affective Computing.&lt;br /&gt;
&lt;br /&gt;
=== Location ===&lt;br /&gt;
It is located in the Rimembranze di Lambrate building of the Department of Electronics and Information, Via Rimembranze di Lambrate, 14, Milan. &lt;br /&gt;
&lt;br /&gt;
=== Access Rules ===&lt;br /&gt;
The access to AIRLab-IITLab is reserved to registered users. If you are student and want to register, you have to fill the AIRLab registration form (to be signed by your tutor) and the security form. The key of the lab is provided to registered users by the doorkeeper at the main entrance of the Lambrate building. &lt;br /&gt;
&lt;br /&gt;
=== Booking ===&lt;br /&gt;
&lt;br /&gt;
Please book the instrument you want to use by adding an entry to the table; the booking of an instrument implies the booking of the room.  If you want to use a different instrument at the same time of an existing booking, please contact the other person involved and check that you can share the room; alternatively, you can ask the doorkeeper for an empty room in the building.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Please keep the table lines ordered by time (nearest bookings first); add new entries like this:&lt;br /&gt;
---CUT---&lt;br /&gt;
| Monday 13 March || 11:00-18:00 || [[User:DonaldDuck]] || ProComp&lt;br /&gt;
|- &lt;br /&gt;
| Friday 15 April || 9:30-13:00 || [[User:MickeyMouse]] || EEG&lt;br /&gt;
|- &lt;br /&gt;
---CUT---&lt;br /&gt;
Use abbreviations, if you like.&lt;br /&gt;
Please remove old entries.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
! Day !! Time !! Person !! Instrument&lt;br /&gt;
|- Every wednesday || 9:00-19:30 || [[User:RossellaBlatt]] || EEG&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* [[Brain-Computer Interface]] page on this Wiki&lt;br /&gt;
* [[Affective Computing]] page on this Wiki&lt;br /&gt;
* [http://www.airlab.elet.polimi.it/index.php/airlab/visitor_info/airlab_iitlab AIRLab - IITLab]&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=IIT-Lab&amp;diff=5286</id>
		<title>IIT-Lab</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=IIT-Lab&amp;diff=5286"/>
				<updated>2009-02-25T07:31:09Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: /* Booking */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is the IIT-Lab ==&lt;br /&gt;
&lt;br /&gt;
AIRLab-IITLab is dedicated to activities founded by the Italian Institute of Technology. &lt;br /&gt;
The lab hosts activities related to Brain-Computer Interfaces (BCI) and Affective Computing.&lt;br /&gt;
&lt;br /&gt;
=== Location ===&lt;br /&gt;
It is located in the Rimembranze di Lambrate building of the Department of Electronics and Information, Via Rimembranze di Lambrate, 14, Milan. &lt;br /&gt;
&lt;br /&gt;
=== Access Rules ===&lt;br /&gt;
The access to AIRLab-IITLab is reserved to registered users. If you are student and want to register, you have to fill the AIRLab registration form (to be signed by your tutor) and the security form. The key of the lab is provided to registered users by the doorkeeper at the main entrance of the Lambrate building. &lt;br /&gt;
&lt;br /&gt;
=== Booking ===&lt;br /&gt;
&lt;br /&gt;
Please book the instrument you want to use by adding an entry to the table; the booking of an instrument implies the booking of the room.  If you want to use a different instrument at the same time of an existing booking, please contact the other person involved and check that you can share the room; alternatively, you can ask the doorkeeper for an empty room in the building.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Please keep the table lines ordered by time (nearest bookings first); add new entries like this:&lt;br /&gt;
---CUT---&lt;br /&gt;
| Monday 13 March || 11:00-18:00 || [[User:DonaldDuck]] || ProComp&lt;br /&gt;
|- &lt;br /&gt;
| Friday 15 April || 9:30-13:00 || [[User:MickeyMouse]] || EEG&lt;br /&gt;
|- &lt;br /&gt;
---CUT---&lt;br /&gt;
Use abbreviations, if you like.&lt;br /&gt;
Please remove old entries.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
! Day !! Time !! Person !! Instrument&lt;br /&gt;
|Every wednesday || 9:00-19:30 || [[User:RossellaBlatt]] || EEG&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* [[Brain-Computer Interface]] page on this Wiki&lt;br /&gt;
* [[Affective Computing]] page on this Wiki&lt;br /&gt;
* [http://www.airlab.elet.polimi.it/index.php/airlab/visitor_info/airlab_iitlab AIRLab - IITLab]&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=IIT-Lab&amp;diff=5285</id>
		<title>IIT-Lab</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=IIT-Lab&amp;diff=5285"/>
				<updated>2009-02-25T07:30:14Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: /* Booking */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== What is the IIT-Lab ==&lt;br /&gt;
&lt;br /&gt;
AIRLab-IITLab is dedicated to activities founded by the Italian Institute of Technology. &lt;br /&gt;
The lab hosts activities related to Brain-Computer Interfaces (BCI) and Affective Computing.&lt;br /&gt;
&lt;br /&gt;
=== Location ===&lt;br /&gt;
It is located in the Rimembranze di Lambrate building of the Department of Electronics and Information, Via Rimembranze di Lambrate, 14, Milan. &lt;br /&gt;
&lt;br /&gt;
=== Access Rules ===&lt;br /&gt;
The access to AIRLab-IITLab is reserved to registered users. If you are student and want to register, you have to fill the AIRLab registration form (to be signed by your tutor) and the security form. The key of the lab is provided to registered users by the doorkeeper at the main entrance of the Lambrate building. &lt;br /&gt;
&lt;br /&gt;
=== Booking ===&lt;br /&gt;
&lt;br /&gt;
Please book the instrument you want to use by adding an entry to the table; the booking of an instrument implies the booking of the room.  If you want to use a different instrument at the same time of an existing booking, please contact the other person involved and check that you can share the room; alternatively, you can ask the doorkeeper for an empty room in the building.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Please keep the table lines ordered by time (nearest bookings first); add new entries like this:&lt;br /&gt;
---CUT---&lt;br /&gt;
| Monday 13 March || 11:00-18:00 || [[User:DonaldDuck]] || ProComp&lt;br /&gt;
|- &lt;br /&gt;
| Friday 15 April || 9:30-13:00 || [[User:MickeyMouse]] || EEG&lt;br /&gt;
|- &lt;br /&gt;
---CUT---&lt;br /&gt;
Use abbreviations, if you like.&lt;br /&gt;
Please remove old entries.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
! Day !! Time !! Person !! Instrument&lt;br /&gt;
|-Every wednesday || 9:00-19:30 || [[User:RossellaBlatt]] || EEG&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Links ==&lt;br /&gt;
&lt;br /&gt;
* [[Brain-Computer Interface]] page on this Wiki&lt;br /&gt;
* [[Affective Computing]] page on this Wiki&lt;br /&gt;
* [http://www.airlab.elet.polimi.it/index.php/airlab/visitor_info/airlab_iitlab AIRLab - IITLab]&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Lung_Cancer_Detection_by_an_Electronic_Nose&amp;diff=5161</id>
		<title>Lung Cancer Detection by an Electronic Nose</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Lung_Cancer_Detection_by_an_Electronic_Nose&amp;diff=5161"/>
				<updated>2009-02-10T16:23:23Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: /* Useful internet links */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== '''Part 1: project profile''' ==&lt;br /&gt;
&lt;br /&gt;
=== Project name ===&lt;br /&gt;
&lt;br /&gt;
Lung Cancer Detection by an Electronic Nose&lt;br /&gt;
&lt;br /&gt;
=== Project description ===&lt;br /&gt;
&lt;br /&gt;
The electronic nose is an instrument able to detect and recognize odors, that is the volatile substances in the atmosphere or emitted by the analyzed substance. This device can react to a gas substance by providing signals that can be analyzed to classify the input. It is composed of a sensor array (MOS sensors, in our case) and a pattern classification process based on machine learning techniques. Each sensor reacts in a different way to the analyzed substance, providing multidimensional data that can be considered as a unique olfactory blueprint of the analyzed substance. In our work, we used an array composed of six Metal Oxide Semiconductor (MOS) sensors.&lt;br /&gt;
In this project, we have been using an electronic nose based on an array of six MOS sensors, to recognize the presence of lung cancer in breaths' subjects, diagnosing the disease with a non invasive and low cost method. &lt;br /&gt;
&lt;br /&gt;
During a first pilot study of our research, we have evaluated the possibility and accuracy of lung cancer diagnosis by classifying the&lt;br /&gt;
olfactory signal associated to exhalations of subjects. Results have been very satisfactory and promising: we achieved an average accuracy of 92.6%, sensitivity of 95.3% and specificity of 90.5%. In particular we analyzed the breath of 101 individuals, of which 58 control subjects, and 43 suffer from&lt;br /&gt;
different types of lung cancer (primary and not) at different stages.&lt;br /&gt;
In order to find the components able to discriminate between the two classes ‘healthy’ and ‘sick’ at best, and to reduce the dimensionality&lt;br /&gt;
of the problem, we have extracted the most significant features and projected them into a lower dimensional space using Non Parametric&lt;br /&gt;
Linear Discriminant Analysis. Finally, we have used these features as input to several supervised pattern classification algorithms, based&lt;br /&gt;
on different k-nearest neighbors (k-NN) approaches (classic, modified and Fuzzy k-NN), linear and quadratic discriminant classifiers&lt;br /&gt;
and on a feed-forward artificial neural network (ANN). The observed results have all been validated using cross-validation. &lt;br /&gt;
&lt;br /&gt;
The achieved satisfactory results pushed us to begin a new study, in roder to confirm the obtained promising results and to evaluate the ripetibility of our results. We analyzed 104 breath samples of 52 subjects, 22 healthy subjects and 30 subjects with primary lung cancer at different stages. The acquisition has been done inviting subjects to breath into a nalophan bag, later input into the electronic nose. In order to find the best statistical model able to discriminate between the two classes ‘healthy’ and ‘lung cancer’ subjects, and to reduce the dimensionality of the problem, we implemented a genetic algorithm (GA) that found the best combination of feature selection, feature projection and classifier. In particular, according to the feature selection issue, we considered methods based on exponential, sequential and randomized algorithms. Principal Component Analysis (PCA), Fisher’s Linear Discriminant Analysis (LDA) and Non Parametric Linear Discriminant Analysis (NPLDA) have been considered to project features into a lower dimensional space. Classification has been performed implementing several supervised pattern classification algorithms, based on different k-nearest neighbors (k-NN) approaches (classic, modified and fuzzy k-NN), on linear and quadratic discriminant classifiers and on a feed-forward artificial neural network (ANN). The best solution provided from the genetic algorithm, has been the projection of the found subset of features into a single component using the Fisher’s Linear Discriminant Analysis (LDA) and a classification based on the k-Nearest Neighbours (k-NN) method. Performing a Student’s t-test between all pair of considered models, no significative differences emerged, suggesting that all computational intelligence methods that we have applied provided satisfying results. The observed results, all validated using cross-validation, have been very satisfactory achieving an average accuracy of 96.2%, an average sensitivity of 93.3% and an average specificity of 100%, as well as very small confidence intervals. These results confirmed a previous pilot study where we achieved an average accuracy of 92.6%, sensitivity of 95.3% and specificity of 90.5% (on 58 control subjects and 43 lung cancer subjects). We also investigated the possibility of performing early diagnosis, building a model able to predict a sample belonging to a subject with primary lung cancer at stage I, compared to healthy subjects. Also in this analysis results have been excellent, achieving an average accuracy of 92.85%, an average sensitivity of 75.5% and an average specificity of 97.72%. &lt;br /&gt;
&lt;br /&gt;
The research demonstrate that an instrument as the electronic nose, combined with the appropriate artificial intelligence techniques, is a promising alternative to current lung cancer diagnostic techniques: the obtained predictive errors are lower than those achieved by present diagnostic methods, and the cost of the analysis, both in money, time and resources, is lower. Moreover, the instrument is completely non invasive. The introduction of this technology will lead to very important social and business effects: its low price and small dimensions allow a large scale distribution, giving the opportunity to perform non invasive, cheap, quick, and massive early diagnosis and screening.&lt;br /&gt;
&lt;br /&gt;
=== Dates ===&lt;br /&gt;
Start date: 2007/01/01&lt;br /&gt;
&lt;br /&gt;
End date: --&lt;br /&gt;
&lt;br /&gt;
=== Website(s) ===&lt;br /&gt;
&lt;br /&gt;
At the moment no website avaible&lt;br /&gt;
&lt;br /&gt;
=== People involved ===&lt;br /&gt;
&lt;br /&gt;
===== Project head(s) =====&lt;br /&gt;
&lt;br /&gt;
A. Bonarini - [[User:AndreaBonarini]]&lt;br /&gt;
&lt;br /&gt;
M. Matteucci - [[User:MatteoMatteucci]]&lt;br /&gt;
&lt;br /&gt;
===== PhD Students =====&lt;br /&gt;
&lt;br /&gt;
R. Blatt - [[User:RossellaBlatt]]&lt;br /&gt;
&lt;br /&gt;
===== Students currently working on the project =====&lt;br /&gt;
&lt;br /&gt;
Claudio Trameri - [[User:ClaudioTrameri]]&lt;br /&gt;
&lt;br /&gt;
Mauro Verdirosa - [[User:MauroVerdirosa]]&lt;br /&gt;
&lt;br /&gt;
===== Students who worked on the project in the past =====&lt;br /&gt;
&lt;br /&gt;
===== External personnel: =====&lt;br /&gt;
&lt;br /&gt;
Dott. Ugo Pastorino (Istituto dei Tumori - Milano)&lt;br /&gt;
&lt;br /&gt;
Dott. Elisa Calabrò (Istituto dei Tumori - Milano)&lt;br /&gt;
&lt;br /&gt;
=== Laboratory work and risk analysis ===&lt;br /&gt;
&lt;br /&gt;
Laboratory work for this project will be mainly performed at the Istituto Nazionale dei Tumori di Milano, where the acquisistion of subjects' breath, both sick and healthy will be done. &lt;br /&gt;
For this kind of work, there are not potential risks.&lt;br /&gt;
&lt;br /&gt;
== '''Part 2: project description''' ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Link to project documents and files ===&lt;br /&gt;
&lt;br /&gt;
Results obtained from this work have been presented at different conferences:&lt;br /&gt;
&lt;br /&gt;
* '''Prestigious Applications of Intelligent Systems (PAIS 2008), Patras, Greece''' &lt;br /&gt;
:The 5th Prestigious Applications of Intelligent Systems (PAIS 2008) is a sub-conference of the 18th European Conference on Artificial Intteligence (ECAI 2008) that will be held at the University of Patras, Greece, from July 21st to 25th. &lt;br /&gt;
:[[Image:PAIS.pdf|Paper-PAIS2008]] &lt;br /&gt;
&lt;br /&gt;
* '''International Joint Conference on Neural Networks (IJCNN 2007), Orlando, FL, USA'''&lt;br /&gt;
:'''Lung Cancer Identification by an Electronic Nose based on array of MOS Sensors''', Blatt Rossella, Bonarini Andrea, Calabrò Elisa, Della Torre Matteo, Matteucci Matteo, Pastorino Ugo. Proceedings of the 2007 International Joint Conference on Neural Networks (IJCNN 2007), Orlando, FL, USA: [[Image:IJCNNfinal.pdf|Paper-IJCNN2007]] &lt;br /&gt;
&lt;br /&gt;
:Short presentation of the ''Lung Cancer Identification by an Electronic Nose based on an array of MOS Sensors'' paper: [[Image:LungCancerIdentificationIJCNN2007.pdf|Presentation-IJCNN2007]]&lt;br /&gt;
&lt;br /&gt;
* '''International Workshop on Fuzzy Logic and Applications (WILF 2007), Ruta di Camogli, Genova, Italy'''&lt;br /&gt;
&lt;br /&gt;
: '''Fuzzy k-NN Lung Cancer Identification by an Electronic Nose''', Blatt Rossella, Bonarini Andrea, Calabrò Elisa, Della Torre Matteo, Matteucci Matteo, Pastorino Ugo. Proceedings of the 7th International Workshop on Fuzzy Logic and Applications, WILF 2007, Lecture Notes in Computer Science (LNAI), LNAI 4578, pages 261-268, Springer. Camogli (GE), Italy, July 2007.&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Lung_Cancer_Detection_by_an_Electronic_Nose&amp;diff=5160</id>
		<title>Lung Cancer Detection by an Electronic Nose</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Lung_Cancer_Detection_by_an_Electronic_Nose&amp;diff=5160"/>
				<updated>2009-02-10T16:23:09Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: /* Description and results of experiments */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== '''Part 1: project profile''' ==&lt;br /&gt;
&lt;br /&gt;
=== Project name ===&lt;br /&gt;
&lt;br /&gt;
Lung Cancer Detection by an Electronic Nose&lt;br /&gt;
&lt;br /&gt;
=== Project description ===&lt;br /&gt;
&lt;br /&gt;
The electronic nose is an instrument able to detect and recognize odors, that is the volatile substances in the atmosphere or emitted by the analyzed substance. This device can react to a gas substance by providing signals that can be analyzed to classify the input. It is composed of a sensor array (MOS sensors, in our case) and a pattern classification process based on machine learning techniques. Each sensor reacts in a different way to the analyzed substance, providing multidimensional data that can be considered as a unique olfactory blueprint of the analyzed substance. In our work, we used an array composed of six Metal Oxide Semiconductor (MOS) sensors.&lt;br /&gt;
In this project, we have been using an electronic nose based on an array of six MOS sensors, to recognize the presence of lung cancer in breaths' subjects, diagnosing the disease with a non invasive and low cost method. &lt;br /&gt;
&lt;br /&gt;
During a first pilot study of our research, we have evaluated the possibility and accuracy of lung cancer diagnosis by classifying the&lt;br /&gt;
olfactory signal associated to exhalations of subjects. Results have been very satisfactory and promising: we achieved an average accuracy of 92.6%, sensitivity of 95.3% and specificity of 90.5%. In particular we analyzed the breath of 101 individuals, of which 58 control subjects, and 43 suffer from&lt;br /&gt;
different types of lung cancer (primary and not) at different stages.&lt;br /&gt;
In order to find the components able to discriminate between the two classes ‘healthy’ and ‘sick’ at best, and to reduce the dimensionality&lt;br /&gt;
of the problem, we have extracted the most significant features and projected them into a lower dimensional space using Non Parametric&lt;br /&gt;
Linear Discriminant Analysis. Finally, we have used these features as input to several supervised pattern classification algorithms, based&lt;br /&gt;
on different k-nearest neighbors (k-NN) approaches (classic, modified and Fuzzy k-NN), linear and quadratic discriminant classifiers&lt;br /&gt;
and on a feed-forward artificial neural network (ANN). The observed results have all been validated using cross-validation. &lt;br /&gt;
&lt;br /&gt;
The achieved satisfactory results pushed us to begin a new study, in roder to confirm the obtained promising results and to evaluate the ripetibility of our results. We analyzed 104 breath samples of 52 subjects, 22 healthy subjects and 30 subjects with primary lung cancer at different stages. The acquisition has been done inviting subjects to breath into a nalophan bag, later input into the electronic nose. In order to find the best statistical model able to discriminate between the two classes ‘healthy’ and ‘lung cancer’ subjects, and to reduce the dimensionality of the problem, we implemented a genetic algorithm (GA) that found the best combination of feature selection, feature projection and classifier. In particular, according to the feature selection issue, we considered methods based on exponential, sequential and randomized algorithms. Principal Component Analysis (PCA), Fisher’s Linear Discriminant Analysis (LDA) and Non Parametric Linear Discriminant Analysis (NPLDA) have been considered to project features into a lower dimensional space. Classification has been performed implementing several supervised pattern classification algorithms, based on different k-nearest neighbors (k-NN) approaches (classic, modified and fuzzy k-NN), on linear and quadratic discriminant classifiers and on a feed-forward artificial neural network (ANN). The best solution provided from the genetic algorithm, has been the projection of the found subset of features into a single component using the Fisher’s Linear Discriminant Analysis (LDA) and a classification based on the k-Nearest Neighbours (k-NN) method. Performing a Student’s t-test between all pair of considered models, no significative differences emerged, suggesting that all computational intelligence methods that we have applied provided satisfying results. The observed results, all validated using cross-validation, have been very satisfactory achieving an average accuracy of 96.2%, an average sensitivity of 93.3% and an average specificity of 100%, as well as very small confidence intervals. These results confirmed a previous pilot study where we achieved an average accuracy of 92.6%, sensitivity of 95.3% and specificity of 90.5% (on 58 control subjects and 43 lung cancer subjects). We also investigated the possibility of performing early diagnosis, building a model able to predict a sample belonging to a subject with primary lung cancer at stage I, compared to healthy subjects. Also in this analysis results have been excellent, achieving an average accuracy of 92.85%, an average sensitivity of 75.5% and an average specificity of 97.72%. &lt;br /&gt;
&lt;br /&gt;
The research demonstrate that an instrument as the electronic nose, combined with the appropriate artificial intelligence techniques, is a promising alternative to current lung cancer diagnostic techniques: the obtained predictive errors are lower than those achieved by present diagnostic methods, and the cost of the analysis, both in money, time and resources, is lower. Moreover, the instrument is completely non invasive. The introduction of this technology will lead to very important social and business effects: its low price and small dimensions allow a large scale distribution, giving the opportunity to perform non invasive, cheap, quick, and massive early diagnosis and screening.&lt;br /&gt;
&lt;br /&gt;
=== Dates ===&lt;br /&gt;
Start date: 2007/01/01&lt;br /&gt;
&lt;br /&gt;
End date: --&lt;br /&gt;
&lt;br /&gt;
=== Website(s) ===&lt;br /&gt;
&lt;br /&gt;
At the moment no website avaible&lt;br /&gt;
&lt;br /&gt;
=== People involved ===&lt;br /&gt;
&lt;br /&gt;
===== Project head(s) =====&lt;br /&gt;
&lt;br /&gt;
A. Bonarini - [[User:AndreaBonarini]]&lt;br /&gt;
&lt;br /&gt;
M. Matteucci - [[User:MatteoMatteucci]]&lt;br /&gt;
&lt;br /&gt;
===== PhD Students =====&lt;br /&gt;
&lt;br /&gt;
R. Blatt - [[User:RossellaBlatt]]&lt;br /&gt;
&lt;br /&gt;
===== Students currently working on the project =====&lt;br /&gt;
&lt;br /&gt;
Claudio Trameri - [[User:ClaudioTrameri]]&lt;br /&gt;
&lt;br /&gt;
Mauro Verdirosa - [[User:MauroVerdirosa]]&lt;br /&gt;
&lt;br /&gt;
===== Students who worked on the project in the past =====&lt;br /&gt;
&lt;br /&gt;
===== External personnel: =====&lt;br /&gt;
&lt;br /&gt;
Dott. Ugo Pastorino (Istituto dei Tumori - Milano)&lt;br /&gt;
&lt;br /&gt;
Dott. Elisa Calabrò (Istituto dei Tumori - Milano)&lt;br /&gt;
&lt;br /&gt;
=== Laboratory work and risk analysis ===&lt;br /&gt;
&lt;br /&gt;
Laboratory work for this project will be mainly performed at the Istituto Nazionale dei Tumori di Milano, where the acquisistion of subjects' breath, both sick and healthy will be done. &lt;br /&gt;
For this kind of work, there are not potential risks.&lt;br /&gt;
&lt;br /&gt;
== '''Part 2: project description''' ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Link to project documents and files ===&lt;br /&gt;
&lt;br /&gt;
Results obtained from this work have been presented at different conferences:&lt;br /&gt;
&lt;br /&gt;
* '''Prestigious Applications of Intelligent Systems (PAIS 2008), Patras, Greece''' &lt;br /&gt;
:The 5th Prestigious Applications of Intelligent Systems (PAIS 2008) is a sub-conference of the 18th European Conference on Artificial Intteligence (ECAI 2008) that will be held at the University of Patras, Greece, from July 21st to 25th. &lt;br /&gt;
:[[Image:PAIS.pdf|Paper-PAIS2008]] &lt;br /&gt;
&lt;br /&gt;
* '''International Joint Conference on Neural Networks (IJCNN 2007), Orlando, FL, USA'''&lt;br /&gt;
:'''Lung Cancer Identification by an Electronic Nose based on array of MOS Sensors''', Blatt Rossella, Bonarini Andrea, Calabrò Elisa, Della Torre Matteo, Matteucci Matteo, Pastorino Ugo. Proceedings of the 2007 International Joint Conference on Neural Networks (IJCNN 2007), Orlando, FL, USA: [[Image:IJCNNfinal.pdf|Paper-IJCNN2007]] &lt;br /&gt;
&lt;br /&gt;
:Short presentation of the ''Lung Cancer Identification by an Electronic Nose based on an array of MOS Sensors'' paper: [[Image:LungCancerIdentificationIJCNN2007.pdf|Presentation-IJCNN2007]]&lt;br /&gt;
&lt;br /&gt;
* '''International Workshop on Fuzzy Logic and Applications (WILF 2007), Ruta di Camogli, Genova, Italy'''&lt;br /&gt;
&lt;br /&gt;
: '''Fuzzy k-NN Lung Cancer Identification by an Electronic Nose''', Blatt Rossella, Bonarini Andrea, Calabrò Elisa, Della Torre Matteo, Matteucci Matteo, Pastorino Ugo. Proceedings of the 7th International Workshop on Fuzzy Logic and Applications, WILF 2007, Lecture Notes in Computer Science (LNAI), LNAI 4578, pages 261-268, Springer. Camogli (GE), Italy, July 2007.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Useful internet links ===&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Lung_Cancer_Detection_by_an_Electronic_Nose&amp;diff=5159</id>
		<title>Lung Cancer Detection by an Electronic Nose</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Lung_Cancer_Detection_by_an_Electronic_Nose&amp;diff=5159"/>
				<updated>2009-02-10T16:23:01Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: /* Link to source code of the software written for the project */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== '''Part 1: project profile''' ==&lt;br /&gt;
&lt;br /&gt;
=== Project name ===&lt;br /&gt;
&lt;br /&gt;
Lung Cancer Detection by an Electronic Nose&lt;br /&gt;
&lt;br /&gt;
=== Project description ===&lt;br /&gt;
&lt;br /&gt;
The electronic nose is an instrument able to detect and recognize odors, that is the volatile substances in the atmosphere or emitted by the analyzed substance. This device can react to a gas substance by providing signals that can be analyzed to classify the input. It is composed of a sensor array (MOS sensors, in our case) and a pattern classification process based on machine learning techniques. Each sensor reacts in a different way to the analyzed substance, providing multidimensional data that can be considered as a unique olfactory blueprint of the analyzed substance. In our work, we used an array composed of six Metal Oxide Semiconductor (MOS) sensors.&lt;br /&gt;
In this project, we have been using an electronic nose based on an array of six MOS sensors, to recognize the presence of lung cancer in breaths' subjects, diagnosing the disease with a non invasive and low cost method. &lt;br /&gt;
&lt;br /&gt;
During a first pilot study of our research, we have evaluated the possibility and accuracy of lung cancer diagnosis by classifying the&lt;br /&gt;
olfactory signal associated to exhalations of subjects. Results have been very satisfactory and promising: we achieved an average accuracy of 92.6%, sensitivity of 95.3% and specificity of 90.5%. In particular we analyzed the breath of 101 individuals, of which 58 control subjects, and 43 suffer from&lt;br /&gt;
different types of lung cancer (primary and not) at different stages.&lt;br /&gt;
In order to find the components able to discriminate between the two classes ‘healthy’ and ‘sick’ at best, and to reduce the dimensionality&lt;br /&gt;
of the problem, we have extracted the most significant features and projected them into a lower dimensional space using Non Parametric&lt;br /&gt;
Linear Discriminant Analysis. Finally, we have used these features as input to several supervised pattern classification algorithms, based&lt;br /&gt;
on different k-nearest neighbors (k-NN) approaches (classic, modified and Fuzzy k-NN), linear and quadratic discriminant classifiers&lt;br /&gt;
and on a feed-forward artificial neural network (ANN). The observed results have all been validated using cross-validation. &lt;br /&gt;
&lt;br /&gt;
The achieved satisfactory results pushed us to begin a new study, in roder to confirm the obtained promising results and to evaluate the ripetibility of our results. We analyzed 104 breath samples of 52 subjects, 22 healthy subjects and 30 subjects with primary lung cancer at different stages. The acquisition has been done inviting subjects to breath into a nalophan bag, later input into the electronic nose. In order to find the best statistical model able to discriminate between the two classes ‘healthy’ and ‘lung cancer’ subjects, and to reduce the dimensionality of the problem, we implemented a genetic algorithm (GA) that found the best combination of feature selection, feature projection and classifier. In particular, according to the feature selection issue, we considered methods based on exponential, sequential and randomized algorithms. Principal Component Analysis (PCA), Fisher’s Linear Discriminant Analysis (LDA) and Non Parametric Linear Discriminant Analysis (NPLDA) have been considered to project features into a lower dimensional space. Classification has been performed implementing several supervised pattern classification algorithms, based on different k-nearest neighbors (k-NN) approaches (classic, modified and fuzzy k-NN), on linear and quadratic discriminant classifiers and on a feed-forward artificial neural network (ANN). The best solution provided from the genetic algorithm, has been the projection of the found subset of features into a single component using the Fisher’s Linear Discriminant Analysis (LDA) and a classification based on the k-Nearest Neighbours (k-NN) method. Performing a Student’s t-test between all pair of considered models, no significative differences emerged, suggesting that all computational intelligence methods that we have applied provided satisfying results. The observed results, all validated using cross-validation, have been very satisfactory achieving an average accuracy of 96.2%, an average sensitivity of 93.3% and an average specificity of 100%, as well as very small confidence intervals. These results confirmed a previous pilot study where we achieved an average accuracy of 92.6%, sensitivity of 95.3% and specificity of 90.5% (on 58 control subjects and 43 lung cancer subjects). We also investigated the possibility of performing early diagnosis, building a model able to predict a sample belonging to a subject with primary lung cancer at stage I, compared to healthy subjects. Also in this analysis results have been excellent, achieving an average accuracy of 92.85%, an average sensitivity of 75.5% and an average specificity of 97.72%. &lt;br /&gt;
&lt;br /&gt;
The research demonstrate that an instrument as the electronic nose, combined with the appropriate artificial intelligence techniques, is a promising alternative to current lung cancer diagnostic techniques: the obtained predictive errors are lower than those achieved by present diagnostic methods, and the cost of the analysis, both in money, time and resources, is lower. Moreover, the instrument is completely non invasive. The introduction of this technology will lead to very important social and business effects: its low price and small dimensions allow a large scale distribution, giving the opportunity to perform non invasive, cheap, quick, and massive early diagnosis and screening.&lt;br /&gt;
&lt;br /&gt;
=== Dates ===&lt;br /&gt;
Start date: 2007/01/01&lt;br /&gt;
&lt;br /&gt;
End date: --&lt;br /&gt;
&lt;br /&gt;
=== Website(s) ===&lt;br /&gt;
&lt;br /&gt;
At the moment no website avaible&lt;br /&gt;
&lt;br /&gt;
=== People involved ===&lt;br /&gt;
&lt;br /&gt;
===== Project head(s) =====&lt;br /&gt;
&lt;br /&gt;
A. Bonarini - [[User:AndreaBonarini]]&lt;br /&gt;
&lt;br /&gt;
M. Matteucci - [[User:MatteoMatteucci]]&lt;br /&gt;
&lt;br /&gt;
===== PhD Students =====&lt;br /&gt;
&lt;br /&gt;
R. Blatt - [[User:RossellaBlatt]]&lt;br /&gt;
&lt;br /&gt;
===== Students currently working on the project =====&lt;br /&gt;
&lt;br /&gt;
Claudio Trameri - [[User:ClaudioTrameri]]&lt;br /&gt;
&lt;br /&gt;
Mauro Verdirosa - [[User:MauroVerdirosa]]&lt;br /&gt;
&lt;br /&gt;
===== Students who worked on the project in the past =====&lt;br /&gt;
&lt;br /&gt;
===== External personnel: =====&lt;br /&gt;
&lt;br /&gt;
Dott. Ugo Pastorino (Istituto dei Tumori - Milano)&lt;br /&gt;
&lt;br /&gt;
Dott. Elisa Calabrò (Istituto dei Tumori - Milano)&lt;br /&gt;
&lt;br /&gt;
=== Laboratory work and risk analysis ===&lt;br /&gt;
&lt;br /&gt;
Laboratory work for this project will be mainly performed at the Istituto Nazionale dei Tumori di Milano, where the acquisistion of subjects' breath, both sick and healthy will be done. &lt;br /&gt;
For this kind of work, there are not potential risks.&lt;br /&gt;
&lt;br /&gt;
== '''Part 2: project description''' ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Link to project documents and files ===&lt;br /&gt;
&lt;br /&gt;
Results obtained from this work have been presented at different conferences:&lt;br /&gt;
&lt;br /&gt;
* '''Prestigious Applications of Intelligent Systems (PAIS 2008), Patras, Greece''' &lt;br /&gt;
:The 5th Prestigious Applications of Intelligent Systems (PAIS 2008) is a sub-conference of the 18th European Conference on Artificial Intteligence (ECAI 2008) that will be held at the University of Patras, Greece, from July 21st to 25th. &lt;br /&gt;
:[[Image:PAIS.pdf|Paper-PAIS2008]] &lt;br /&gt;
&lt;br /&gt;
* '''International Joint Conference on Neural Networks (IJCNN 2007), Orlando, FL, USA'''&lt;br /&gt;
:'''Lung Cancer Identification by an Electronic Nose based on array of MOS Sensors''', Blatt Rossella, Bonarini Andrea, Calabrò Elisa, Della Torre Matteo, Matteucci Matteo, Pastorino Ugo. Proceedings of the 2007 International Joint Conference on Neural Networks (IJCNN 2007), Orlando, FL, USA: [[Image:IJCNNfinal.pdf|Paper-IJCNN2007]] &lt;br /&gt;
&lt;br /&gt;
:Short presentation of the ''Lung Cancer Identification by an Electronic Nose based on an array of MOS Sensors'' paper: [[Image:LungCancerIdentificationIJCNN2007.pdf|Presentation-IJCNN2007]]&lt;br /&gt;
&lt;br /&gt;
* '''International Workshop on Fuzzy Logic and Applications (WILF 2007), Ruta di Camogli, Genova, Italy'''&lt;br /&gt;
&lt;br /&gt;
: '''Fuzzy k-NN Lung Cancer Identification by an Electronic Nose''', Blatt Rossella, Bonarini Andrea, Calabrò Elisa, Della Torre Matteo, Matteucci Matteo, Pastorino Ugo. Proceedings of the 7th International Workshop on Fuzzy Logic and Applications, WILF 2007, Lecture Notes in Computer Science (LNAI), LNAI 4578, pages 261-268, Springer. Camogli (GE), Italy, July 2007.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Description and results of experiments ===&lt;br /&gt;
&lt;br /&gt;
=== Useful internet links ===&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Lung_Cancer_Detection_by_an_Electronic_Nose&amp;diff=5158</id>
		<title>Lung Cancer Detection by an Electronic Nose</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Lung_Cancer_Detection_by_an_Electronic_Nose&amp;diff=5158"/>
				<updated>2009-02-10T16:22:49Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: /* Photos and videos */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== '''Part 1: project profile''' ==&lt;br /&gt;
&lt;br /&gt;
=== Project name ===&lt;br /&gt;
&lt;br /&gt;
Lung Cancer Detection by an Electronic Nose&lt;br /&gt;
&lt;br /&gt;
=== Project description ===&lt;br /&gt;
&lt;br /&gt;
The electronic nose is an instrument able to detect and recognize odors, that is the volatile substances in the atmosphere or emitted by the analyzed substance. This device can react to a gas substance by providing signals that can be analyzed to classify the input. It is composed of a sensor array (MOS sensors, in our case) and a pattern classification process based on machine learning techniques. Each sensor reacts in a different way to the analyzed substance, providing multidimensional data that can be considered as a unique olfactory blueprint of the analyzed substance. In our work, we used an array composed of six Metal Oxide Semiconductor (MOS) sensors.&lt;br /&gt;
In this project, we have been using an electronic nose based on an array of six MOS sensors, to recognize the presence of lung cancer in breaths' subjects, diagnosing the disease with a non invasive and low cost method. &lt;br /&gt;
&lt;br /&gt;
During a first pilot study of our research, we have evaluated the possibility and accuracy of lung cancer diagnosis by classifying the&lt;br /&gt;
olfactory signal associated to exhalations of subjects. Results have been very satisfactory and promising: we achieved an average accuracy of 92.6%, sensitivity of 95.3% and specificity of 90.5%. In particular we analyzed the breath of 101 individuals, of which 58 control subjects, and 43 suffer from&lt;br /&gt;
different types of lung cancer (primary and not) at different stages.&lt;br /&gt;
In order to find the components able to discriminate between the two classes ‘healthy’ and ‘sick’ at best, and to reduce the dimensionality&lt;br /&gt;
of the problem, we have extracted the most significant features and projected them into a lower dimensional space using Non Parametric&lt;br /&gt;
Linear Discriminant Analysis. Finally, we have used these features as input to several supervised pattern classification algorithms, based&lt;br /&gt;
on different k-nearest neighbors (k-NN) approaches (classic, modified and Fuzzy k-NN), linear and quadratic discriminant classifiers&lt;br /&gt;
and on a feed-forward artificial neural network (ANN). The observed results have all been validated using cross-validation. &lt;br /&gt;
&lt;br /&gt;
The achieved satisfactory results pushed us to begin a new study, in roder to confirm the obtained promising results and to evaluate the ripetibility of our results. We analyzed 104 breath samples of 52 subjects, 22 healthy subjects and 30 subjects with primary lung cancer at different stages. The acquisition has been done inviting subjects to breath into a nalophan bag, later input into the electronic nose. In order to find the best statistical model able to discriminate between the two classes ‘healthy’ and ‘lung cancer’ subjects, and to reduce the dimensionality of the problem, we implemented a genetic algorithm (GA) that found the best combination of feature selection, feature projection and classifier. In particular, according to the feature selection issue, we considered methods based on exponential, sequential and randomized algorithms. Principal Component Analysis (PCA), Fisher’s Linear Discriminant Analysis (LDA) and Non Parametric Linear Discriminant Analysis (NPLDA) have been considered to project features into a lower dimensional space. Classification has been performed implementing several supervised pattern classification algorithms, based on different k-nearest neighbors (k-NN) approaches (classic, modified and fuzzy k-NN), on linear and quadratic discriminant classifiers and on a feed-forward artificial neural network (ANN). The best solution provided from the genetic algorithm, has been the projection of the found subset of features into a single component using the Fisher’s Linear Discriminant Analysis (LDA) and a classification based on the k-Nearest Neighbours (k-NN) method. Performing a Student’s t-test between all pair of considered models, no significative differences emerged, suggesting that all computational intelligence methods that we have applied provided satisfying results. The observed results, all validated using cross-validation, have been very satisfactory achieving an average accuracy of 96.2%, an average sensitivity of 93.3% and an average specificity of 100%, as well as very small confidence intervals. These results confirmed a previous pilot study where we achieved an average accuracy of 92.6%, sensitivity of 95.3% and specificity of 90.5% (on 58 control subjects and 43 lung cancer subjects). We also investigated the possibility of performing early diagnosis, building a model able to predict a sample belonging to a subject with primary lung cancer at stage I, compared to healthy subjects. Also in this analysis results have been excellent, achieving an average accuracy of 92.85%, an average sensitivity of 75.5% and an average specificity of 97.72%. &lt;br /&gt;
&lt;br /&gt;
The research demonstrate that an instrument as the electronic nose, combined with the appropriate artificial intelligence techniques, is a promising alternative to current lung cancer diagnostic techniques: the obtained predictive errors are lower than those achieved by present diagnostic methods, and the cost of the analysis, both in money, time and resources, is lower. Moreover, the instrument is completely non invasive. The introduction of this technology will lead to very important social and business effects: its low price and small dimensions allow a large scale distribution, giving the opportunity to perform non invasive, cheap, quick, and massive early diagnosis and screening.&lt;br /&gt;
&lt;br /&gt;
=== Dates ===&lt;br /&gt;
Start date: 2007/01/01&lt;br /&gt;
&lt;br /&gt;
End date: --&lt;br /&gt;
&lt;br /&gt;
=== Website(s) ===&lt;br /&gt;
&lt;br /&gt;
At the moment no website avaible&lt;br /&gt;
&lt;br /&gt;
=== People involved ===&lt;br /&gt;
&lt;br /&gt;
===== Project head(s) =====&lt;br /&gt;
&lt;br /&gt;
A. Bonarini - [[User:AndreaBonarini]]&lt;br /&gt;
&lt;br /&gt;
M. Matteucci - [[User:MatteoMatteucci]]&lt;br /&gt;
&lt;br /&gt;
===== PhD Students =====&lt;br /&gt;
&lt;br /&gt;
R. Blatt - [[User:RossellaBlatt]]&lt;br /&gt;
&lt;br /&gt;
===== Students currently working on the project =====&lt;br /&gt;
&lt;br /&gt;
Claudio Trameri - [[User:ClaudioTrameri]]&lt;br /&gt;
&lt;br /&gt;
Mauro Verdirosa - [[User:MauroVerdirosa]]&lt;br /&gt;
&lt;br /&gt;
===== Students who worked on the project in the past =====&lt;br /&gt;
&lt;br /&gt;
===== External personnel: =====&lt;br /&gt;
&lt;br /&gt;
Dott. Ugo Pastorino (Istituto dei Tumori - Milano)&lt;br /&gt;
&lt;br /&gt;
Dott. Elisa Calabrò (Istituto dei Tumori - Milano)&lt;br /&gt;
&lt;br /&gt;
=== Laboratory work and risk analysis ===&lt;br /&gt;
&lt;br /&gt;
Laboratory work for this project will be mainly performed at the Istituto Nazionale dei Tumori di Milano, where the acquisistion of subjects' breath, both sick and healthy will be done. &lt;br /&gt;
For this kind of work, there are not potential risks.&lt;br /&gt;
&lt;br /&gt;
== '''Part 2: project description''' ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Link to project documents and files ===&lt;br /&gt;
&lt;br /&gt;
Results obtained from this work have been presented at different conferences:&lt;br /&gt;
&lt;br /&gt;
* '''Prestigious Applications of Intelligent Systems (PAIS 2008), Patras, Greece''' &lt;br /&gt;
:The 5th Prestigious Applications of Intelligent Systems (PAIS 2008) is a sub-conference of the 18th European Conference on Artificial Intteligence (ECAI 2008) that will be held at the University of Patras, Greece, from July 21st to 25th. &lt;br /&gt;
:[[Image:PAIS.pdf|Paper-PAIS2008]] &lt;br /&gt;
&lt;br /&gt;
* '''International Joint Conference on Neural Networks (IJCNN 2007), Orlando, FL, USA'''&lt;br /&gt;
:'''Lung Cancer Identification by an Electronic Nose based on array of MOS Sensors''', Blatt Rossella, Bonarini Andrea, Calabrò Elisa, Della Torre Matteo, Matteucci Matteo, Pastorino Ugo. Proceedings of the 2007 International Joint Conference on Neural Networks (IJCNN 2007), Orlando, FL, USA: [[Image:IJCNNfinal.pdf|Paper-IJCNN2007]] &lt;br /&gt;
&lt;br /&gt;
:Short presentation of the ''Lung Cancer Identification by an Electronic Nose based on an array of MOS Sensors'' paper: [[Image:LungCancerIdentificationIJCNN2007.pdf|Presentation-IJCNN2007]]&lt;br /&gt;
&lt;br /&gt;
* '''International Workshop on Fuzzy Logic and Applications (WILF 2007), Ruta di Camogli, Genova, Italy'''&lt;br /&gt;
&lt;br /&gt;
: '''Fuzzy k-NN Lung Cancer Identification by an Electronic Nose''', Blatt Rossella, Bonarini Andrea, Calabrò Elisa, Della Torre Matteo, Matteucci Matteo, Pastorino Ugo. Proceedings of the 7th International Workshop on Fuzzy Logic and Applications, WILF 2007, Lecture Notes in Computer Science (LNAI), LNAI 4578, pages 261-268, Springer. Camogli (GE), Italy, July 2007.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Link to source code of the software written for the project ===&lt;br /&gt;
&lt;br /&gt;
=== Description and results of experiments ===&lt;br /&gt;
&lt;br /&gt;
=== Useful internet links ===&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Lung_Cancer_Detection_by_an_Electronic_Nose&amp;diff=5157</id>
		<title>Lung Cancer Detection by an Electronic Nose</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Lung_Cancer_Detection_by_an_Electronic_Nose&amp;diff=5157"/>
				<updated>2009-02-10T16:22:39Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: /* Description and results of experiments */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== '''Part 1: project profile''' ==&lt;br /&gt;
&lt;br /&gt;
=== Project name ===&lt;br /&gt;
&lt;br /&gt;
Lung Cancer Detection by an Electronic Nose&lt;br /&gt;
&lt;br /&gt;
=== Project description ===&lt;br /&gt;
&lt;br /&gt;
The electronic nose is an instrument able to detect and recognize odors, that is the volatile substances in the atmosphere or emitted by the analyzed substance. This device can react to a gas substance by providing signals that can be analyzed to classify the input. It is composed of a sensor array (MOS sensors, in our case) and a pattern classification process based on machine learning techniques. Each sensor reacts in a different way to the analyzed substance, providing multidimensional data that can be considered as a unique olfactory blueprint of the analyzed substance. In our work, we used an array composed of six Metal Oxide Semiconductor (MOS) sensors.&lt;br /&gt;
In this project, we have been using an electronic nose based on an array of six MOS sensors, to recognize the presence of lung cancer in breaths' subjects, diagnosing the disease with a non invasive and low cost method. &lt;br /&gt;
&lt;br /&gt;
During a first pilot study of our research, we have evaluated the possibility and accuracy of lung cancer diagnosis by classifying the&lt;br /&gt;
olfactory signal associated to exhalations of subjects. Results have been very satisfactory and promising: we achieved an average accuracy of 92.6%, sensitivity of 95.3% and specificity of 90.5%. In particular we analyzed the breath of 101 individuals, of which 58 control subjects, and 43 suffer from&lt;br /&gt;
different types of lung cancer (primary and not) at different stages.&lt;br /&gt;
In order to find the components able to discriminate between the two classes ‘healthy’ and ‘sick’ at best, and to reduce the dimensionality&lt;br /&gt;
of the problem, we have extracted the most significant features and projected them into a lower dimensional space using Non Parametric&lt;br /&gt;
Linear Discriminant Analysis. Finally, we have used these features as input to several supervised pattern classification algorithms, based&lt;br /&gt;
on different k-nearest neighbors (k-NN) approaches (classic, modified and Fuzzy k-NN), linear and quadratic discriminant classifiers&lt;br /&gt;
and on a feed-forward artificial neural network (ANN). The observed results have all been validated using cross-validation. &lt;br /&gt;
&lt;br /&gt;
The achieved satisfactory results pushed us to begin a new study, in roder to confirm the obtained promising results and to evaluate the ripetibility of our results. We analyzed 104 breath samples of 52 subjects, 22 healthy subjects and 30 subjects with primary lung cancer at different stages. The acquisition has been done inviting subjects to breath into a nalophan bag, later input into the electronic nose. In order to find the best statistical model able to discriminate between the two classes ‘healthy’ and ‘lung cancer’ subjects, and to reduce the dimensionality of the problem, we implemented a genetic algorithm (GA) that found the best combination of feature selection, feature projection and classifier. In particular, according to the feature selection issue, we considered methods based on exponential, sequential and randomized algorithms. Principal Component Analysis (PCA), Fisher’s Linear Discriminant Analysis (LDA) and Non Parametric Linear Discriminant Analysis (NPLDA) have been considered to project features into a lower dimensional space. Classification has been performed implementing several supervised pattern classification algorithms, based on different k-nearest neighbors (k-NN) approaches (classic, modified and fuzzy k-NN), on linear and quadratic discriminant classifiers and on a feed-forward artificial neural network (ANN). The best solution provided from the genetic algorithm, has been the projection of the found subset of features into a single component using the Fisher’s Linear Discriminant Analysis (LDA) and a classification based on the k-Nearest Neighbours (k-NN) method. Performing a Student’s t-test between all pair of considered models, no significative differences emerged, suggesting that all computational intelligence methods that we have applied provided satisfying results. The observed results, all validated using cross-validation, have been very satisfactory achieving an average accuracy of 96.2%, an average sensitivity of 93.3% and an average specificity of 100%, as well as very small confidence intervals. These results confirmed a previous pilot study where we achieved an average accuracy of 92.6%, sensitivity of 95.3% and specificity of 90.5% (on 58 control subjects and 43 lung cancer subjects). We also investigated the possibility of performing early diagnosis, building a model able to predict a sample belonging to a subject with primary lung cancer at stage I, compared to healthy subjects. Also in this analysis results have been excellent, achieving an average accuracy of 92.85%, an average sensitivity of 75.5% and an average specificity of 97.72%. &lt;br /&gt;
&lt;br /&gt;
The research demonstrate that an instrument as the electronic nose, combined with the appropriate artificial intelligence techniques, is a promising alternative to current lung cancer diagnostic techniques: the obtained predictive errors are lower than those achieved by present diagnostic methods, and the cost of the analysis, both in money, time and resources, is lower. Moreover, the instrument is completely non invasive. The introduction of this technology will lead to very important social and business effects: its low price and small dimensions allow a large scale distribution, giving the opportunity to perform non invasive, cheap, quick, and massive early diagnosis and screening.&lt;br /&gt;
&lt;br /&gt;
=== Dates ===&lt;br /&gt;
Start date: 2007/01/01&lt;br /&gt;
&lt;br /&gt;
End date: --&lt;br /&gt;
&lt;br /&gt;
=== Website(s) ===&lt;br /&gt;
&lt;br /&gt;
At the moment no website avaible&lt;br /&gt;
&lt;br /&gt;
=== People involved ===&lt;br /&gt;
&lt;br /&gt;
===== Project head(s) =====&lt;br /&gt;
&lt;br /&gt;
A. Bonarini - [[User:AndreaBonarini]]&lt;br /&gt;
&lt;br /&gt;
M. Matteucci - [[User:MatteoMatteucci]]&lt;br /&gt;
&lt;br /&gt;
===== PhD Students =====&lt;br /&gt;
&lt;br /&gt;
R. Blatt - [[User:RossellaBlatt]]&lt;br /&gt;
&lt;br /&gt;
===== Students currently working on the project =====&lt;br /&gt;
&lt;br /&gt;
Claudio Trameri - [[User:ClaudioTrameri]]&lt;br /&gt;
&lt;br /&gt;
Mauro Verdirosa - [[User:MauroVerdirosa]]&lt;br /&gt;
&lt;br /&gt;
===== Students who worked on the project in the past =====&lt;br /&gt;
&lt;br /&gt;
===== External personnel: =====&lt;br /&gt;
&lt;br /&gt;
Dott. Ugo Pastorino (Istituto dei Tumori - Milano)&lt;br /&gt;
&lt;br /&gt;
Dott. Elisa Calabrò (Istituto dei Tumori - Milano)&lt;br /&gt;
&lt;br /&gt;
=== Laboratory work and risk analysis ===&lt;br /&gt;
&lt;br /&gt;
Laboratory work for this project will be mainly performed at the Istituto Nazionale dei Tumori di Milano, where the acquisistion of subjects' breath, both sick and healthy will be done. &lt;br /&gt;
For this kind of work, there are not potential risks.&lt;br /&gt;
&lt;br /&gt;
== '''Part 2: project description''' ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Link to project documents and files ===&lt;br /&gt;
&lt;br /&gt;
Results obtained from this work have been presented at different conferences:&lt;br /&gt;
&lt;br /&gt;
* '''Prestigious Applications of Intelligent Systems (PAIS 2008), Patras, Greece''' &lt;br /&gt;
:The 5th Prestigious Applications of Intelligent Systems (PAIS 2008) is a sub-conference of the 18th European Conference on Artificial Intteligence (ECAI 2008) that will be held at the University of Patras, Greece, from July 21st to 25th. &lt;br /&gt;
:[[Image:PAIS.pdf|Paper-PAIS2008]] &lt;br /&gt;
&lt;br /&gt;
* '''International Joint Conference on Neural Networks (IJCNN 2007), Orlando, FL, USA'''&lt;br /&gt;
:'''Lung Cancer Identification by an Electronic Nose based on array of MOS Sensors''', Blatt Rossella, Bonarini Andrea, Calabrò Elisa, Della Torre Matteo, Matteucci Matteo, Pastorino Ugo. Proceedings of the 2007 International Joint Conference on Neural Networks (IJCNN 2007), Orlando, FL, USA: [[Image:IJCNNfinal.pdf|Paper-IJCNN2007]] &lt;br /&gt;
&lt;br /&gt;
:Short presentation of the ''Lung Cancer Identification by an Electronic Nose based on an array of MOS Sensors'' paper: [[Image:LungCancerIdentificationIJCNN2007.pdf|Presentation-IJCNN2007]]&lt;br /&gt;
&lt;br /&gt;
* '''International Workshop on Fuzzy Logic and Applications (WILF 2007), Ruta di Camogli, Genova, Italy'''&lt;br /&gt;
&lt;br /&gt;
: '''Fuzzy k-NN Lung Cancer Identification by an Electronic Nose''', Blatt Rossella, Bonarini Andrea, Calabrò Elisa, Della Torre Matteo, Matteucci Matteo, Pastorino Ugo. Proceedings of the 7th International Workshop on Fuzzy Logic and Applications, WILF 2007, Lecture Notes in Computer Science (LNAI), LNAI 4578, pages 261-268, Springer. Camogli (GE), Italy, July 2007.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Photos and videos ===&lt;br /&gt;
&lt;br /&gt;
=== Link to source code of the software written for the project ===&lt;br /&gt;
&lt;br /&gt;
=== Description and results of experiments ===&lt;br /&gt;
&lt;br /&gt;
=== Useful internet links ===&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Lung_Cancer_Detection_by_an_Electronic_Nose&amp;diff=5156</id>
		<title>Lung Cancer Detection by an Electronic Nose</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Lung_Cancer_Detection_by_an_Electronic_Nose&amp;diff=5156"/>
				<updated>2009-02-10T16:20:50Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== '''Part 1: project profile''' ==&lt;br /&gt;
&lt;br /&gt;
=== Project name ===&lt;br /&gt;
&lt;br /&gt;
Lung Cancer Detection by an Electronic Nose&lt;br /&gt;
&lt;br /&gt;
=== Project description ===&lt;br /&gt;
&lt;br /&gt;
The electronic nose is an instrument able to detect and recognize odors, that is the volatile substances in the atmosphere or emitted by the analyzed substance. This device can react to a gas substance by providing signals that can be analyzed to classify the input. It is composed of a sensor array (MOS sensors, in our case) and a pattern classification process based on machine learning techniques. Each sensor reacts in a different way to the analyzed substance, providing multidimensional data that can be considered as a unique olfactory blueprint of the analyzed substance. In our work, we used an array composed of six Metal Oxide Semiconductor (MOS) sensors.&lt;br /&gt;
In this project, we have been using an electronic nose based on an array of six MOS sensors, to recognize the presence of lung cancer in breaths' subjects, diagnosing the disease with a non invasive and low cost method. &lt;br /&gt;
&lt;br /&gt;
During a first pilot study of our research, we have evaluated the possibility and accuracy of lung cancer diagnosis by classifying the&lt;br /&gt;
olfactory signal associated to exhalations of subjects. Results have been very satisfactory and promising: we achieved an average accuracy of 92.6%, sensitivity of 95.3% and specificity of 90.5%. In particular we analyzed the breath of 101 individuals, of which 58 control subjects, and 43 suffer from&lt;br /&gt;
different types of lung cancer (primary and not) at different stages.&lt;br /&gt;
In order to find the components able to discriminate between the two classes ‘healthy’ and ‘sick’ at best, and to reduce the dimensionality&lt;br /&gt;
of the problem, we have extracted the most significant features and projected them into a lower dimensional space using Non Parametric&lt;br /&gt;
Linear Discriminant Analysis. Finally, we have used these features as input to several supervised pattern classification algorithms, based&lt;br /&gt;
on different k-nearest neighbors (k-NN) approaches (classic, modified and Fuzzy k-NN), linear and quadratic discriminant classifiers&lt;br /&gt;
and on a feed-forward artificial neural network (ANN). The observed results have all been validated using cross-validation. &lt;br /&gt;
&lt;br /&gt;
The achieved satisfactory results pushed us to begin a new study, in roder to confirm the obtained promising results and to evaluate the ripetibility of our results. We analyzed 104 breath samples of 52 subjects, 22 healthy subjects and 30 subjects with primary lung cancer at different stages. The acquisition has been done inviting subjects to breath into a nalophan bag, later input into the electronic nose. In order to find the best statistical model able to discriminate between the two classes ‘healthy’ and ‘lung cancer’ subjects, and to reduce the dimensionality of the problem, we implemented a genetic algorithm (GA) that found the best combination of feature selection, feature projection and classifier. In particular, according to the feature selection issue, we considered methods based on exponential, sequential and randomized algorithms. Principal Component Analysis (PCA), Fisher’s Linear Discriminant Analysis (LDA) and Non Parametric Linear Discriminant Analysis (NPLDA) have been considered to project features into a lower dimensional space. Classification has been performed implementing several supervised pattern classification algorithms, based on different k-nearest neighbors (k-NN) approaches (classic, modified and fuzzy k-NN), on linear and quadratic discriminant classifiers and on a feed-forward artificial neural network (ANN). The best solution provided from the genetic algorithm, has been the projection of the found subset of features into a single component using the Fisher’s Linear Discriminant Analysis (LDA) and a classification based on the k-Nearest Neighbours (k-NN) method. Performing a Student’s t-test between all pair of considered models, no significative differences emerged, suggesting that all computational intelligence methods that we have applied provided satisfying results. The observed results, all validated using cross-validation, have been very satisfactory achieving an average accuracy of 96.2%, an average sensitivity of 93.3% and an average specificity of 100%, as well as very small confidence intervals. These results confirmed a previous pilot study where we achieved an average accuracy of 92.6%, sensitivity of 95.3% and specificity of 90.5% (on 58 control subjects and 43 lung cancer subjects). We also investigated the possibility of performing early diagnosis, building a model able to predict a sample belonging to a subject with primary lung cancer at stage I, compared to healthy subjects. Also in this analysis results have been excellent, achieving an average accuracy of 92.85%, an average sensitivity of 75.5% and an average specificity of 97.72%. &lt;br /&gt;
&lt;br /&gt;
The research demonstrate that an instrument as the electronic nose, combined with the appropriate artificial intelligence techniques, is a promising alternative to current lung cancer diagnostic techniques: the obtained predictive errors are lower than those achieved by present diagnostic methods, and the cost of the analysis, both in money, time and resources, is lower. Moreover, the instrument is completely non invasive. The introduction of this technology will lead to very important social and business effects: its low price and small dimensions allow a large scale distribution, giving the opportunity to perform non invasive, cheap, quick, and massive early diagnosis and screening.&lt;br /&gt;
&lt;br /&gt;
=== Dates ===&lt;br /&gt;
Start date: 2007/01/01&lt;br /&gt;
&lt;br /&gt;
End date: --&lt;br /&gt;
&lt;br /&gt;
=== Website(s) ===&lt;br /&gt;
&lt;br /&gt;
At the moment no website avaible&lt;br /&gt;
&lt;br /&gt;
=== People involved ===&lt;br /&gt;
&lt;br /&gt;
===== Project head(s) =====&lt;br /&gt;
&lt;br /&gt;
A. Bonarini - [[User:AndreaBonarini]]&lt;br /&gt;
&lt;br /&gt;
M. Matteucci - [[User:MatteoMatteucci]]&lt;br /&gt;
&lt;br /&gt;
===== PhD Students =====&lt;br /&gt;
&lt;br /&gt;
R. Blatt - [[User:RossellaBlatt]]&lt;br /&gt;
&lt;br /&gt;
===== Students currently working on the project =====&lt;br /&gt;
&lt;br /&gt;
Claudio Trameri - [[User:ClaudioTrameri]]&lt;br /&gt;
&lt;br /&gt;
Mauro Verdirosa - [[User:MauroVerdirosa]]&lt;br /&gt;
&lt;br /&gt;
===== Students who worked on the project in the past =====&lt;br /&gt;
&lt;br /&gt;
===== External personnel: =====&lt;br /&gt;
&lt;br /&gt;
Dott. Ugo Pastorino (Istituto dei Tumori - Milano)&lt;br /&gt;
&lt;br /&gt;
Dott. Elisa Calabrò (Istituto dei Tumori - Milano)&lt;br /&gt;
&lt;br /&gt;
=== Laboratory work and risk analysis ===&lt;br /&gt;
&lt;br /&gt;
Laboratory work for this project will be mainly performed at the Istituto Nazionale dei Tumori di Milano, where the acquisistion of subjects' breath, both sick and healthy will be done. &lt;br /&gt;
For this kind of work, there are not potential risks.&lt;br /&gt;
&lt;br /&gt;
== '''Part 2: project description''' ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Link to project documents and files ===&lt;br /&gt;
&lt;br /&gt;
Results obtained from this work have been presented at different conferences:&lt;br /&gt;
&lt;br /&gt;
* '''Prestigious Applications of Intelligent Systems (PAIS 2008), Patras, Greece''' &lt;br /&gt;
:The 5th Prestigious Applications of Intelligent Systems (PAIS 2008) is a sub-conference of the 18th European Conference on Artificial Intteligence (ECAI 2008) that will be held at the University of Patras, Greece, from July 21st to 25th. &lt;br /&gt;
:[[Image:PAIS.pdf|Paper-PAIS2008]] &lt;br /&gt;
&lt;br /&gt;
* '''International Joint Conference on Neural Networks (IJCNN 2007), Orlando, FL, USA'''&lt;br /&gt;
:'''Lung Cancer Identification by an Electronic Nose based on array of MOS Sensors''', Blatt Rossella, Bonarini Andrea, Calabrò Elisa, Della Torre Matteo, Matteucci Matteo, Pastorino Ugo. Proceedings of the 2007 International Joint Conference on Neural Networks (IJCNN 2007), Orlando, FL, USA: [[Image:IJCNNfinal.pdf|Paper-IJCNN2007]] &lt;br /&gt;
&lt;br /&gt;
:Short presentation of the ''Lung Cancer Identification by an Electronic Nose based on an array of MOS Sensors'' paper: [[Image:LungCancerIdentificationIJCNN2007.pdf|Presentation-IJCNN2007]]&lt;br /&gt;
&lt;br /&gt;
* '''International Workshop on Fuzzy Logic and Applications (WILF 2007), Ruta di Camogli, Genova, Italy'''&lt;br /&gt;
&lt;br /&gt;
: '''Fuzzy k-NN Lung Cancer Identification by an Electronic Nose''', Blatt Rossella, Bonarini Andrea, Calabrò Elisa, Della Torre Matteo, Matteucci Matteo, Pastorino Ugo. Proceedings of the 7th International Workshop on Fuzzy Logic and Applications, WILF 2007, Lecture Notes in Computer Science (LNAI), LNAI 4578, pages 261-268, Springer. Camogli (GE), Italy, July 2007.&lt;br /&gt;
&lt;br /&gt;
=== Description and results of experiments ===&lt;br /&gt;
&lt;br /&gt;
=== Photos and videos ===&lt;br /&gt;
&lt;br /&gt;
=== Link to source code of the software written for the project ===&lt;br /&gt;
&lt;br /&gt;
=== Description and results of experiments ===&lt;br /&gt;
&lt;br /&gt;
=== Useful internet links ===&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Lung_Cancer_Detection_by_an_Electronic_Nose&amp;diff=5155</id>
		<title>Lung Cancer Detection by an Electronic Nose</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Lung_Cancer_Detection_by_an_Electronic_Nose&amp;diff=5155"/>
				<updated>2009-02-10T16:20:38Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: /* Preliminary and sketches */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== '''Part 1: project profile''' ==&lt;br /&gt;
&lt;br /&gt;
=== Project name ===&lt;br /&gt;
&lt;br /&gt;
Lung Cancer Detection by an Electronic Nose&lt;br /&gt;
&lt;br /&gt;
=== Project description ===&lt;br /&gt;
&lt;br /&gt;
The electronic nose is an instrument able to detect and recognize odors, that is the volatile substances in the atmosphere or emitted by the analyzed substance. This device can react to a gas substance by providing signals that can be analyzed to classify the input. It is composed of a sensor array (MOS sensors, in our case) and a pattern classification process based on machine learning techniques. Each sensor reacts in a different way to the analyzed substance, providing multidimensional data that can be considered as a unique olfactory blueprint of the analyzed substance. In our work, we used an array composed of six Metal Oxide Semiconductor (MOS) sensors.&lt;br /&gt;
In this project, we have been using an electronic nose based on an array of six MOS sensors, to recognize the presence of lung cancer in breaths' subjects, diagnosing the disease with a non invasive and low cost method. &lt;br /&gt;
&lt;br /&gt;
During a first pilot study of our research, we have evaluated the possibility and accuracy of lung cancer diagnosis by classifying the&lt;br /&gt;
olfactory signal associated to exhalations of subjects. Results have been very satisfactory and promising: we achieved an average accuracy of 92.6%, sensitivity of 95.3% and specificity of 90.5%. In particular we analyzed the breath of 101 individuals, of which 58 control subjects, and 43 suffer from&lt;br /&gt;
different types of lung cancer (primary and not) at different stages.&lt;br /&gt;
In order to find the components able to discriminate between the two classes ‘healthy’ and ‘sick’ at best, and to reduce the dimensionality&lt;br /&gt;
of the problem, we have extracted the most significant features and projected them into a lower dimensional space using Non Parametric&lt;br /&gt;
Linear Discriminant Analysis. Finally, we have used these features as input to several supervised pattern classification algorithms, based&lt;br /&gt;
on different k-nearest neighbors (k-NN) approaches (classic, modified and Fuzzy k-NN), linear and quadratic discriminant classifiers&lt;br /&gt;
and on a feed-forward artificial neural network (ANN). The observed results have all been validated using cross-validation. &lt;br /&gt;
&lt;br /&gt;
The achieved satisfactory results pushed us to begin a new study, in roder to confirm the obtained promising results and to evaluate the ripetibility of our results. We analyzed 104 breath samples of 52 subjects, 22 healthy subjects and 30 subjects with primary lung cancer at different stages. The acquisition has been done inviting subjects to breath into a nalophan bag, later input into the electronic nose. In order to find the best statistical model able to discriminate between the two classes ‘healthy’ and ‘lung cancer’ subjects, and to reduce the dimensionality of the problem, we implemented a genetic algorithm (GA) that found the best combination of feature selection, feature projection and classifier. In particular, according to the feature selection issue, we considered methods based on exponential, sequential and randomized algorithms. Principal Component Analysis (PCA), Fisher’s Linear Discriminant Analysis (LDA) and Non Parametric Linear Discriminant Analysis (NPLDA) have been considered to project features into a lower dimensional space. Classification has been performed implementing several supervised pattern classification algorithms, based on different k-nearest neighbors (k-NN) approaches (classic, modified and fuzzy k-NN), on linear and quadratic discriminant classifiers and on a feed-forward artificial neural network (ANN). The best solution provided from the genetic algorithm, has been the projection of the found subset of features into a single component using the Fisher’s Linear Discriminant Analysis (LDA) and a classification based on the k-Nearest Neighbours (k-NN) method. Performing a Student’s t-test between all pair of considered models, no significative differences emerged, suggesting that all computational intelligence methods that we have applied provided satisfying results. The observed results, all validated using cross-validation, have been very satisfactory achieving an average accuracy of 96.2%, an average sensitivity of 93.3% and an average specificity of 100%, as well as very small confidence intervals. These results confirmed a previous pilot study where we achieved an average accuracy of 92.6%, sensitivity of 95.3% and specificity of 90.5% (on 58 control subjects and 43 lung cancer subjects). We also investigated the possibility of performing early diagnosis, building a model able to predict a sample belonging to a subject with primary lung cancer at stage I, compared to healthy subjects. Also in this analysis results have been excellent, achieving an average accuracy of 92.85%, an average sensitivity of 75.5% and an average specificity of 97.72%. &lt;br /&gt;
&lt;br /&gt;
The research demonstrate that an instrument as the electronic nose, combined with the appropriate artificial intelligence techniques, is a promising alternative to current lung cancer diagnostic techniques: the obtained predictive errors are lower than those achieved by present diagnostic methods, and the cost of the analysis, both in money, time and resources, is lower. Moreover, the instrument is completely non invasive. The introduction of this technology will lead to very important social and business effects: its low price and small dimensions allow a large scale distribution, giving the opportunity to perform non invasive, cheap, quick, and massive early diagnosis and screening.&lt;br /&gt;
&lt;br /&gt;
=== Dates ===&lt;br /&gt;
Start date: 2007/01/01&lt;br /&gt;
&lt;br /&gt;
End date: --&lt;br /&gt;
&lt;br /&gt;
=== Website(s) ===&lt;br /&gt;
&lt;br /&gt;
At the moment no website avaible&lt;br /&gt;
&lt;br /&gt;
=== People involved ===&lt;br /&gt;
&lt;br /&gt;
===== Project head(s) =====&lt;br /&gt;
&lt;br /&gt;
A. Bonarini - [[User:AndreaBonarini]]&lt;br /&gt;
&lt;br /&gt;
M. Matteucci - [[User:MatteoMatteucci]]&lt;br /&gt;
&lt;br /&gt;
===== PhD Students =====&lt;br /&gt;
&lt;br /&gt;
R. Blatt - [[User:RossellaBlatt]]&lt;br /&gt;
&lt;br /&gt;
===== Students currently working on the project =====&lt;br /&gt;
&lt;br /&gt;
Claudio Trameri - [[User:ClaudioTrameri]]&lt;br /&gt;
&lt;br /&gt;
Mauro Verdirosa - [[User:MauroVerdirosa]]&lt;br /&gt;
&lt;br /&gt;
===== Students who worked on the project in the past =====&lt;br /&gt;
&lt;br /&gt;
===== External personnel: =====&lt;br /&gt;
&lt;br /&gt;
Dott. Ugo Pastorino (Istituto dei Tumori - Milano)&lt;br /&gt;
&lt;br /&gt;
Dott. Elisa Calabrò (Istituto dei Tumori - Milano)&lt;br /&gt;
&lt;br /&gt;
=== Laboratory work and risk analysis ===&lt;br /&gt;
&lt;br /&gt;
Laboratory work for this project will be mainly performed at the Istituto Nazionale dei Tumori di Milano, where the acquisistion of subjects' breath, both sick and healthy will be done. &lt;br /&gt;
For this kind of work, there are not potential risks.&lt;br /&gt;
&lt;br /&gt;
== '''Part 2: project description''' ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Design notes and guidelines ===&lt;br /&gt;
&lt;br /&gt;
=== Link to project documents and files ===&lt;br /&gt;
&lt;br /&gt;
Results obtained from this work have been presented at different conferences:&lt;br /&gt;
&lt;br /&gt;
* '''Prestigious Applications of Intelligent Systems (PAIS 2008), Patras, Greece''' &lt;br /&gt;
:The 5th Prestigious Applications of Intelligent Systems (PAIS 2008) is a sub-conference of the 18th European Conference on Artificial Intteligence (ECAI 2008) that will be held at the University of Patras, Greece, from July 21st to 25th. &lt;br /&gt;
:[[Image:PAIS.pdf|Paper-PAIS2008]] &lt;br /&gt;
&lt;br /&gt;
* '''International Joint Conference on Neural Networks (IJCNN 2007), Orlando, FL, USA'''&lt;br /&gt;
:'''Lung Cancer Identification by an Electronic Nose based on array of MOS Sensors''', Blatt Rossella, Bonarini Andrea, Calabrò Elisa, Della Torre Matteo, Matteucci Matteo, Pastorino Ugo. Proceedings of the 2007 International Joint Conference on Neural Networks (IJCNN 2007), Orlando, FL, USA: [[Image:IJCNNfinal.pdf|Paper-IJCNN2007]] &lt;br /&gt;
&lt;br /&gt;
:Short presentation of the ''Lung Cancer Identification by an Electronic Nose based on an array of MOS Sensors'' paper: [[Image:LungCancerIdentificationIJCNN2007.pdf|Presentation-IJCNN2007]]&lt;br /&gt;
&lt;br /&gt;
* '''International Workshop on Fuzzy Logic and Applications (WILF 2007), Ruta di Camogli, Genova, Italy'''&lt;br /&gt;
&lt;br /&gt;
: '''Fuzzy k-NN Lung Cancer Identification by an Electronic Nose''', Blatt Rossella, Bonarini Andrea, Calabrò Elisa, Della Torre Matteo, Matteucci Matteo, Pastorino Ugo. Proceedings of the 7th International Workshop on Fuzzy Logic and Applications, WILF 2007, Lecture Notes in Computer Science (LNAI), LNAI 4578, pages 261-268, Springer. Camogli (GE), Italy, July 2007.&lt;br /&gt;
&lt;br /&gt;
=== Description and results of experiments ===&lt;br /&gt;
&lt;br /&gt;
=== Photos and videos ===&lt;br /&gt;
&lt;br /&gt;
=== Link to source code of the software written for the project ===&lt;br /&gt;
&lt;br /&gt;
=== Description and results of experiments ===&lt;br /&gt;
&lt;br /&gt;
=== Useful internet links ===&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Lung_Cancer_Detection_by_an_Electronic_Nose&amp;diff=5154</id>
		<title>Lung Cancer Detection by an Electronic Nose</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Lung_Cancer_Detection_by_an_Electronic_Nose&amp;diff=5154"/>
				<updated>2009-02-10T16:20:28Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: /* State of the art */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== '''Part 1: project profile''' ==&lt;br /&gt;
&lt;br /&gt;
=== Project name ===&lt;br /&gt;
&lt;br /&gt;
Lung Cancer Detection by an Electronic Nose&lt;br /&gt;
&lt;br /&gt;
=== Project description ===&lt;br /&gt;
&lt;br /&gt;
The electronic nose is an instrument able to detect and recognize odors, that is the volatile substances in the atmosphere or emitted by the analyzed substance. This device can react to a gas substance by providing signals that can be analyzed to classify the input. It is composed of a sensor array (MOS sensors, in our case) and a pattern classification process based on machine learning techniques. Each sensor reacts in a different way to the analyzed substance, providing multidimensional data that can be considered as a unique olfactory blueprint of the analyzed substance. In our work, we used an array composed of six Metal Oxide Semiconductor (MOS) sensors.&lt;br /&gt;
In this project, we have been using an electronic nose based on an array of six MOS sensors, to recognize the presence of lung cancer in breaths' subjects, diagnosing the disease with a non invasive and low cost method. &lt;br /&gt;
&lt;br /&gt;
During a first pilot study of our research, we have evaluated the possibility and accuracy of lung cancer diagnosis by classifying the&lt;br /&gt;
olfactory signal associated to exhalations of subjects. Results have been very satisfactory and promising: we achieved an average accuracy of 92.6%, sensitivity of 95.3% and specificity of 90.5%. In particular we analyzed the breath of 101 individuals, of which 58 control subjects, and 43 suffer from&lt;br /&gt;
different types of lung cancer (primary and not) at different stages.&lt;br /&gt;
In order to find the components able to discriminate between the two classes ‘healthy’ and ‘sick’ at best, and to reduce the dimensionality&lt;br /&gt;
of the problem, we have extracted the most significant features and projected them into a lower dimensional space using Non Parametric&lt;br /&gt;
Linear Discriminant Analysis. Finally, we have used these features as input to several supervised pattern classification algorithms, based&lt;br /&gt;
on different k-nearest neighbors (k-NN) approaches (classic, modified and Fuzzy k-NN), linear and quadratic discriminant classifiers&lt;br /&gt;
and on a feed-forward artificial neural network (ANN). The observed results have all been validated using cross-validation. &lt;br /&gt;
&lt;br /&gt;
The achieved satisfactory results pushed us to begin a new study, in roder to confirm the obtained promising results and to evaluate the ripetibility of our results. We analyzed 104 breath samples of 52 subjects, 22 healthy subjects and 30 subjects with primary lung cancer at different stages. The acquisition has been done inviting subjects to breath into a nalophan bag, later input into the electronic nose. In order to find the best statistical model able to discriminate between the two classes ‘healthy’ and ‘lung cancer’ subjects, and to reduce the dimensionality of the problem, we implemented a genetic algorithm (GA) that found the best combination of feature selection, feature projection and classifier. In particular, according to the feature selection issue, we considered methods based on exponential, sequential and randomized algorithms. Principal Component Analysis (PCA), Fisher’s Linear Discriminant Analysis (LDA) and Non Parametric Linear Discriminant Analysis (NPLDA) have been considered to project features into a lower dimensional space. Classification has been performed implementing several supervised pattern classification algorithms, based on different k-nearest neighbors (k-NN) approaches (classic, modified and fuzzy k-NN), on linear and quadratic discriminant classifiers and on a feed-forward artificial neural network (ANN). The best solution provided from the genetic algorithm, has been the projection of the found subset of features into a single component using the Fisher’s Linear Discriminant Analysis (LDA) and a classification based on the k-Nearest Neighbours (k-NN) method. Performing a Student’s t-test between all pair of considered models, no significative differences emerged, suggesting that all computational intelligence methods that we have applied provided satisfying results. The observed results, all validated using cross-validation, have been very satisfactory achieving an average accuracy of 96.2%, an average sensitivity of 93.3% and an average specificity of 100%, as well as very small confidence intervals. These results confirmed a previous pilot study where we achieved an average accuracy of 92.6%, sensitivity of 95.3% and specificity of 90.5% (on 58 control subjects and 43 lung cancer subjects). We also investigated the possibility of performing early diagnosis, building a model able to predict a sample belonging to a subject with primary lung cancer at stage I, compared to healthy subjects. Also in this analysis results have been excellent, achieving an average accuracy of 92.85%, an average sensitivity of 75.5% and an average specificity of 97.72%. &lt;br /&gt;
&lt;br /&gt;
The research demonstrate that an instrument as the electronic nose, combined with the appropriate artificial intelligence techniques, is a promising alternative to current lung cancer diagnostic techniques: the obtained predictive errors are lower than those achieved by present diagnostic methods, and the cost of the analysis, both in money, time and resources, is lower. Moreover, the instrument is completely non invasive. The introduction of this technology will lead to very important social and business effects: its low price and small dimensions allow a large scale distribution, giving the opportunity to perform non invasive, cheap, quick, and massive early diagnosis and screening.&lt;br /&gt;
&lt;br /&gt;
=== Dates ===&lt;br /&gt;
Start date: 2007/01/01&lt;br /&gt;
&lt;br /&gt;
End date: --&lt;br /&gt;
&lt;br /&gt;
=== Website(s) ===&lt;br /&gt;
&lt;br /&gt;
At the moment no website avaible&lt;br /&gt;
&lt;br /&gt;
=== People involved ===&lt;br /&gt;
&lt;br /&gt;
===== Project head(s) =====&lt;br /&gt;
&lt;br /&gt;
A. Bonarini - [[User:AndreaBonarini]]&lt;br /&gt;
&lt;br /&gt;
M. Matteucci - [[User:MatteoMatteucci]]&lt;br /&gt;
&lt;br /&gt;
===== PhD Students =====&lt;br /&gt;
&lt;br /&gt;
R. Blatt - [[User:RossellaBlatt]]&lt;br /&gt;
&lt;br /&gt;
===== Students currently working on the project =====&lt;br /&gt;
&lt;br /&gt;
Claudio Trameri - [[User:ClaudioTrameri]]&lt;br /&gt;
&lt;br /&gt;
Mauro Verdirosa - [[User:MauroVerdirosa]]&lt;br /&gt;
&lt;br /&gt;
===== Students who worked on the project in the past =====&lt;br /&gt;
&lt;br /&gt;
===== External personnel: =====&lt;br /&gt;
&lt;br /&gt;
Dott. Ugo Pastorino (Istituto dei Tumori - Milano)&lt;br /&gt;
&lt;br /&gt;
Dott. Elisa Calabrò (Istituto dei Tumori - Milano)&lt;br /&gt;
&lt;br /&gt;
=== Laboratory work and risk analysis ===&lt;br /&gt;
&lt;br /&gt;
Laboratory work for this project will be mainly performed at the Istituto Nazionale dei Tumori di Milano, where the acquisistion of subjects' breath, both sick and healthy will be done. &lt;br /&gt;
For this kind of work, there are not potential risks.&lt;br /&gt;
&lt;br /&gt;
== '''Part 2: project description''' ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Preliminary and sketches ===&lt;br /&gt;
&lt;br /&gt;
=== Design notes and guidelines ===&lt;br /&gt;
&lt;br /&gt;
=== Link to project documents and files ===&lt;br /&gt;
&lt;br /&gt;
Results obtained from this work have been presented at different conferences:&lt;br /&gt;
&lt;br /&gt;
* '''Prestigious Applications of Intelligent Systems (PAIS 2008), Patras, Greece''' &lt;br /&gt;
:The 5th Prestigious Applications of Intelligent Systems (PAIS 2008) is a sub-conference of the 18th European Conference on Artificial Intteligence (ECAI 2008) that will be held at the University of Patras, Greece, from July 21st to 25th. &lt;br /&gt;
:[[Image:PAIS.pdf|Paper-PAIS2008]] &lt;br /&gt;
&lt;br /&gt;
* '''International Joint Conference on Neural Networks (IJCNN 2007), Orlando, FL, USA'''&lt;br /&gt;
:'''Lung Cancer Identification by an Electronic Nose based on array of MOS Sensors''', Blatt Rossella, Bonarini Andrea, Calabrò Elisa, Della Torre Matteo, Matteucci Matteo, Pastorino Ugo. Proceedings of the 2007 International Joint Conference on Neural Networks (IJCNN 2007), Orlando, FL, USA: [[Image:IJCNNfinal.pdf|Paper-IJCNN2007]] &lt;br /&gt;
&lt;br /&gt;
:Short presentation of the ''Lung Cancer Identification by an Electronic Nose based on an array of MOS Sensors'' paper: [[Image:LungCancerIdentificationIJCNN2007.pdf|Presentation-IJCNN2007]]&lt;br /&gt;
&lt;br /&gt;
* '''International Workshop on Fuzzy Logic and Applications (WILF 2007), Ruta di Camogli, Genova, Italy'''&lt;br /&gt;
&lt;br /&gt;
: '''Fuzzy k-NN Lung Cancer Identification by an Electronic Nose''', Blatt Rossella, Bonarini Andrea, Calabrò Elisa, Della Torre Matteo, Matteucci Matteo, Pastorino Ugo. Proceedings of the 7th International Workshop on Fuzzy Logic and Applications, WILF 2007, Lecture Notes in Computer Science (LNAI), LNAI 4578, pages 261-268, Springer. Camogli (GE), Italy, July 2007.&lt;br /&gt;
&lt;br /&gt;
=== Description and results of experiments ===&lt;br /&gt;
&lt;br /&gt;
=== Photos and videos ===&lt;br /&gt;
&lt;br /&gt;
=== Link to source code of the software written for the project ===&lt;br /&gt;
&lt;br /&gt;
=== Description and results of experiments ===&lt;br /&gt;
&lt;br /&gt;
=== Useful internet links ===&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Cameras,_lenses_and_mirrors&amp;diff=4695</id>
		<title>Cameras, lenses and mirrors</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Cameras,_lenses_and_mirrors&amp;diff=4695"/>
				<updated>2008-11-03T12:23:07Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==IMPORTANT NOTES==&lt;br /&gt;
'''Never touch the sensor element (CCD or CMOS) of a camera with anything!''' It can very easily be scratched.&lt;br /&gt;
&lt;br /&gt;
'''Never touch the glass elements of a lens with your hands!''' The oil from human skin is harmful.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Cameras and frame grabbers==&lt;br /&gt;
===Cameras===&lt;br /&gt;
In the AIRLab you can find different kind of cameras. These are the main groups:&lt;br /&gt;
*'''Analogue cameras'''. Video output is given as an electrical signal, which needs analogue-to-digital conversion to be processed by a computer; this is done by a specific card called ''frame grabber'' or ''video capture card'' (the latter tend to be the lowest-performance items; see [[Cameras, lenses and mirrors#Frame grabbers]] for details). Analogue video is outdated for computer vision and robotics applications, due to its cost, low performance and complexity; nowadays digital camera systems (such as all the ones listed below) are always preferred.&lt;br /&gt;
*'''USB cameras'''. Usually very cheap, they are suitable for low-performance applications (i.e. those where low frame rate is needed and low image quality can be accepted). Their main advantage (along with cost) is the fact that every modern computer has USB ports. The fact that the USB standard includes 5V DC power supply lines helps simplifying camera design and use.&lt;br /&gt;
*'''FireWire cameras'''. The FireWire (or IEEE1394) bus is generally used for low-end industrial cameras, i.e. devices with technical characteristics much superior to those typical of USB cameras but low-performance according to typical machine vision standards. Industrial cameras usually give to the user a much wider control over the acquisition parameters compared to consumer cameras, and therefore they are usually preferred in robotics; their downside is the higher cost. There are different versions of IEE1394 link (see http://en.wikipedia.org/wiki/Firewire for details), with different bitrates, starting from the 400Mbit/s FireWire 400. Generally they are all considered superior to USB 2.0, even if theoretical bandwidth is lower for FireWire 400. Firewire ports can include power supply lines, but some interfaces (and in particular those on portable computers) omit them. Although the use of FireWire interfaces has expanded in recent years, they are not yet considered a standard feature for motherboards.&lt;br /&gt;
*'''GigE Vision cameras'''. GigE Vision (or Gigabit Ethernet Vision) is a rather new connection standard for machine vision, based upon the established Ethernet protocol in its Gigabit (i.e. 1000Mbps) version. It is very interesting, as complex multiple-camera systems can be easily built using existing (Gigabit) Ethernet hardware, such as cables and switches. Vision data is acquired simply through a generic Ethernet port, commonly found on motherboards or easily added. However, 100Mbps (or ''fast Ethernet'') ports are not guaranteed to work and can sustain only modest video streams; on the other hand, 1000Mbps ports are now standard on motherboards, so this will not be a problem anymore in a few years. It seems that GigE Vision is becoming the most common interface for low- to medium-performance industrial cameras.&lt;br /&gt;
*'''CameraLink cameras'''. Cameralink is a high-speed interface expressly developed for high-performance machine vision applications. It is a point-to-point link, i.e. a CameraLink connection is used to connect a single camera to a digital acquisition card (''frame grabber''). Its diffusion is limited to applications where extreme frame rates ''and'' resolutions are needed, because CameraLink gear is very expensive.&lt;br /&gt;
&lt;br /&gt;
The following is a list of the cameras available in the AIRLab. (To be precise, it is a list of the cameras that are modern enough to be useful.) For each of them the main specifications (and a link to the full specifications) are given. Details on the different types of lens mount are given below in [[Cameras, lenses and mirrors#Lenses]]. The 'how many?' field tells if multiple, identical items are available. Finally, the 'where?' field tells you in which of the AIRLab sites (listed in [[The Labs]]) you can find an item, and the 'project' field is used to specify which project (if any) is using it.&lt;br /&gt;
&lt;br /&gt;
Ah, one last thing. People like to actually ''find'' things when they look for them, so '''don't forget to update the table when you move something away from its current location'''. If you don't know where you are taking it, just put your name in the table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
!resolution&lt;br /&gt;
!B/W, color&lt;br /&gt;
!max. frame rate&lt;br /&gt;
!sensor size&lt;br /&gt;
!interface&lt;br /&gt;
!maker&lt;br /&gt;
!model&lt;br /&gt;
!lens mount&lt;br /&gt;
!how many?&lt;br /&gt;
!where?&lt;br /&gt;
!project&lt;br /&gt;
!link to full specifications and/or manuals&lt;br /&gt;
|-&lt;br /&gt;
|1628x1236&lt;br /&gt;
|B/W&lt;br /&gt;
|24fps&lt;br /&gt;
|1/1.8&amp;quot;&lt;br /&gt;
|CameraLink&lt;br /&gt;
|Hitachi&lt;br /&gt;
|KP-F200CL&lt;br /&gt;
|C-mount&lt;br /&gt;
|1&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|[[media:KP-F200-Op_Manual.pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|752x480&lt;br /&gt;
|color&lt;br /&gt;
|70fps&lt;br /&gt;
|1/3&amp;quot;&lt;br /&gt;
|GigE&lt;br /&gt;
|Prosilica&lt;br /&gt;
|GC750C&lt;br /&gt;
|C-mount&lt;br /&gt;
|3&lt;br /&gt;
|Lambrate (3/3)&lt;br /&gt;
|RAWSEEDS (3/3)&lt;br /&gt;
|http://www.prosilica.com/products/gc_series.html&lt;br /&gt;
|-&lt;br /&gt;
|659x493&lt;br /&gt;
|color&lt;br /&gt;
|90fps&lt;br /&gt;
|1/3&amp;quot;&lt;br /&gt;
|GigE&lt;br /&gt;
|Prosilica&lt;br /&gt;
|GC650C&lt;br /&gt;
|C-mount&lt;br /&gt;
|1&lt;br /&gt;
|Lambrate&lt;br /&gt;
|RAWSEEDS&lt;br /&gt;
|http://www.prosilica.com/products/gc_series.html&lt;br /&gt;
|-&lt;br /&gt;
|1024x768&lt;br /&gt;
|color&lt;br /&gt;
|30fps&lt;br /&gt;
|1/3&amp;quot;&lt;br /&gt;
|GigE&lt;br /&gt;
|Prosilica&lt;br /&gt;
|GC1020C&lt;br /&gt;
|C-mount&lt;br /&gt;
|2&lt;br /&gt;
|Lambrate (2/2)&lt;br /&gt;
|RAWSEEDS (2/2)&lt;br /&gt;
|http://www.prosilica.com/products/gc_series.html&lt;br /&gt;
|-&lt;br /&gt;
|CCIR (625 lines)&lt;br /&gt;
|B/W&lt;br /&gt;
|CCIR (50fps, interlaced)&lt;br /&gt;
|2/3&amp;quot;&lt;br /&gt;
|analogue&lt;br /&gt;
|Sony&lt;br /&gt;
|XC-ST70CE&lt;br /&gt;
|C-mount&lt;br /&gt;
|2&lt;br /&gt;
|DEI (2/2)&lt;br /&gt;
|&lt;br /&gt;
|[[media:XCST70E_manual.pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|659x494&lt;br /&gt;
|color&lt;br /&gt;
|30fps&lt;br /&gt;
|1/4&amp;quot;&lt;br /&gt;
|FireWire 400&lt;br /&gt;
|Unibrain&lt;br /&gt;
|Fire-i 400 industrial&lt;br /&gt;
|C-mount&lt;br /&gt;
|3&lt;br /&gt;
|Lambrate (3/3)&lt;br /&gt;
|RAWSEEDS (3/3)&lt;br /&gt;
|http://www.unibrain.com/Products/VisionImg/Fire_i_400_Industrial.htm&lt;br /&gt;
|-&lt;br /&gt;
|659x494&lt;br /&gt;
|color&lt;br /&gt;
|30fps&lt;br /&gt;
|1/4&amp;quot;&lt;br /&gt;
|FireWire 400&lt;br /&gt;
|Unibrain&lt;br /&gt;
|Fire-i board camera&lt;br /&gt;
|proprietary&lt;br /&gt;
|8&lt;br /&gt;
|Lambrate (3/8), Bovisa (2/8), [[User:PaoloCalloni]] (1/8), [[User:DavideMigliore]] (1/8), [[User:CristianoAlessandro]] (1/8)&lt;br /&gt;
|RAWSEEDS (2/8), MRT (?/8)&lt;br /&gt;
|http://www.unibrain.com/Products/VisionImg/Fire_i_BC.htm&lt;br /&gt;
|-&lt;br /&gt;
|640x480&lt;br /&gt;
|color&lt;br /&gt;
|30fps&lt;br /&gt;
|1/4&amp;quot;&lt;br /&gt;
|FireWire 400&lt;br /&gt;
|Unibrain&lt;br /&gt;
|Fire-i digital camera&lt;br /&gt;
|fixed optics (4.3mm, f2.0)&lt;br /&gt;
|4&lt;br /&gt;
|Univ. Mi-Bicocca (4/4)&lt;br /&gt;
|&lt;br /&gt;
|http://www.unibrain.com/Products/VisionImg/Fire_i_DC.htm&lt;br /&gt;
|-&lt;br /&gt;
|640x480 dual sensor, 9cm baseline&lt;br /&gt;
|color&lt;br /&gt;
|30fps&lt;br /&gt;
|1/3&amp;quot;&lt;br /&gt;
|FireWire 400&lt;br /&gt;
|Videre Design&lt;br /&gt;
|STOC stereo-on-a-chip stereo camera&lt;br /&gt;
|C-mount, fitted with two 3.5mm, f1.6, 1/2&amp;quot; lenses&lt;br /&gt;
|1&lt;br /&gt;
|Lambrate&lt;br /&gt;
|&lt;br /&gt;
|http://www.videredesign.com/vision/stoc.htm&lt;br /&gt;
|-&lt;br /&gt;
|640x480&lt;br /&gt;
|color&lt;br /&gt;
|60fps&lt;br /&gt;
|1/3&amp;quot;&lt;br /&gt;
|FireWire 400&lt;br /&gt;
|Videre Design&lt;br /&gt;
|DCSG (associated with STOC)&lt;br /&gt;
|C-mount, fitted with one 3.5mm, f1.6, 1/2&amp;quot; lens&lt;br /&gt;
|1&lt;br /&gt;
|Lambrate&lt;br /&gt;
|&lt;br /&gt;
|http://www.videredesign.com/vision/dcsg.htm&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Camcorder==&lt;br /&gt;
In the AIRLab you can find also a ''memory camcorder'', i.e. a consumer camera system that records (lossy) compressed digital video onto standard SD flash memory cards. It's a [http://www.samsung.com/uk/support/download/supportDown.do?group=homeentertainment&amp;amp;type=camcorder&amp;amp;subtype=flashcamcorder&amp;amp;model_nm=VP-MX10&amp;amp;prd_ia_cd=03110800&amp;amp;disp_nm=VP-MX10&amp;amp;mType=&amp;amp;dType=D&amp;amp;vType=R Samsung - VP-MX10H] and can be used to record on video demos, lessons or talks. It is fitted with an 8GB SD card, i.e. the biggest it can be used with.&lt;br /&gt;
&lt;br /&gt;
Main features:&lt;br /&gt;
* maximum resolution 720x576 pixel (progressive scan), the same as DVD&lt;br /&gt;
* MPEG4 encoding (Mplayer for Linux, for example, plays it back perfectly)&lt;br /&gt;
* 2.7&amp;quot; 16:9 LCD Display (but recorded image format is 4:3)&lt;br /&gt;
* 34x Optical zoom (1200x with digital zoom)&lt;br /&gt;
* image stabilizer (use it if you go over, say, 4x of zoom factor)&lt;br /&gt;
* USB connection to PC (it's seen as a mass storage device)&lt;br /&gt;
* maximum usable SD card capacity 8GB&lt;br /&gt;
* 220' of video on an 8GB card in best-quality mode&lt;br /&gt;
* 120' (claimed) battery duration (with fully charged battery, during recording and if you don't play much with the zoom)&lt;br /&gt;
* unreliable battery charge indicator :-(&lt;br /&gt;
* battery discharges quickly even when the device is off, so be sure to recharge it before use&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''WHERE IS THE CAMERA NOW''': at the moment, ROSSELLA BLATT has the camera at DEI. Date: November 3rd 2008&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Frame grabbers===&lt;br /&gt;
As previously said, a '''frame grabber''' is an electronic board that connects to one or more cameras, and converts the signals from the cameras into a data stream that can be elaborated by a computer. They are usually designed as expansion boards to be fitted into the computer case. Frame grabbers are necessary for ''analogue cameras'' (as they include the analogue/digital converters) or for CameraLink digital cameras (in this case the frame grabber is essentially a high speed dedicated digital interface). Other kinds of digital cameras don't need a frame grabber: this is one of the main advantages of digital cameras over analogue ones in machine vision applications, where the processing is almost always performed by computers.&lt;br /&gt;
In the AIRLab two models of frame grabber are available:&lt;br /&gt;
*a digital frame grabber from Euresys, model Expert 2, having two CameraLink inputs (http://www.euresys.com/Products/grablink/GrablinkSeries.asp). ''Notes: needs a PCI-X slot; one of the inputs is not working due to a fault.''&lt;br /&gt;
*two multichannel analogue frame grabbers from Matrox, model Meteor II/Multi-Channel, having three analogue inputs that can be combined into a single three-channel RGB analogue input (http://www.matrox.com/imaging/support/old_products/home.cfm). ''Note: one item is permanently mounted on the MO.RO.1 robot: see [[The MO.RO. family]] for details.''&lt;br /&gt;
*two multichannel analogue frame grabbers from Matrox, model Meteor II/Multi-Channel, having three analogue inputs that can be combined into a single three-channel RGB analogue input (http://www.matrox.com/imaging/support/old_products/home.cfm). ''Note: one item is permanently mounted on the MO.RO.1 robot: see [[The MO.RO. family]] for details.''&lt;br /&gt;
*two single-channel analogue frame grabbers from Matrox, models Meteor and Meteor Pro (http://www.matrox.com/imaging/support/old_products/home.cfm).&lt;br /&gt;
All the frame grabbers (except the one on the MO.RO.1) are currently in AIRLab/DEI. If you move one of them, please '''write it down here'''... and do it NOW!&lt;br /&gt;
&lt;br /&gt;
==Lenses==&lt;br /&gt;
Industrial cameras usually have interchangeable lenses. This allows for the choice of the lens that is more suitable to the considered application. There are two main standards for industrial camera lenses: '''C-mount''' and '''CS-mount'''. Both are screw-type mounts. CS-mount is simply a modified C-mount where the distance between the back of the lens and the sensor element (CCD or CMOS) is shorter: therefore a C-mount lens can be mounted on a CS-mount camera if an ''adapter ring'' (i.e. a distancing cylinder with suitable threads) is placed between them. It is impossible, though, to use a CS-mount lens on a C-mount camera: if you try you will almost certainly break the sensor, scratch the lens, or both. Just because a lens fits a camera, it doesn't mean it can be actually mounted on it!&lt;br /&gt;
&lt;br /&gt;
At the AIRLab we also use lenses specifically designed for Unibrain's ''board cameras'': they are very simple, with no iris, and very small. Their mounting system is an M12x0.5 metric screw thread.&lt;br /&gt;
&lt;br /&gt;
Be aware that sensor dimension (i.e. its diagonal, measured in fractions of an inch) is  ''not'' the same for all cameras. Therefore one of the key specifications for a lens is the maximum sensor dimension supported. If you use a given lens with too big a sensor, the edges of the image will be black as they lie outside the circle of the projected image. Also beware of the strange convention used for sensor diagonals, i.e. a fraction in the form A/B&amp;quot; where A and B are integer ''or non-integer'' numbers. For instance an 1/2&amp;quot; sensor is smaller than an 1/1.8&amp;quot; one.&lt;br /&gt;
The variability of sensor dimensions has another side effect: the same lens has a different angle of view if you change the sensor size. Therefore the same lens can behave as a wide-angle with a large sensor and as a telephoto with a small sensor.&lt;br /&gt;
&lt;br /&gt;
An useful guide to lenses (in Italian or English) can be found at http://www.rapitron.it/guidaob.htm.&lt;br /&gt;
&lt;br /&gt;
The following is a list of the actual lenses available in the AIRLab. For each of them the main specifications (and a link to the maker's or vendor's page for full specifications) are given. A '?' means an unknown parameter: if you know its value or experimentally find out it when using the lens (e.g. the maximum sensor size), please ''update the table'' before the information is lost again! Lenses having 'M12x0.5' in Column 'mount type' are only usable with Unibrain's Fire-i board cameras. A 'YES' in the 'Mpixel' column indicates a so-called ''Megapixel lens'', i.e. a high quality, low-distortion lens designed for high-resolution industrial cameras (typically having large sensors); please note that some of these are specifically designed for B/W (i.e. black and white) cameras. The 'how many?' field tells if multiple, identical items are available. Finally, the 'where?' field tells you in which of the AIRLab sites (listed in [[The Labs]]) you can find an item, and the 'project' field is used to specify which project (if any) is using it. &lt;br /&gt;
&lt;br /&gt;
Ah, one last thing. People like to actually ''find'' things when they look for them, so '''don't forget to update the table when you move something away from its current location'''. If you don't know where you are bringing it, just put your name in the table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
!focal length&lt;br /&gt;
!max. aperture&lt;br /&gt;
!max. sensor size&lt;br /&gt;
!mount type&lt;br /&gt;
!maker&lt;br /&gt;
!model&lt;br /&gt;
!Mpixel&lt;br /&gt;
!how many?&lt;br /&gt;
!where?&lt;br /&gt;
!project&lt;br /&gt;
!link to full specifications&lt;br /&gt;
|-&lt;br /&gt;
|3.5mm&lt;br /&gt;
|f1.4&lt;br /&gt;
|?&lt;br /&gt;
|C-mount&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|1&lt;br /&gt;
|Lambrate&lt;br /&gt;
|LURCH&lt;br /&gt;
|?&lt;br /&gt;
|-&lt;br /&gt;
|4.0mm&lt;br /&gt;
|f2.0&lt;br /&gt;
|1/2&amp;quot;&lt;br /&gt;
|C-mount&lt;br /&gt;
|Microtron&lt;br /&gt;
|FV0420&lt;br /&gt;
|YES (B/W only)&lt;br /&gt;
|2&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|http://www.rapitron.it/obmegpxman1.htm&lt;br /&gt;
|-&lt;br /&gt;
|4.5mm&lt;br /&gt;
|f1.4&lt;br /&gt;
|1/2&amp;quot;&lt;br /&gt;
|C-mount&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|1&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|?&lt;br /&gt;
|-&lt;br /&gt;
|4.8mm&lt;br /&gt;
|f1.8&lt;br /&gt;
|2/3&amp;quot;&lt;br /&gt;
|C-mount&lt;br /&gt;
|Computar&lt;br /&gt;
|M0518&lt;br /&gt;
|NO&lt;br /&gt;
|1&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|http://www.computar.com/cctvprod/computar/mono/048.html&lt;br /&gt;
|-&lt;br /&gt;
|6mm&lt;br /&gt;
|f1.4&lt;br /&gt;
|?&lt;br /&gt;
|C-mount&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|1&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|?&lt;br /&gt;
|-&lt;br /&gt;
|6mm&lt;br /&gt;
|f1.4&lt;br /&gt;
|1/2&amp;quot;&lt;br /&gt;
|C-mount&lt;br /&gt;
|Goyo&lt;br /&gt;
|GMHR26014MCN&lt;br /&gt;
|YES&lt;br /&gt;
|4&lt;br /&gt;
|DEI&lt;br /&gt;
|RAWSEEDS (4/4)&lt;br /&gt;
|http://www.goyooptical.com/products/industrial/hrmegapixel.html&lt;br /&gt;
|-&lt;br /&gt;
|8mm&lt;br /&gt;
|f1.4&lt;br /&gt;
|?&lt;br /&gt;
|C-mount&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|1&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|?&lt;br /&gt;
|-&lt;br /&gt;
|8mm&lt;br /&gt;
|f1.4&lt;br /&gt;
|2/3&amp;quot;&lt;br /&gt;
|C-mount&lt;br /&gt;
|Goyo&lt;br /&gt;
|GMHR38014MCN&lt;br /&gt;
|YES&lt;br /&gt;
|2&lt;br /&gt;
|DEI&lt;br /&gt;
|RAWSEEDS (2/2)&lt;br /&gt;
|http://www.goyooptical.com/products/industrial/hrmegapixel.html&lt;br /&gt;
|-&lt;br /&gt;
|8.5mm&lt;br /&gt;
|f1.3&lt;br /&gt;
|2/3&amp;quot;&lt;br /&gt;
|C-mount&lt;br /&gt;
|Computar&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|2&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|(old model)&lt;br /&gt;
|-&lt;br /&gt;
|12mm&lt;br /&gt;
|f1.8&lt;br /&gt;
|2/3&amp;quot;&lt;br /&gt;
|C-mount&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|2&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|12mm&lt;br /&gt;
|f1.4&lt;br /&gt;
|2/3&amp;quot;&lt;br /&gt;
|C-mount&lt;br /&gt;
|Goyo&lt;br /&gt;
|GMHR31214MCN&lt;br /&gt;
|YES&lt;br /&gt;
|1&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|http://www.goyooptical.com/products/industrial/hrmegapixel.html&lt;br /&gt;
|-&lt;br /&gt;
|15mm&lt;br /&gt;
|f2.0&lt;br /&gt;
|2/3&amp;quot;&lt;br /&gt;
|C-mount&lt;br /&gt;
|Microtron&lt;br /&gt;
|FV1520&lt;br /&gt;
|YES&lt;br /&gt;
|1&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|http://www.rapitron.it/obmegpxman1.htm&lt;br /&gt;
|-&lt;br /&gt;
|6-15mm&lt;br /&gt;
|f1.4&lt;br /&gt;
|?&lt;br /&gt;
|C-mount&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|1&lt;br /&gt;
|Lambrate&lt;br /&gt;
|&lt;br /&gt;
|?&lt;br /&gt;
|-&lt;br /&gt;
|12.5-75mm&lt;br /&gt;
|f1.8&lt;br /&gt;
|?&lt;br /&gt;
|C-mount&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|1&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|?&lt;br /&gt;
|-&lt;br /&gt;
|2.1mm&lt;br /&gt;
|f2.0&lt;br /&gt;
|1/4&amp;quot;&lt;br /&gt;
|M12x0.5&lt;br /&gt;
|Unibrain&lt;br /&gt;
|2042&lt;br /&gt;
|NO&lt;br /&gt;
|6&lt;br /&gt;
|Bovisa (1/6), Lambrate (5/6)&lt;br /&gt;
|&lt;br /&gt;
|http://www.unibrain.com/Products/VisionImg/Fire_i_BC.htm&lt;br /&gt;
|-&lt;br /&gt;
|4.3mm, no IR filter&lt;br /&gt;
|f2.0&lt;br /&gt;
|1/4&amp;quot;&lt;br /&gt;
|M12x0.5&lt;br /&gt;
|Unibrain&lt;br /&gt;
|2046&lt;br /&gt;
|NO&lt;br /&gt;
|1&lt;br /&gt;
|Lambrate (1/1)&lt;br /&gt;
|&lt;br /&gt;
|http://www.unibrain.com/Products/VisionImg/Fire_i_BC.htm&lt;br /&gt;
|-&lt;br /&gt;
|4.3mm&lt;br /&gt;
|f2.0&lt;br /&gt;
|1/4&amp;quot;&lt;br /&gt;
|M12x0.5&lt;br /&gt;
|Unibrain&lt;br /&gt;
|2043&lt;br /&gt;
|NO&lt;br /&gt;
|3&lt;br /&gt;
|Bovisa (1/3), Lambrate (2/3)&lt;br /&gt;
|&lt;br /&gt;
|http://www.unibrain.com/Products/VisionImg/Fire_i_BC.htm&lt;br /&gt;
|-&lt;br /&gt;
|8mm&lt;br /&gt;
|f2.0&lt;br /&gt;
|1/4&amp;quot;&lt;br /&gt;
|M12x0.5&lt;br /&gt;
|Unibrain&lt;br /&gt;
|2044&lt;br /&gt;
|NO&lt;br /&gt;
|1&lt;br /&gt;
|Lambrate (1/1)&lt;br /&gt;
|&lt;br /&gt;
|http://www.unibrain.com/Products/VisionImg/Fire_i_BC.htm&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Mirrors==&lt;br /&gt;
Much work has been done and is being done at the AIRLab on the topic of '''omnidirectional (machine) vision''' (sometimes referred to as ''omnivision''). Omnidirectional vision systems use special hardware to overcome the limitations of conventional vision systems in terms of field of view. The approach to this problem that we generally adopt is the use of conventional cameras in association with convex '''mirrors''', i.e. the capturing of the image reflected by a suitably-shaped mirror with a camera. The possibility of designing mirrors with specific geometric properties gives a very useful means to control the geometric behaviour of the whole camera+mirror system.&lt;br /&gt;
&lt;br /&gt;
TODO for someone who knows better ;-) : mirror list&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Cameras,_lenses_and_mirrors&amp;diff=4693</id>
		<title>Cameras, lenses and mirrors</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Cameras,_lenses_and_mirrors&amp;diff=4693"/>
				<updated>2008-11-03T10:26:33Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==IMPORTANT NOTES==&lt;br /&gt;
'''Never touch the sensor element (CCD or CMOS) of a camera with anything!''' It can very easily be scratched.&lt;br /&gt;
&lt;br /&gt;
'''Never touch the glass elements of a lens with your hands!''' The oil from human skin is harmful.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Cameras and frame grabbers==&lt;br /&gt;
===Cameras===&lt;br /&gt;
In the AIRLab you can find different kind of cameras. These are the main groups:&lt;br /&gt;
*'''Analogue cameras'''. Video output is given as an electrical signal, which needs analogue-to-digital conversion to be processed by a computer; this is done by a specific card called ''frame grabber'' or ''video capture card'' (the latter tend to be the lowest-performance items; see [[Cameras, lenses and mirrors#Frame grabbers]] for details). Analogue video is outdated for computer vision and robotics applications, due to its cost, low performance and complexity; nowadays digital camera systems (such as all the ones listed below) are always preferred.&lt;br /&gt;
*'''USB cameras'''. Usually very cheap, they are suitable for low-performance applications (i.e. those where low frame rate is needed and low image quality can be accepted). Their main advantage (along with cost) is the fact that every modern computer has USB ports. The fact that the USB standard includes 5V DC power supply lines helps simplifying camera design and use.&lt;br /&gt;
*'''FireWire cameras'''. The FireWire (or IEEE1394) bus is generally used for low-end industrial cameras, i.e. devices with technical characteristics much superior to those typical of USB cameras but low-performance according to typical machine vision standards. Industrial cameras usually give to the user a much wider control over the acquisition parameters compared to consumer cameras, and therefore they are usually preferred in robotics; their downside is the higher cost. There are different versions of IEE1394 link (see http://en.wikipedia.org/wiki/Firewire for details), with different bitrates, starting from the 400Mbit/s FireWire 400. Generally they are all considered superior to USB 2.0, even if theoretical bandwidth is lower for FireWire 400. Firewire ports can include power supply lines, but some interfaces (and in particular those on portable computers) omit them. Although the use of FireWire interfaces has expanded in recent years, they are not yet considered a standard feature for motherboards.&lt;br /&gt;
*'''GigE Vision cameras'''. GigE Vision (or Gigabit Ethernet Vision) is a rather new connection standard for machine vision, based upon the established Ethernet protocol in its Gigabit (i.e. 1000Mbps) version. It is very interesting, as complex multiple-camera systems can be easily built using existing (Gigabit) Ethernet hardware, such as cables and switches. Vision data is acquired simply through a generic Ethernet port, commonly found on motherboards or easily added. However, 100Mbps (or ''fast Ethernet'') ports are not guaranteed to work and can sustain only modest video streams; on the other hand, 1000Mbps ports are now standard on motherboards, so this will not be a problem anymore in a few years. It seems that GigE Vision is becoming the most common interface for low- to medium-performance industrial cameras.&lt;br /&gt;
*'''CameraLink cameras'''. Cameralink is a high-speed interface expressly developed for high-performance machine vision applications. It is a point-to-point link, i.e. a CameraLink connection is used to connect a single camera to a digital acquisition card (''frame grabber''). Its diffusion is limited to applications where extreme frame rates ''and'' resolutions are needed, because CameraLink gear is very expensive.&lt;br /&gt;
&lt;br /&gt;
The following is a list of the cameras available in the AIRLab. (To be precise, it is a list of the cameras that are modern enough to be useful.) For each of them the main specifications (and a link to the full specifications) are given. Details on the different types of lens mount are given below in [[Cameras, lenses and mirrors#Lenses]]. The 'how many?' field tells if multiple, identical items are available. Finally, the 'where?' field tells you in which of the AIRLab sites (listed in [[The Labs]]) you can find an item, and the 'project' field is used to specify which project (if any) is using it.&lt;br /&gt;
&lt;br /&gt;
Ah, one last thing. People like to actually ''find'' things when they look for them, so '''don't forget to update the table when you move something away from its current location'''. If you don't know where you are taking it, just put your name in the table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
!resolution&lt;br /&gt;
!B/W, color&lt;br /&gt;
!max. frame rate&lt;br /&gt;
!sensor size&lt;br /&gt;
!interface&lt;br /&gt;
!maker&lt;br /&gt;
!model&lt;br /&gt;
!lens mount&lt;br /&gt;
!how many?&lt;br /&gt;
!where?&lt;br /&gt;
!project&lt;br /&gt;
!link to full specifications and/or manuals&lt;br /&gt;
|-&lt;br /&gt;
|1628x1236&lt;br /&gt;
|B/W&lt;br /&gt;
|24fps&lt;br /&gt;
|1/1.8&amp;quot;&lt;br /&gt;
|CameraLink&lt;br /&gt;
|Hitachi&lt;br /&gt;
|KP-F200CL&lt;br /&gt;
|C-mount&lt;br /&gt;
|1&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|[[media:KP-F200-Op_Manual.pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|752x480&lt;br /&gt;
|color&lt;br /&gt;
|70fps&lt;br /&gt;
|1/3&amp;quot;&lt;br /&gt;
|GigE&lt;br /&gt;
|Prosilica&lt;br /&gt;
|GC750C&lt;br /&gt;
|C-mount&lt;br /&gt;
|3&lt;br /&gt;
|Lambrate (3/3)&lt;br /&gt;
|RAWSEEDS (3/3)&lt;br /&gt;
|http://www.prosilica.com/products/gc_series.html&lt;br /&gt;
|-&lt;br /&gt;
|659x493&lt;br /&gt;
|color&lt;br /&gt;
|90fps&lt;br /&gt;
|1/3&amp;quot;&lt;br /&gt;
|GigE&lt;br /&gt;
|Prosilica&lt;br /&gt;
|GC650C&lt;br /&gt;
|C-mount&lt;br /&gt;
|1&lt;br /&gt;
|Lambrate&lt;br /&gt;
|RAWSEEDS&lt;br /&gt;
|http://www.prosilica.com/products/gc_series.html&lt;br /&gt;
|-&lt;br /&gt;
|1024x768&lt;br /&gt;
|color&lt;br /&gt;
|30fps&lt;br /&gt;
|1/3&amp;quot;&lt;br /&gt;
|GigE&lt;br /&gt;
|Prosilica&lt;br /&gt;
|GC1020C&lt;br /&gt;
|C-mount&lt;br /&gt;
|2&lt;br /&gt;
|Lambrate (2/2)&lt;br /&gt;
|RAWSEEDS (2/2)&lt;br /&gt;
|http://www.prosilica.com/products/gc_series.html&lt;br /&gt;
|-&lt;br /&gt;
|CCIR (625 lines)&lt;br /&gt;
|B/W&lt;br /&gt;
|CCIR (50fps, interlaced)&lt;br /&gt;
|2/3&amp;quot;&lt;br /&gt;
|analogue&lt;br /&gt;
|Sony&lt;br /&gt;
|XC-ST70CE&lt;br /&gt;
|C-mount&lt;br /&gt;
|2&lt;br /&gt;
|DEI (2/2)&lt;br /&gt;
|&lt;br /&gt;
|[[media:XCST70E_manual.pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|659x494&lt;br /&gt;
|color&lt;br /&gt;
|30fps&lt;br /&gt;
|1/4&amp;quot;&lt;br /&gt;
|FireWire 400&lt;br /&gt;
|Unibrain&lt;br /&gt;
|Fire-i 400 industrial&lt;br /&gt;
|C-mount&lt;br /&gt;
|3&lt;br /&gt;
|Lambrate (3/3)&lt;br /&gt;
|RAWSEEDS (3/3)&lt;br /&gt;
|http://www.unibrain.com/Products/VisionImg/Fire_i_400_Industrial.htm&lt;br /&gt;
|-&lt;br /&gt;
|659x494&lt;br /&gt;
|color&lt;br /&gt;
|30fps&lt;br /&gt;
|1/4&amp;quot;&lt;br /&gt;
|FireWire 400&lt;br /&gt;
|Unibrain&lt;br /&gt;
|Fire-i board camera&lt;br /&gt;
|proprietary&lt;br /&gt;
|8&lt;br /&gt;
|Lambrate (3/8), Bovisa (2/8), [[User:PaoloCalloni]] (1/8), [[User:DavideMigliore]] (1/8), [[User:CristianoAlessandro]] (1/8)&lt;br /&gt;
|RAWSEEDS (2/8), MRT (?/8)&lt;br /&gt;
|http://www.unibrain.com/Products/VisionImg/Fire_i_BC.htm&lt;br /&gt;
|-&lt;br /&gt;
|640x480&lt;br /&gt;
|color&lt;br /&gt;
|30fps&lt;br /&gt;
|1/4&amp;quot;&lt;br /&gt;
|FireWire 400&lt;br /&gt;
|Unibrain&lt;br /&gt;
|Fire-i digital camera&lt;br /&gt;
|fixed optics (4.3mm, f2.0)&lt;br /&gt;
|4&lt;br /&gt;
|Univ. Mi-Bicocca (4/4)&lt;br /&gt;
|&lt;br /&gt;
|http://www.unibrain.com/Products/VisionImg/Fire_i_DC.htm&lt;br /&gt;
|-&lt;br /&gt;
|640x480 dual sensor, 9cm baseline&lt;br /&gt;
|color&lt;br /&gt;
|30fps&lt;br /&gt;
|1/3&amp;quot;&lt;br /&gt;
|FireWire 400&lt;br /&gt;
|Videre Design&lt;br /&gt;
|STOC stereo-on-a-chip stereo camera&lt;br /&gt;
|C-mount, fitted with two 3.5mm, f1.6, 1/2&amp;quot; lenses&lt;br /&gt;
|1&lt;br /&gt;
|Lambrate&lt;br /&gt;
|&lt;br /&gt;
|http://www.videredesign.com/vision/stoc.htm&lt;br /&gt;
|-&lt;br /&gt;
|640x480&lt;br /&gt;
|color&lt;br /&gt;
|60fps&lt;br /&gt;
|1/3&amp;quot;&lt;br /&gt;
|FireWire 400&lt;br /&gt;
|Videre Design&lt;br /&gt;
|DCSG (associated with STOC)&lt;br /&gt;
|C-mount, fitted with one 3.5mm, f1.6, 1/2&amp;quot; lens&lt;br /&gt;
|1&lt;br /&gt;
|Lambrate&lt;br /&gt;
|&lt;br /&gt;
|http://www.videredesign.com/vision/dcsg.htm&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Camcorder==&lt;br /&gt;
In the AIRLab you can find also a ''memory camcorder'', i.e. a consumer camera system that records (lossy) compressed digital video onto standard SD flash memory cards. It's a [http://www.samsung.com/uk/support/download/supportDown.do?group=homeentertainment&amp;amp;type=camcorder&amp;amp;subtype=flashcamcorder&amp;amp;model_nm=VP-MX10&amp;amp;prd_ia_cd=03110800&amp;amp;disp_nm=VP-MX10&amp;amp;mType=&amp;amp;dType=D&amp;amp;vType=R Samsung - VP-MX10H] and can be used to record on video demos, lessons or talks. It is fitted with an 8GB SD card, i.e. the biggest it can be used with.&lt;br /&gt;
&lt;br /&gt;
Main features:&lt;br /&gt;
* maximum resolution 720x576 pixel (progressive scan), the same as DVD&lt;br /&gt;
* MPEG4 encoding (Mplayer for Linux, for example, plays it back perfectly)&lt;br /&gt;
* 2.7&amp;quot; 16:9 LCD Display (but recorded image format is 4:3)&lt;br /&gt;
* 34x Optical zoom (1200x with digital zoom)&lt;br /&gt;
* image stabilizer (use it if you go over, say, 4x of zoom factor)&lt;br /&gt;
* USB connection to PC (it's seen as a mass storage device)&lt;br /&gt;
* maximum usable SD card capacity 8GB&lt;br /&gt;
* 220' of video on an 8GB card in best-quality mode&lt;br /&gt;
* 120' (claimed) battery duration (with fully charged battery, during recording and if you don't play much with the zoom)&lt;br /&gt;
* unreliable battery charge indicator :-(&lt;br /&gt;
* battery discharges quickly even when the device is off, so be sure to recharge it before use&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''WHERE IS THE CAMERA NOW''': at the moment, ROSSELLA BLATT has the camera. Date: November 3rd 2008&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Frame grabbers===&lt;br /&gt;
As previously said, a '''frame grabber''' is an electronic board that connects to one or more cameras, and converts the signals from the cameras into a data stream that can be elaborated by a computer. They are usually designed as expansion boards to be fitted into the computer case. Frame grabbers are necessary for ''analogue cameras'' (as they include the analogue/digital converters) or for CameraLink digital cameras (in this case the frame grabber is essentially a high speed dedicated digital interface). Other kinds of digital cameras don't need a frame grabber: this is one of the main advantages of digital cameras over analogue ones in machine vision applications, where the processing is almost always performed by computers.&lt;br /&gt;
In the AIRLab two models of frame grabber are available:&lt;br /&gt;
*a digital frame grabber from Euresys, model Expert 2, having two CameraLink inputs (http://www.euresys.com/Products/grablink/GrablinkSeries.asp). ''Notes: needs a PCI-X slot; one of the inputs is not working due to a fault.''&lt;br /&gt;
*two multichannel analogue frame grabbers from Matrox, model Meteor II/Multi-Channel, having three analogue inputs that can be combined into a single three-channel RGB analogue input (http://www.matrox.com/imaging/support/old_products/home.cfm). ''Note: one item is permanently mounted on the MO.RO.1 robot: see [[The MO.RO. family]] for details.''&lt;br /&gt;
*two multichannel analogue frame grabbers from Matrox, model Meteor II/Multi-Channel, having three analogue inputs that can be combined into a single three-channel RGB analogue input (http://www.matrox.com/imaging/support/old_products/home.cfm). ''Note: one item is permanently mounted on the MO.RO.1 robot: see [[The MO.RO. family]] for details.''&lt;br /&gt;
*two single-channel analogue frame grabbers from Matrox, models Meteor and Meteor Pro (http://www.matrox.com/imaging/support/old_products/home.cfm).&lt;br /&gt;
All the frame grabbers (except the one on the MO.RO.1) are currently in AIRLab/DEI. If you move one of them, please '''write it down here'''... and do it NOW!&lt;br /&gt;
&lt;br /&gt;
==Lenses==&lt;br /&gt;
Industrial cameras usually have interchangeable lenses. This allows for the choice of the lens that is more suitable to the considered application. There are two main standards for industrial camera lenses: '''C-mount''' and '''CS-mount'''. Both are screw-type mounts. CS-mount is simply a modified C-mount where the distance between the back of the lens and the sensor element (CCD or CMOS) is shorter: therefore a C-mount lens can be mounted on a CS-mount camera if an ''adapter ring'' (i.e. a distancing cylinder with suitable threads) is placed between them. It is impossible, though, to use a CS-mount lens on a C-mount camera: if you try you will almost certainly break the sensor, scratch the lens, or both. Just because a lens fits a camera, it doesn't mean it can be actually mounted on it!&lt;br /&gt;
&lt;br /&gt;
At the AIRLab we also use lenses specifically designed for Unibrain's ''board cameras'': they are very simple, with no iris, and very small. Their mounting system is an M12x0.5 metric screw thread.&lt;br /&gt;
&lt;br /&gt;
Be aware that sensor dimension (i.e. its diagonal, measured in fractions of an inch) is  ''not'' the same for all cameras. Therefore one of the key specifications for a lens is the maximum sensor dimension supported. If you use a given lens with too big a sensor, the edges of the image will be black as they lie outside the circle of the projected image. Also beware of the strange convention used for sensor diagonals, i.e. a fraction in the form A/B&amp;quot; where A and B are integer ''or non-integer'' numbers. For instance an 1/2&amp;quot; sensor is smaller than an 1/1.8&amp;quot; one.&lt;br /&gt;
The variability of sensor dimensions has another side effect: the same lens has a different angle of view if you change the sensor size. Therefore the same lens can behave as a wide-angle with a large sensor and as a telephoto with a small sensor.&lt;br /&gt;
&lt;br /&gt;
An useful guide to lenses (in Italian or English) can be found at http://www.rapitron.it/guidaob.htm.&lt;br /&gt;
&lt;br /&gt;
The following is a list of the actual lenses available in the AIRLab. For each of them the main specifications (and a link to the maker's or vendor's page for full specifications) are given. A '?' means an unknown parameter: if you know its value or experimentally find out it when using the lens (e.g. the maximum sensor size), please ''update the table'' before the information is lost again! Lenses having 'M12x0.5' in Column 'mount type' are only usable with Unibrain's Fire-i board cameras. A 'YES' in the 'Mpixel' column indicates a so-called ''Megapixel lens'', i.e. a high quality, low-distortion lens designed for high-resolution industrial cameras (typically having large sensors); please note that some of these are specifically designed for B/W (i.e. black and white) cameras. The 'how many?' field tells if multiple, identical items are available. Finally, the 'where?' field tells you in which of the AIRLab sites (listed in [[The Labs]]) you can find an item, and the 'project' field is used to specify which project (if any) is using it. &lt;br /&gt;
&lt;br /&gt;
Ah, one last thing. People like to actually ''find'' things when they look for them, so '''don't forget to update the table when you move something away from its current location'''. If you don't know where you are bringing it, just put your name in the table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
!focal length&lt;br /&gt;
!max. aperture&lt;br /&gt;
!max. sensor size&lt;br /&gt;
!mount type&lt;br /&gt;
!maker&lt;br /&gt;
!model&lt;br /&gt;
!Mpixel&lt;br /&gt;
!how many?&lt;br /&gt;
!where?&lt;br /&gt;
!project&lt;br /&gt;
!link to full specifications&lt;br /&gt;
|-&lt;br /&gt;
|3.5mm&lt;br /&gt;
|f1.4&lt;br /&gt;
|?&lt;br /&gt;
|C-mount&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|1&lt;br /&gt;
|Lambrate&lt;br /&gt;
|LURCH&lt;br /&gt;
|?&lt;br /&gt;
|-&lt;br /&gt;
|4.0mm&lt;br /&gt;
|f2.0&lt;br /&gt;
|1/2&amp;quot;&lt;br /&gt;
|C-mount&lt;br /&gt;
|Microtron&lt;br /&gt;
|FV0420&lt;br /&gt;
|YES (B/W only)&lt;br /&gt;
|2&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|http://www.rapitron.it/obmegpxman1.htm&lt;br /&gt;
|-&lt;br /&gt;
|4.5mm&lt;br /&gt;
|f1.4&lt;br /&gt;
|1/2&amp;quot;&lt;br /&gt;
|C-mount&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|1&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|?&lt;br /&gt;
|-&lt;br /&gt;
|4.8mm&lt;br /&gt;
|f1.8&lt;br /&gt;
|2/3&amp;quot;&lt;br /&gt;
|C-mount&lt;br /&gt;
|Computar&lt;br /&gt;
|M0518&lt;br /&gt;
|NO&lt;br /&gt;
|1&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|http://www.computar.com/cctvprod/computar/mono/048.html&lt;br /&gt;
|-&lt;br /&gt;
|6mm&lt;br /&gt;
|f1.4&lt;br /&gt;
|?&lt;br /&gt;
|C-mount&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|1&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|?&lt;br /&gt;
|-&lt;br /&gt;
|6mm&lt;br /&gt;
|f1.4&lt;br /&gt;
|1/2&amp;quot;&lt;br /&gt;
|C-mount&lt;br /&gt;
|Goyo&lt;br /&gt;
|GMHR26014MCN&lt;br /&gt;
|YES&lt;br /&gt;
|4&lt;br /&gt;
|DEI&lt;br /&gt;
|RAWSEEDS (4/4)&lt;br /&gt;
|http://www.goyooptical.com/products/industrial/hrmegapixel.html&lt;br /&gt;
|-&lt;br /&gt;
|8mm&lt;br /&gt;
|f1.4&lt;br /&gt;
|?&lt;br /&gt;
|C-mount&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|1&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|?&lt;br /&gt;
|-&lt;br /&gt;
|8mm&lt;br /&gt;
|f1.4&lt;br /&gt;
|2/3&amp;quot;&lt;br /&gt;
|C-mount&lt;br /&gt;
|Goyo&lt;br /&gt;
|GMHR38014MCN&lt;br /&gt;
|YES&lt;br /&gt;
|2&lt;br /&gt;
|DEI&lt;br /&gt;
|RAWSEEDS (2/2)&lt;br /&gt;
|http://www.goyooptical.com/products/industrial/hrmegapixel.html&lt;br /&gt;
|-&lt;br /&gt;
|8.5mm&lt;br /&gt;
|f1.3&lt;br /&gt;
|2/3&amp;quot;&lt;br /&gt;
|C-mount&lt;br /&gt;
|Computar&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|2&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|(old model)&lt;br /&gt;
|-&lt;br /&gt;
|12mm&lt;br /&gt;
|f1.8&lt;br /&gt;
|2/3&amp;quot;&lt;br /&gt;
|C-mount&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|2&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|12mm&lt;br /&gt;
|f1.4&lt;br /&gt;
|2/3&amp;quot;&lt;br /&gt;
|C-mount&lt;br /&gt;
|Goyo&lt;br /&gt;
|GMHR31214MCN&lt;br /&gt;
|YES&lt;br /&gt;
|1&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|http://www.goyooptical.com/products/industrial/hrmegapixel.html&lt;br /&gt;
|-&lt;br /&gt;
|15mm&lt;br /&gt;
|f2.0&lt;br /&gt;
|2/3&amp;quot;&lt;br /&gt;
|C-mount&lt;br /&gt;
|Microtron&lt;br /&gt;
|FV1520&lt;br /&gt;
|YES&lt;br /&gt;
|1&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|http://www.rapitron.it/obmegpxman1.htm&lt;br /&gt;
|-&lt;br /&gt;
|6-15mm&lt;br /&gt;
|f1.4&lt;br /&gt;
|?&lt;br /&gt;
|C-mount&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|1&lt;br /&gt;
|Lambrate&lt;br /&gt;
|&lt;br /&gt;
|?&lt;br /&gt;
|-&lt;br /&gt;
|12.5-75mm&lt;br /&gt;
|f1.8&lt;br /&gt;
|?&lt;br /&gt;
|C-mount&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|1&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|?&lt;br /&gt;
|-&lt;br /&gt;
|2.1mm&lt;br /&gt;
|f2.0&lt;br /&gt;
|1/4&amp;quot;&lt;br /&gt;
|M12x0.5&lt;br /&gt;
|Unibrain&lt;br /&gt;
|2042&lt;br /&gt;
|NO&lt;br /&gt;
|6&lt;br /&gt;
|Bovisa (1/6), Lambrate (5/6)&lt;br /&gt;
|&lt;br /&gt;
|http://www.unibrain.com/Products/VisionImg/Fire_i_BC.htm&lt;br /&gt;
|-&lt;br /&gt;
|4.3mm, no IR filter&lt;br /&gt;
|f2.0&lt;br /&gt;
|1/4&amp;quot;&lt;br /&gt;
|M12x0.5&lt;br /&gt;
|Unibrain&lt;br /&gt;
|2046&lt;br /&gt;
|NO&lt;br /&gt;
|1&lt;br /&gt;
|Lambrate (1/1)&lt;br /&gt;
|&lt;br /&gt;
|http://www.unibrain.com/Products/VisionImg/Fire_i_BC.htm&lt;br /&gt;
|-&lt;br /&gt;
|4.3mm&lt;br /&gt;
|f2.0&lt;br /&gt;
|1/4&amp;quot;&lt;br /&gt;
|M12x0.5&lt;br /&gt;
|Unibrain&lt;br /&gt;
|2043&lt;br /&gt;
|NO&lt;br /&gt;
|3&lt;br /&gt;
|Bovisa (1/3), Lambrate (2/3)&lt;br /&gt;
|&lt;br /&gt;
|http://www.unibrain.com/Products/VisionImg/Fire_i_BC.htm&lt;br /&gt;
|-&lt;br /&gt;
|8mm&lt;br /&gt;
|f2.0&lt;br /&gt;
|1/4&amp;quot;&lt;br /&gt;
|M12x0.5&lt;br /&gt;
|Unibrain&lt;br /&gt;
|2044&lt;br /&gt;
|NO&lt;br /&gt;
|1&lt;br /&gt;
|Lambrate (1/1)&lt;br /&gt;
|&lt;br /&gt;
|http://www.unibrain.com/Products/VisionImg/Fire_i_BC.htm&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Mirrors==&lt;br /&gt;
Much work has been done and is being done at the AIRLab on the topic of '''omnidirectional (machine) vision''' (sometimes referred to as ''omnivision''). Omnidirectional vision systems use special hardware to overcome the limitations of conventional vision systems in terms of field of view. The approach to this problem that we generally adopt is the use of conventional cameras in association with convex '''mirrors''', i.e. the capturing of the image reflected by a suitably-shaped mirror with a camera. The possibility of designing mirrors with specific geometric properties gives a very useful means to control the geometric behaviour of the whole camera+mirror system.&lt;br /&gt;
&lt;br /&gt;
TODO for someone who knows better ;-) : mirror list&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Cameras,_lenses_and_mirrors&amp;diff=4692</id>
		<title>Cameras, lenses and mirrors</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Cameras,_lenses_and_mirrors&amp;diff=4692"/>
				<updated>2008-11-03T10:25:38Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==IMPORTANT NOTES==&lt;br /&gt;
'''Never touch the sensor element (CCD or CMOS) of a camera with anything!''' It can very easily be scratched.&lt;br /&gt;
&lt;br /&gt;
'''Never touch the glass elements of a lens with your hands!''' The oil from human skin is harmful.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Cameras and frame grabbers==&lt;br /&gt;
===Cameras===&lt;br /&gt;
In the AIRLab you can find different kind of cameras. These are the main groups:&lt;br /&gt;
*'''Analogue cameras'''. Video output is given as an electrical signal, which needs analogue-to-digital conversion to be processed by a computer; this is done by a specific card called ''frame grabber'' or ''video capture card'' (the latter tend to be the lowest-performance items; see [[Cameras, lenses and mirrors#Frame grabbers]] for details). Analogue video is outdated for computer vision and robotics applications, due to its cost, low performance and complexity; nowadays digital camera systems (such as all the ones listed below) are always preferred.&lt;br /&gt;
*'''USB cameras'''. Usually very cheap, they are suitable for low-performance applications (i.e. those where low frame rate is needed and low image quality can be accepted). Their main advantage (along with cost) is the fact that every modern computer has USB ports. The fact that the USB standard includes 5V DC power supply lines helps simplifying camera design and use.&lt;br /&gt;
*'''FireWire cameras'''. The FireWire (or IEEE1394) bus is generally used for low-end industrial cameras, i.e. devices with technical characteristics much superior to those typical of USB cameras but low-performance according to typical machine vision standards. Industrial cameras usually give to the user a much wider control over the acquisition parameters compared to consumer cameras, and therefore they are usually preferred in robotics; their downside is the higher cost. There are different versions of IEE1394 link (see http://en.wikipedia.org/wiki/Firewire for details), with different bitrates, starting from the 400Mbit/s FireWire 400. Generally they are all considered superior to USB 2.0, even if theoretical bandwidth is lower for FireWire 400. Firewire ports can include power supply lines, but some interfaces (and in particular those on portable computers) omit them. Although the use of FireWire interfaces has expanded in recent years, they are not yet considered a standard feature for motherboards.&lt;br /&gt;
*'''GigE Vision cameras'''. GigE Vision (or Gigabit Ethernet Vision) is a rather new connection standard for machine vision, based upon the established Ethernet protocol in its Gigabit (i.e. 1000Mbps) version. It is very interesting, as complex multiple-camera systems can be easily built using existing (Gigabit) Ethernet hardware, such as cables and switches. Vision data is acquired simply through a generic Ethernet port, commonly found on motherboards or easily added. However, 100Mbps (or ''fast Ethernet'') ports are not guaranteed to work and can sustain only modest video streams; on the other hand, 1000Mbps ports are now standard on motherboards, so this will not be a problem anymore in a few years. It seems that GigE Vision is becoming the most common interface for low- to medium-performance industrial cameras.&lt;br /&gt;
*'''CameraLink cameras'''. Cameralink is a high-speed interface expressly developed for high-performance machine vision applications. It is a point-to-point link, i.e. a CameraLink connection is used to connect a single camera to a digital acquisition card (''frame grabber''). Its diffusion is limited to applications where extreme frame rates ''and'' resolutions are needed, because CameraLink gear is very expensive.&lt;br /&gt;
&lt;br /&gt;
The following is a list of the cameras available in the AIRLab. (To be precise, it is a list of the cameras that are modern enough to be useful.) For each of them the main specifications (and a link to the full specifications) are given. Details on the different types of lens mount are given below in [[Cameras, lenses and mirrors#Lenses]]. The 'how many?' field tells if multiple, identical items are available. Finally, the 'where?' field tells you in which of the AIRLab sites (listed in [[The Labs]]) you can find an item, and the 'project' field is used to specify which project (if any) is using it.&lt;br /&gt;
&lt;br /&gt;
Ah, one last thing. People like to actually ''find'' things when they look for them, so '''don't forget to update the table when you move something away from its current location'''. If you don't know where you are taking it, just put your name in the table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
!resolution&lt;br /&gt;
!B/W, color&lt;br /&gt;
!max. frame rate&lt;br /&gt;
!sensor size&lt;br /&gt;
!interface&lt;br /&gt;
!maker&lt;br /&gt;
!model&lt;br /&gt;
!lens mount&lt;br /&gt;
!how many?&lt;br /&gt;
!where?&lt;br /&gt;
!project&lt;br /&gt;
!link to full specifications and/or manuals&lt;br /&gt;
|-&lt;br /&gt;
|1628x1236&lt;br /&gt;
|B/W&lt;br /&gt;
|24fps&lt;br /&gt;
|1/1.8&amp;quot;&lt;br /&gt;
|CameraLink&lt;br /&gt;
|Hitachi&lt;br /&gt;
|KP-F200CL&lt;br /&gt;
|C-mount&lt;br /&gt;
|1&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|[[media:KP-F200-Op_Manual.pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|752x480&lt;br /&gt;
|color&lt;br /&gt;
|70fps&lt;br /&gt;
|1/3&amp;quot;&lt;br /&gt;
|GigE&lt;br /&gt;
|Prosilica&lt;br /&gt;
|GC750C&lt;br /&gt;
|C-mount&lt;br /&gt;
|3&lt;br /&gt;
|Lambrate (3/3)&lt;br /&gt;
|RAWSEEDS (3/3)&lt;br /&gt;
|http://www.prosilica.com/products/gc_series.html&lt;br /&gt;
|-&lt;br /&gt;
|659x493&lt;br /&gt;
|color&lt;br /&gt;
|90fps&lt;br /&gt;
|1/3&amp;quot;&lt;br /&gt;
|GigE&lt;br /&gt;
|Prosilica&lt;br /&gt;
|GC650C&lt;br /&gt;
|C-mount&lt;br /&gt;
|1&lt;br /&gt;
|Lambrate&lt;br /&gt;
|RAWSEEDS&lt;br /&gt;
|http://www.prosilica.com/products/gc_series.html&lt;br /&gt;
|-&lt;br /&gt;
|1024x768&lt;br /&gt;
|color&lt;br /&gt;
|30fps&lt;br /&gt;
|1/3&amp;quot;&lt;br /&gt;
|GigE&lt;br /&gt;
|Prosilica&lt;br /&gt;
|GC1020C&lt;br /&gt;
|C-mount&lt;br /&gt;
|2&lt;br /&gt;
|Lambrate (2/2)&lt;br /&gt;
|RAWSEEDS (2/2)&lt;br /&gt;
|http://www.prosilica.com/products/gc_series.html&lt;br /&gt;
|-&lt;br /&gt;
|CCIR (625 lines)&lt;br /&gt;
|B/W&lt;br /&gt;
|CCIR (50fps, interlaced)&lt;br /&gt;
|2/3&amp;quot;&lt;br /&gt;
|analogue&lt;br /&gt;
|Sony&lt;br /&gt;
|XC-ST70CE&lt;br /&gt;
|C-mount&lt;br /&gt;
|2&lt;br /&gt;
|DEI (2/2)&lt;br /&gt;
|&lt;br /&gt;
|[[media:XCST70E_manual.pdf]]&lt;br /&gt;
|-&lt;br /&gt;
|659x494&lt;br /&gt;
|color&lt;br /&gt;
|30fps&lt;br /&gt;
|1/4&amp;quot;&lt;br /&gt;
|FireWire 400&lt;br /&gt;
|Unibrain&lt;br /&gt;
|Fire-i 400 industrial&lt;br /&gt;
|C-mount&lt;br /&gt;
|3&lt;br /&gt;
|Lambrate (3/3)&lt;br /&gt;
|RAWSEEDS (3/3)&lt;br /&gt;
|http://www.unibrain.com/Products/VisionImg/Fire_i_400_Industrial.htm&lt;br /&gt;
|-&lt;br /&gt;
|659x494&lt;br /&gt;
|color&lt;br /&gt;
|30fps&lt;br /&gt;
|1/4&amp;quot;&lt;br /&gt;
|FireWire 400&lt;br /&gt;
|Unibrain&lt;br /&gt;
|Fire-i board camera&lt;br /&gt;
|proprietary&lt;br /&gt;
|8&lt;br /&gt;
|Lambrate (3/8), Bovisa (2/8), [[User:PaoloCalloni]] (1/8), [[User:DavideMigliore]] (1/8), [[User:CristianoAlessandro]] (1/8)&lt;br /&gt;
|RAWSEEDS (2/8), MRT (?/8)&lt;br /&gt;
|http://www.unibrain.com/Products/VisionImg/Fire_i_BC.htm&lt;br /&gt;
|-&lt;br /&gt;
|640x480&lt;br /&gt;
|color&lt;br /&gt;
|30fps&lt;br /&gt;
|1/4&amp;quot;&lt;br /&gt;
|FireWire 400&lt;br /&gt;
|Unibrain&lt;br /&gt;
|Fire-i digital camera&lt;br /&gt;
|fixed optics (4.3mm, f2.0)&lt;br /&gt;
|4&lt;br /&gt;
|Univ. Mi-Bicocca (4/4)&lt;br /&gt;
|&lt;br /&gt;
|http://www.unibrain.com/Products/VisionImg/Fire_i_DC.htm&lt;br /&gt;
|-&lt;br /&gt;
|640x480 dual sensor, 9cm baseline&lt;br /&gt;
|color&lt;br /&gt;
|30fps&lt;br /&gt;
|1/3&amp;quot;&lt;br /&gt;
|FireWire 400&lt;br /&gt;
|Videre Design&lt;br /&gt;
|STOC stereo-on-a-chip stereo camera&lt;br /&gt;
|C-mount, fitted with two 3.5mm, f1.6, 1/2&amp;quot; lenses&lt;br /&gt;
|1&lt;br /&gt;
|Lambrate&lt;br /&gt;
|&lt;br /&gt;
|http://www.videredesign.com/vision/stoc.htm&lt;br /&gt;
|-&lt;br /&gt;
|640x480&lt;br /&gt;
|color&lt;br /&gt;
|60fps&lt;br /&gt;
|1/3&amp;quot;&lt;br /&gt;
|FireWire 400&lt;br /&gt;
|Videre Design&lt;br /&gt;
|DCSG (associated with STOC)&lt;br /&gt;
|C-mount, fitted with one 3.5mm, f1.6, 1/2&amp;quot; lens&lt;br /&gt;
|1&lt;br /&gt;
|Lambrate&lt;br /&gt;
|&lt;br /&gt;
|http://www.videredesign.com/vision/dcsg.htm&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Camcorder==&lt;br /&gt;
In the AIRLab you can find also a ''memory camcorder'', i.e. a consumer camera system that records (lossy) compressed digital video onto standard SD flash memory cards. It's a [http://www.samsung.com/uk/support/download/supportDown.do?group=homeentertainment&amp;amp;type=camcorder&amp;amp;subtype=flashcamcorder&amp;amp;model_nm=VP-MX10&amp;amp;prd_ia_cd=03110800&amp;amp;disp_nm=VP-MX10&amp;amp;mType=&amp;amp;dType=D&amp;amp;vType=R Samsung - VP-MX10H] and can be used to record on video demos, lessons or talks. It is fitted with an 8GB SD card, i.e. the biggest it can be used with.&lt;br /&gt;
&lt;br /&gt;
Main features:&lt;br /&gt;
* maximum resolution 720x576 pixel (progressive scan), the same as DVD&lt;br /&gt;
* MPEG4 encoding (Mplayer for Linux, for example, plays it back perfectly)&lt;br /&gt;
* 2.7&amp;quot; 16:9 LCD Display (but recorded image format is 4:3)&lt;br /&gt;
* 34x Optical zoom (1200x with digital zoom)&lt;br /&gt;
* image stabilizer (use it if you go over, say, 4x of zoom factor)&lt;br /&gt;
* USB connection to PC (it's seen as a mass storage device)&lt;br /&gt;
* maximum usable SD card capacity 8GB&lt;br /&gt;
* 220' of video on an 8GB card in best-quality mode&lt;br /&gt;
* 120' (claimed) battery duration (with fully charged battery, during recording and if you don't play much with the zoom)&lt;br /&gt;
* unreliable battery charge indicator :-(&lt;br /&gt;
* battery discharges quickly even when the device is off, so be sure to recharge it before use&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''WHERE IS THE CAMERA NOW''': at the moment, ROSSELLA BLATT. Date: November 3rd 2008&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
===Frame grabbers===&lt;br /&gt;
As previously said, a '''frame grabber''' is an electronic board that connects to one or more cameras, and converts the signals from the cameras into a data stream that can be elaborated by a computer. They are usually designed as expansion boards to be fitted into the computer case. Frame grabbers are necessary for ''analogue cameras'' (as they include the analogue/digital converters) or for CameraLink digital cameras (in this case the frame grabber is essentially a high speed dedicated digital interface). Other kinds of digital cameras don't need a frame grabber: this is one of the main advantages of digital cameras over analogue ones in machine vision applications, where the processing is almost always performed by computers.&lt;br /&gt;
In the AIRLab two models of frame grabber are available:&lt;br /&gt;
*a digital frame grabber from Euresys, model Expert 2, having two CameraLink inputs (http://www.euresys.com/Products/grablink/GrablinkSeries.asp). ''Notes: needs a PCI-X slot; one of the inputs is not working due to a fault.''&lt;br /&gt;
*two multichannel analogue frame grabbers from Matrox, model Meteor II/Multi-Channel, having three analogue inputs that can be combined into a single three-channel RGB analogue input (http://www.matrox.com/imaging/support/old_products/home.cfm). ''Note: one item is permanently mounted on the MO.RO.1 robot: see [[The MO.RO. family]] for details.''&lt;br /&gt;
*two multichannel analogue frame grabbers from Matrox, model Meteor II/Multi-Channel, having three analogue inputs that can be combined into a single three-channel RGB analogue input (http://www.matrox.com/imaging/support/old_products/home.cfm). ''Note: one item is permanently mounted on the MO.RO.1 robot: see [[The MO.RO. family]] for details.''&lt;br /&gt;
*two single-channel analogue frame grabbers from Matrox, models Meteor and Meteor Pro (http://www.matrox.com/imaging/support/old_products/home.cfm).&lt;br /&gt;
All the frame grabbers (except the one on the MO.RO.1) are currently in AIRLab/DEI. If you move one of them, please '''write it down here'''... and do it NOW!&lt;br /&gt;
&lt;br /&gt;
==Lenses==&lt;br /&gt;
Industrial cameras usually have interchangeable lenses. This allows for the choice of the lens that is more suitable to the considered application. There are two main standards for industrial camera lenses: '''C-mount''' and '''CS-mount'''. Both are screw-type mounts. CS-mount is simply a modified C-mount where the distance between the back of the lens and the sensor element (CCD or CMOS) is shorter: therefore a C-mount lens can be mounted on a CS-mount camera if an ''adapter ring'' (i.e. a distancing cylinder with suitable threads) is placed between them. It is impossible, though, to use a CS-mount lens on a C-mount camera: if you try you will almost certainly break the sensor, scratch the lens, or both. Just because a lens fits a camera, it doesn't mean it can be actually mounted on it!&lt;br /&gt;
&lt;br /&gt;
At the AIRLab we also use lenses specifically designed for Unibrain's ''board cameras'': they are very simple, with no iris, and very small. Their mounting system is an M12x0.5 metric screw thread.&lt;br /&gt;
&lt;br /&gt;
Be aware that sensor dimension (i.e. its diagonal, measured in fractions of an inch) is  ''not'' the same for all cameras. Therefore one of the key specifications for a lens is the maximum sensor dimension supported. If you use a given lens with too big a sensor, the edges of the image will be black as they lie outside the circle of the projected image. Also beware of the strange convention used for sensor diagonals, i.e. a fraction in the form A/B&amp;quot; where A and B are integer ''or non-integer'' numbers. For instance an 1/2&amp;quot; sensor is smaller than an 1/1.8&amp;quot; one.&lt;br /&gt;
The variability of sensor dimensions has another side effect: the same lens has a different angle of view if you change the sensor size. Therefore the same lens can behave as a wide-angle with a large sensor and as a telephoto with a small sensor.&lt;br /&gt;
&lt;br /&gt;
An useful guide to lenses (in Italian or English) can be found at http://www.rapitron.it/guidaob.htm.&lt;br /&gt;
&lt;br /&gt;
The following is a list of the actual lenses available in the AIRLab. For each of them the main specifications (and a link to the maker's or vendor's page for full specifications) are given. A '?' means an unknown parameter: if you know its value or experimentally find out it when using the lens (e.g. the maximum sensor size), please ''update the table'' before the information is lost again! Lenses having 'M12x0.5' in Column 'mount type' are only usable with Unibrain's Fire-i board cameras. A 'YES' in the 'Mpixel' column indicates a so-called ''Megapixel lens'', i.e. a high quality, low-distortion lens designed for high-resolution industrial cameras (typically having large sensors); please note that some of these are specifically designed for B/W (i.e. black and white) cameras. The 'how many?' field tells if multiple, identical items are available. Finally, the 'where?' field tells you in which of the AIRLab sites (listed in [[The Labs]]) you can find an item, and the 'project' field is used to specify which project (if any) is using it. &lt;br /&gt;
&lt;br /&gt;
Ah, one last thing. People like to actually ''find'' things when they look for them, so '''don't forget to update the table when you move something away from its current location'''. If you don't know where you are bringing it, just put your name in the table.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellpadding=&amp;quot;5&amp;quot; cellspacing=&amp;quot;0&amp;quot;&lt;br /&gt;
!focal length&lt;br /&gt;
!max. aperture&lt;br /&gt;
!max. sensor size&lt;br /&gt;
!mount type&lt;br /&gt;
!maker&lt;br /&gt;
!model&lt;br /&gt;
!Mpixel&lt;br /&gt;
!how many?&lt;br /&gt;
!where?&lt;br /&gt;
!project&lt;br /&gt;
!link to full specifications&lt;br /&gt;
|-&lt;br /&gt;
|3.5mm&lt;br /&gt;
|f1.4&lt;br /&gt;
|?&lt;br /&gt;
|C-mount&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|1&lt;br /&gt;
|Lambrate&lt;br /&gt;
|LURCH&lt;br /&gt;
|?&lt;br /&gt;
|-&lt;br /&gt;
|4.0mm&lt;br /&gt;
|f2.0&lt;br /&gt;
|1/2&amp;quot;&lt;br /&gt;
|C-mount&lt;br /&gt;
|Microtron&lt;br /&gt;
|FV0420&lt;br /&gt;
|YES (B/W only)&lt;br /&gt;
|2&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|http://www.rapitron.it/obmegpxman1.htm&lt;br /&gt;
|-&lt;br /&gt;
|4.5mm&lt;br /&gt;
|f1.4&lt;br /&gt;
|1/2&amp;quot;&lt;br /&gt;
|C-mount&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|1&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|?&lt;br /&gt;
|-&lt;br /&gt;
|4.8mm&lt;br /&gt;
|f1.8&lt;br /&gt;
|2/3&amp;quot;&lt;br /&gt;
|C-mount&lt;br /&gt;
|Computar&lt;br /&gt;
|M0518&lt;br /&gt;
|NO&lt;br /&gt;
|1&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|http://www.computar.com/cctvprod/computar/mono/048.html&lt;br /&gt;
|-&lt;br /&gt;
|6mm&lt;br /&gt;
|f1.4&lt;br /&gt;
|?&lt;br /&gt;
|C-mount&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|1&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|?&lt;br /&gt;
|-&lt;br /&gt;
|6mm&lt;br /&gt;
|f1.4&lt;br /&gt;
|1/2&amp;quot;&lt;br /&gt;
|C-mount&lt;br /&gt;
|Goyo&lt;br /&gt;
|GMHR26014MCN&lt;br /&gt;
|YES&lt;br /&gt;
|4&lt;br /&gt;
|DEI&lt;br /&gt;
|RAWSEEDS (4/4)&lt;br /&gt;
|http://www.goyooptical.com/products/industrial/hrmegapixel.html&lt;br /&gt;
|-&lt;br /&gt;
|8mm&lt;br /&gt;
|f1.4&lt;br /&gt;
|?&lt;br /&gt;
|C-mount&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|1&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|?&lt;br /&gt;
|-&lt;br /&gt;
|8mm&lt;br /&gt;
|f1.4&lt;br /&gt;
|2/3&amp;quot;&lt;br /&gt;
|C-mount&lt;br /&gt;
|Goyo&lt;br /&gt;
|GMHR38014MCN&lt;br /&gt;
|YES&lt;br /&gt;
|2&lt;br /&gt;
|DEI&lt;br /&gt;
|RAWSEEDS (2/2)&lt;br /&gt;
|http://www.goyooptical.com/products/industrial/hrmegapixel.html&lt;br /&gt;
|-&lt;br /&gt;
|8.5mm&lt;br /&gt;
|f1.3&lt;br /&gt;
|2/3&amp;quot;&lt;br /&gt;
|C-mount&lt;br /&gt;
|Computar&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|2&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|(old model)&lt;br /&gt;
|-&lt;br /&gt;
|12mm&lt;br /&gt;
|f1.8&lt;br /&gt;
|2/3&amp;quot;&lt;br /&gt;
|C-mount&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|2&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|&lt;br /&gt;
|-&lt;br /&gt;
|12mm&lt;br /&gt;
|f1.4&lt;br /&gt;
|2/3&amp;quot;&lt;br /&gt;
|C-mount&lt;br /&gt;
|Goyo&lt;br /&gt;
|GMHR31214MCN&lt;br /&gt;
|YES&lt;br /&gt;
|1&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|http://www.goyooptical.com/products/industrial/hrmegapixel.html&lt;br /&gt;
|-&lt;br /&gt;
|15mm&lt;br /&gt;
|f2.0&lt;br /&gt;
|2/3&amp;quot;&lt;br /&gt;
|C-mount&lt;br /&gt;
|Microtron&lt;br /&gt;
|FV1520&lt;br /&gt;
|YES&lt;br /&gt;
|1&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|http://www.rapitron.it/obmegpxman1.htm&lt;br /&gt;
|-&lt;br /&gt;
|6-15mm&lt;br /&gt;
|f1.4&lt;br /&gt;
|?&lt;br /&gt;
|C-mount&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|1&lt;br /&gt;
|Lambrate&lt;br /&gt;
|&lt;br /&gt;
|?&lt;br /&gt;
|-&lt;br /&gt;
|12.5-75mm&lt;br /&gt;
|f1.8&lt;br /&gt;
|?&lt;br /&gt;
|C-mount&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|?&lt;br /&gt;
|1&lt;br /&gt;
|DEI&lt;br /&gt;
|&lt;br /&gt;
|?&lt;br /&gt;
|-&lt;br /&gt;
|2.1mm&lt;br /&gt;
|f2.0&lt;br /&gt;
|1/4&amp;quot;&lt;br /&gt;
|M12x0.5&lt;br /&gt;
|Unibrain&lt;br /&gt;
|2042&lt;br /&gt;
|NO&lt;br /&gt;
|6&lt;br /&gt;
|Bovisa (1/6), Lambrate (5/6)&lt;br /&gt;
|&lt;br /&gt;
|http://www.unibrain.com/Products/VisionImg/Fire_i_BC.htm&lt;br /&gt;
|-&lt;br /&gt;
|4.3mm, no IR filter&lt;br /&gt;
|f2.0&lt;br /&gt;
|1/4&amp;quot;&lt;br /&gt;
|M12x0.5&lt;br /&gt;
|Unibrain&lt;br /&gt;
|2046&lt;br /&gt;
|NO&lt;br /&gt;
|1&lt;br /&gt;
|Lambrate (1/1)&lt;br /&gt;
|&lt;br /&gt;
|http://www.unibrain.com/Products/VisionImg/Fire_i_BC.htm&lt;br /&gt;
|-&lt;br /&gt;
|4.3mm&lt;br /&gt;
|f2.0&lt;br /&gt;
|1/4&amp;quot;&lt;br /&gt;
|M12x0.5&lt;br /&gt;
|Unibrain&lt;br /&gt;
|2043&lt;br /&gt;
|NO&lt;br /&gt;
|3&lt;br /&gt;
|Bovisa (1/3), Lambrate (2/3)&lt;br /&gt;
|&lt;br /&gt;
|http://www.unibrain.com/Products/VisionImg/Fire_i_BC.htm&lt;br /&gt;
|-&lt;br /&gt;
|8mm&lt;br /&gt;
|f2.0&lt;br /&gt;
|1/4&amp;quot;&lt;br /&gt;
|M12x0.5&lt;br /&gt;
|Unibrain&lt;br /&gt;
|2044&lt;br /&gt;
|NO&lt;br /&gt;
|1&lt;br /&gt;
|Lambrate (1/1)&lt;br /&gt;
|&lt;br /&gt;
|http://www.unibrain.com/Products/VisionImg/Fire_i_BC.htm&lt;br /&gt;
|}&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Mirrors==&lt;br /&gt;
Much work has been done and is being done at the AIRLab on the topic of '''omnidirectional (machine) vision''' (sometimes referred to as ''omnivision''). Omnidirectional vision systems use special hardware to overcome the limitations of conventional vision systems in terms of field of view. The approach to this problem that we generally adopt is the use of conventional cameras in association with convex '''mirrors''', i.e. the capturing of the image reflected by a suitably-shaped mirror with a camera. The possibility of designing mirrors with specific geometric properties gives a very useful means to control the geometric behaviour of the whole camera+mirror system.&lt;br /&gt;
&lt;br /&gt;
TODO for someone who knows better ;-) : mirror list&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Electroencephalographs&amp;diff=3789</id>
		<title>Electroencephalographs</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Electroencephalographs&amp;diff=3789"/>
				<updated>2008-07-14T13:12:46Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: /* Booking */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== BeLight ==&lt;br /&gt;
&lt;br /&gt;
A brief description of EbNeuro BeLight should go here.&lt;br /&gt;
&lt;br /&gt;
=== Booking ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Please keep the table lines ordered by time (nearest bookings first); add new entries like this:&lt;br /&gt;
---CUT---&lt;br /&gt;
| Monday 13 March || 11:00-18:00 || Donald Duck&lt;br /&gt;
|- &lt;br /&gt;
---CUT---&lt;br /&gt;
Use abbreviations, if you like.&lt;br /&gt;
Please remove old entries.&lt;br /&gt;
--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot;&lt;br /&gt;
! Day !! Time !! Person&lt;br /&gt;
|-&lt;br /&gt;
| Thursday 10 July || 10:00 - 19:00 || Bernardo&lt;br /&gt;
|-&lt;br /&gt;
| Friday 11 July || 15:00 - 19:00 || Rossella&lt;br /&gt;
|-&lt;br /&gt;
| Tuesday 15 July || 9:00 - 19:00 || Rossella&lt;br /&gt;
|-&lt;br /&gt;
| Thursday 17 July || 10:30 - 19:00 || Rossella&lt;br /&gt;
|-&lt;br /&gt;
| Friday 18 July || 10:30 - 19:00 || Rossella&lt;br /&gt;
|-&lt;br /&gt;
| Thursday 31 July || 10:30 - 19:00 || Rossella&lt;br /&gt;
|-&lt;br /&gt;
| Friday 1 August || 10:30 - 19:00 || Rossella&lt;br /&gt;
|-&lt;br /&gt;
|}&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=File:PAIS.pdf&amp;diff=3544</id>
		<title>File:PAIS.pdf</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=File:PAIS.pdf&amp;diff=3544"/>
				<updated>2008-06-17T12:04:13Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Lung_Cancer_Detection_by_an_Electronic_Nose&amp;diff=3543</id>
		<title>Lung Cancer Detection by an Electronic Nose</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Lung_Cancer_Detection_by_an_Electronic_Nose&amp;diff=3543"/>
				<updated>2008-06-17T12:03:49Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: /* Link to project documents and files */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== '''Part 1: project profile''' ==&lt;br /&gt;
&lt;br /&gt;
=== Project name ===&lt;br /&gt;
&lt;br /&gt;
Lung Cancer Detection by an Electronic Nose&lt;br /&gt;
&lt;br /&gt;
=== Project short description ===&lt;br /&gt;
&lt;br /&gt;
The electronic nose is an instrument able to detect and recognize odors, that is the volatile substances in the atmosphere or emitted by the analyzed substance. This device can react to a gas substance by providing signals that can be analyzed to classify the input. It is composed of a sensor array (6 MOS sensors, in our case) and a pattern classification process based on machine learning techniques. &lt;br /&gt;
In this project, we have been using an electronic nose based on an array of six MOS sensors, to recognize the presence of lung cancer in breaths' subjects, diagnosing the disease with a non invasive and low cost method. &lt;br /&gt;
&lt;br /&gt;
During the first phase of our research, we have evaluated the possibility and accuracy of lung cancer diagnosis by classifying the&lt;br /&gt;
olfactory signal associated to exhalations of subjects. &lt;br /&gt;
&lt;br /&gt;
At the end of the first phase, results have been very satisfactory and promising: we achieved an average accuracy of 92.6%, sensitivity of&lt;br /&gt;
95.3% and specificity of 90.5%. In particular we analyzed the breath of 101 individuals, of which 58 control subjects, and 43 suffer from&lt;br /&gt;
different types of lung cancer (primary and not) at different stages.&lt;br /&gt;
In order to find the components able to discriminate between the two classes ‘healthy’ and ‘sick’ at best, and to reduce the dimensionality&lt;br /&gt;
of the problem, we have extracted the most significant features and projected them into a lower dimensional space using Non Parametric&lt;br /&gt;
Linear Discriminant Analysis. Finally, we have used these features as input to several supervised pattern classification algorithms, based&lt;br /&gt;
on different k-nearest neighbors (k-NN) approaches (classic, modified and Fuzzy k-NN), linear and quadratic discriminant classifiers&lt;br /&gt;
and on a feed-forward artificial neural network (ANN). The observed results have all been validated using cross-validation. &lt;br /&gt;
&lt;br /&gt;
These results pushed us to begin the second phase of the project, still in progress, to investigate the possibility of early lung cancer diagnosis: we are involving a larger number of subjects, partioned in different classes according to the type and stage of the disease. The research demonstrates that the electronic nose is a promising alternative to current lung cancer diagnostic techniques: the obtained predictive errors are lower than those achieved by present diagnostic methods, and the cost of the analysis, both in money, time and resources, is lower. The introduction of this technology will lead to very important social and business effects: its low price and small dimensions allow a large scale distribution, giving the opportunity to perform non invasive, cheap, quick, and massive early diagnosis and screening.&lt;br /&gt;
&lt;br /&gt;
=== Dates ===&lt;br /&gt;
Start date: 2007/01/01&lt;br /&gt;
&lt;br /&gt;
End date: --&lt;br /&gt;
&lt;br /&gt;
=== Website(s) ===&lt;br /&gt;
&lt;br /&gt;
At the moment no website avaible&lt;br /&gt;
&lt;br /&gt;
=== People involved ===&lt;br /&gt;
&lt;br /&gt;
===== Project head(s) =====&lt;br /&gt;
&lt;br /&gt;
A. Bonarini - [[User:AndreaBonarini]]&lt;br /&gt;
&lt;br /&gt;
M. Matteucci - [[User:MatteoMatteucci]]&lt;br /&gt;
&lt;br /&gt;
===== Other Politecnico di Milano people =====&lt;br /&gt;
&lt;br /&gt;
R. Blatt - [[User:RossellaBlatt]]&lt;br /&gt;
&lt;br /&gt;
===== Students currently working on the project =====&lt;br /&gt;
&lt;br /&gt;
Claudio Trameri - [[User:ClaudioTrameri]]&lt;br /&gt;
&lt;br /&gt;
Mauro Verdirosa - [[User:MauroVerdirosa]]&lt;br /&gt;
&lt;br /&gt;
===== Students who worked on the project in the past =====&lt;br /&gt;
&lt;br /&gt;
===== External personnel: =====&lt;br /&gt;
&lt;br /&gt;
Dott. Ugo Pastorino (Istituto dei Tumori - Milano)&lt;br /&gt;
&lt;br /&gt;
Dott. Elisa Calabrò (Istituto dei Tumori - Milano)&lt;br /&gt;
&lt;br /&gt;
Dott. Matteo Della Torre (SACMI - Imola)&lt;br /&gt;
&lt;br /&gt;
=== Laboratory work and risk analysis ===&lt;br /&gt;
&lt;br /&gt;
Laboratory work for this project will be mainly performed at the Istituto Nazionale dei Tumori di Milano, where the acquisistion of subjects' breath, both sick and healthy will be done. &lt;br /&gt;
For this kind of work, there are not potential risks.&lt;br /&gt;
&lt;br /&gt;
== '''Part 2: project description''' ==&lt;br /&gt;
&lt;br /&gt;
=== State of the art ===&lt;br /&gt;
&lt;br /&gt;
=== Preliminary and sketches ===&lt;br /&gt;
&lt;br /&gt;
=== Design notes and guidelines ===&lt;br /&gt;
&lt;br /&gt;
=== Link to project documents and files ===&lt;br /&gt;
&lt;br /&gt;
Results obtained from this work have been presented at different conferences:&lt;br /&gt;
&lt;br /&gt;
* '''Prestigious Applications of Intelligent Systems (PAIS 2008), Patras, Greece''' &lt;br /&gt;
:The 5th Prestigious Applications of Intelligent Systems (PAIS 2008) is a sub-conference of the 18th European Conference on Artificial Intteligence (ECAI 2008) that will be held at the University of Patras, Greece, from July 21st to 25th. &lt;br /&gt;
:[[Image:PAIS.pdf|Paper-PAIS2008]] &lt;br /&gt;
&lt;br /&gt;
* '''International Joint Conference on Neural Networks (IJCNN 2007), Orlando, FL, USA'''&lt;br /&gt;
:'''Lung Cancer Identification by an Electronic Nose based on array of MOS Sensors''', Blatt Rossella, Bonarini Andrea, Calabrò Elisa, Della Torre Matteo, Matteucci Matteo, Pastorino Ugo. Proceedings of the 2007 International Joint Conference on Neural Networks (IJCNN 2007), Orlando, FL, USA: [[Image:IJCNNfinal.pdf|Paper-IJCNN2007]] &lt;br /&gt;
&lt;br /&gt;
:Short presentation of the ''Lung Cancer Identification by an Electronic Nose based on an array of MOS Sensors'' paper: [[Image:LungCancerIdentificationIJCNN2007.pdf|Presentation-IJCNN2007]]&lt;br /&gt;
&lt;br /&gt;
* '''International Workshop on Fuzzy Logic and Applications (WILF 2007), Ruta di Camogli, Genova, Italy'''&lt;br /&gt;
&lt;br /&gt;
: '''Fuzzy k-NN Lung Cancer Identification by an Electronic Nose''', Blatt Rossella, Bonarini Andrea, Calabrò Elisa, Della Torre Matteo, Matteucci Matteo, Pastorino Ugo. Proceedings of the 7th International Workshop on Fuzzy Logic and Applications, WILF 2007, Lecture Notes in Computer Science (LNAI), LNAI 4578, pages 261-268, Springer. Camogli (GE), Italy, July 2007.&lt;br /&gt;
&lt;br /&gt;
=== Description and results of experiments ===&lt;br /&gt;
&lt;br /&gt;
=== Photos and videos ===&lt;br /&gt;
&lt;br /&gt;
=== Link to source code of the software written for the project ===&lt;br /&gt;
&lt;br /&gt;
=== Description and results of experiments ===&lt;br /&gt;
&lt;br /&gt;
=== Useful internet links ===&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Lung_Cancer_Detection_by_an_Electronic_Nose&amp;diff=3542</id>
		<title>Lung Cancer Detection by an Electronic Nose</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Lung_Cancer_Detection_by_an_Electronic_Nose&amp;diff=3542"/>
				<updated>2008-06-17T12:02:34Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: /* Link to project documents and files */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== '''Part 1: project profile''' ==&lt;br /&gt;
&lt;br /&gt;
=== Project name ===&lt;br /&gt;
&lt;br /&gt;
Lung Cancer Detection by an Electronic Nose&lt;br /&gt;
&lt;br /&gt;
=== Project short description ===&lt;br /&gt;
&lt;br /&gt;
The electronic nose is an instrument able to detect and recognize odors, that is the volatile substances in the atmosphere or emitted by the analyzed substance. This device can react to a gas substance by providing signals that can be analyzed to classify the input. It is composed of a sensor array (6 MOS sensors, in our case) and a pattern classification process based on machine learning techniques. &lt;br /&gt;
In this project, we have been using an electronic nose based on an array of six MOS sensors, to recognize the presence of lung cancer in breaths' subjects, diagnosing the disease with a non invasive and low cost method. &lt;br /&gt;
&lt;br /&gt;
During the first phase of our research, we have evaluated the possibility and accuracy of lung cancer diagnosis by classifying the&lt;br /&gt;
olfactory signal associated to exhalations of subjects. &lt;br /&gt;
&lt;br /&gt;
At the end of the first phase, results have been very satisfactory and promising: we achieved an average accuracy of 92.6%, sensitivity of&lt;br /&gt;
95.3% and specificity of 90.5%. In particular we analyzed the breath of 101 individuals, of which 58 control subjects, and 43 suffer from&lt;br /&gt;
different types of lung cancer (primary and not) at different stages.&lt;br /&gt;
In order to find the components able to discriminate between the two classes ‘healthy’ and ‘sick’ at best, and to reduce the dimensionality&lt;br /&gt;
of the problem, we have extracted the most significant features and projected them into a lower dimensional space using Non Parametric&lt;br /&gt;
Linear Discriminant Analysis. Finally, we have used these features as input to several supervised pattern classification algorithms, based&lt;br /&gt;
on different k-nearest neighbors (k-NN) approaches (classic, modified and Fuzzy k-NN), linear and quadratic discriminant classifiers&lt;br /&gt;
and on a feed-forward artificial neural network (ANN). The observed results have all been validated using cross-validation. &lt;br /&gt;
&lt;br /&gt;
These results pushed us to begin the second phase of the project, still in progress, to investigate the possibility of early lung cancer diagnosis: we are involving a larger number of subjects, partioned in different classes according to the type and stage of the disease. The research demonstrates that the electronic nose is a promising alternative to current lung cancer diagnostic techniques: the obtained predictive errors are lower than those achieved by present diagnostic methods, and the cost of the analysis, both in money, time and resources, is lower. The introduction of this technology will lead to very important social and business effects: its low price and small dimensions allow a large scale distribution, giving the opportunity to perform non invasive, cheap, quick, and massive early diagnosis and screening.&lt;br /&gt;
&lt;br /&gt;
=== Dates ===&lt;br /&gt;
Start date: 2007/01/01&lt;br /&gt;
&lt;br /&gt;
End date: --&lt;br /&gt;
&lt;br /&gt;
=== Website(s) ===&lt;br /&gt;
&lt;br /&gt;
At the moment no website avaible&lt;br /&gt;
&lt;br /&gt;
=== People involved ===&lt;br /&gt;
&lt;br /&gt;
===== Project head(s) =====&lt;br /&gt;
&lt;br /&gt;
A. Bonarini - [[User:AndreaBonarini]]&lt;br /&gt;
&lt;br /&gt;
M. Matteucci - [[User:MatteoMatteucci]]&lt;br /&gt;
&lt;br /&gt;
===== Other Politecnico di Milano people =====&lt;br /&gt;
&lt;br /&gt;
R. Blatt - [[User:RossellaBlatt]]&lt;br /&gt;
&lt;br /&gt;
===== Students currently working on the project =====&lt;br /&gt;
&lt;br /&gt;
Claudio Trameri - [[User:ClaudioTrameri]]&lt;br /&gt;
&lt;br /&gt;
Mauro Verdirosa - [[User:MauroVerdirosa]]&lt;br /&gt;
&lt;br /&gt;
===== Students who worked on the project in the past =====&lt;br /&gt;
&lt;br /&gt;
===== External personnel: =====&lt;br /&gt;
&lt;br /&gt;
Dott. Ugo Pastorino (Istituto dei Tumori - Milano)&lt;br /&gt;
&lt;br /&gt;
Dott. Elisa Calabrò (Istituto dei Tumori - Milano)&lt;br /&gt;
&lt;br /&gt;
Dott. Matteo Della Torre (SACMI - Imola)&lt;br /&gt;
&lt;br /&gt;
=== Laboratory work and risk analysis ===&lt;br /&gt;
&lt;br /&gt;
Laboratory work for this project will be mainly performed at the Istituto Nazionale dei Tumori di Milano, where the acquisistion of subjects' breath, both sick and healthy will be done. &lt;br /&gt;
For this kind of work, there are not potential risks.&lt;br /&gt;
&lt;br /&gt;
== '''Part 2: project description''' ==&lt;br /&gt;
&lt;br /&gt;
=== State of the art ===&lt;br /&gt;
&lt;br /&gt;
=== Preliminary and sketches ===&lt;br /&gt;
&lt;br /&gt;
=== Design notes and guidelines ===&lt;br /&gt;
&lt;br /&gt;
=== Link to project documents and files ===&lt;br /&gt;
&lt;br /&gt;
Results obtained from this work have been presented at different conferences:&lt;br /&gt;
&lt;br /&gt;
* '''Prestigious Applications of Intelligent Systems (PAIS 2008), Patras, Greece''' &lt;br /&gt;
:The 5th Prestigious Applications of Intelligent Systems (PAIS 2008) is a sub-conference of the 18th European Conference on Artificial Intteligence (ECAI 2008) that will be held at the University of Patras, Greece, from July 21st to 25th. &lt;br /&gt;
:The presented paper will be soon available.&lt;br /&gt;
&lt;br /&gt;
* '''International Joint Conference on Neural Networks (IJCNN 2007), Orlando, FL, USA'''&lt;br /&gt;
:'''Lung Cancer Identification by an Electronic Nose based on array of MOS Sensors''', Blatt Rossella, Bonarini Andrea, Calabrò Elisa, Della Torre Matteo, Matteucci Matteo, Pastorino Ugo. Proceedings of the 2007 International Joint Conference on Neural Networks (IJCNN 2007), Orlando, FL, USA: [[Special:IJCNNfinal.pdf|Paper-IJCNN2007]] &lt;br /&gt;
&lt;br /&gt;
:Short presentation of the ''Lung Cancer Identification by an Electronic Nose based on an array of MOS Sensors'' paper: [[Special:LungCancerIdentificationIJCNN2007.pdf|Presentation-IJCNN2007]]&lt;br /&gt;
&lt;br /&gt;
* '''International Workshop on Fuzzy Logic and Applications (WILF 2007), Ruta di Camogli, Genova, Italy'''&lt;br /&gt;
&lt;br /&gt;
: '''Fuzzy k-NN Lung Cancer Identification by an Electronic Nose''', Blatt Rossella, Bonarini Andrea, Calabrò Elisa, Della Torre Matteo, Matteucci Matteo, Pastorino Ugo. Proceedings of the 7th International Workshop on Fuzzy Logic and Applications, WILF 2007, Lecture Notes in Computer Science (LNAI), LNAI 4578, pages 261-268, Springer. Camogli (GE), Italy, July 2007.&lt;br /&gt;
&lt;br /&gt;
=== Description and results of experiments ===&lt;br /&gt;
&lt;br /&gt;
=== Photos and videos ===&lt;br /&gt;
&lt;br /&gt;
=== Link to source code of the software written for the project ===&lt;br /&gt;
&lt;br /&gt;
=== Description and results of experiments ===&lt;br /&gt;
&lt;br /&gt;
=== Useful internet links ===&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Lung_Cancer_Detection_by_an_Electronic_Nose&amp;diff=3541</id>
		<title>Lung Cancer Detection by an Electronic Nose</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Lung_Cancer_Detection_by_an_Electronic_Nose&amp;diff=3541"/>
				<updated>2008-06-17T12:01:31Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: /* Link to project documents and files */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== '''Part 1: project profile''' ==&lt;br /&gt;
&lt;br /&gt;
=== Project name ===&lt;br /&gt;
&lt;br /&gt;
Lung Cancer Detection by an Electronic Nose&lt;br /&gt;
&lt;br /&gt;
=== Project short description ===&lt;br /&gt;
&lt;br /&gt;
The electronic nose is an instrument able to detect and recognize odors, that is the volatile substances in the atmosphere or emitted by the analyzed substance. This device can react to a gas substance by providing signals that can be analyzed to classify the input. It is composed of a sensor array (6 MOS sensors, in our case) and a pattern classification process based on machine learning techniques. &lt;br /&gt;
In this project, we have been using an electronic nose based on an array of six MOS sensors, to recognize the presence of lung cancer in breaths' subjects, diagnosing the disease with a non invasive and low cost method. &lt;br /&gt;
&lt;br /&gt;
During the first phase of our research, we have evaluated the possibility and accuracy of lung cancer diagnosis by classifying the&lt;br /&gt;
olfactory signal associated to exhalations of subjects. &lt;br /&gt;
&lt;br /&gt;
At the end of the first phase, results have been very satisfactory and promising: we achieved an average accuracy of 92.6%, sensitivity of&lt;br /&gt;
95.3% and specificity of 90.5%. In particular we analyzed the breath of 101 individuals, of which 58 control subjects, and 43 suffer from&lt;br /&gt;
different types of lung cancer (primary and not) at different stages.&lt;br /&gt;
In order to find the components able to discriminate between the two classes ‘healthy’ and ‘sick’ at best, and to reduce the dimensionality&lt;br /&gt;
of the problem, we have extracted the most significant features and projected them into a lower dimensional space using Non Parametric&lt;br /&gt;
Linear Discriminant Analysis. Finally, we have used these features as input to several supervised pattern classification algorithms, based&lt;br /&gt;
on different k-nearest neighbors (k-NN) approaches (classic, modified and Fuzzy k-NN), linear and quadratic discriminant classifiers&lt;br /&gt;
and on a feed-forward artificial neural network (ANN). The observed results have all been validated using cross-validation. &lt;br /&gt;
&lt;br /&gt;
These results pushed us to begin the second phase of the project, still in progress, to investigate the possibility of early lung cancer diagnosis: we are involving a larger number of subjects, partioned in different classes according to the type and stage of the disease. The research demonstrates that the electronic nose is a promising alternative to current lung cancer diagnostic techniques: the obtained predictive errors are lower than those achieved by present diagnostic methods, and the cost of the analysis, both in money, time and resources, is lower. The introduction of this technology will lead to very important social and business effects: its low price and small dimensions allow a large scale distribution, giving the opportunity to perform non invasive, cheap, quick, and massive early diagnosis and screening.&lt;br /&gt;
&lt;br /&gt;
=== Dates ===&lt;br /&gt;
Start date: 2007/01/01&lt;br /&gt;
&lt;br /&gt;
End date: --&lt;br /&gt;
&lt;br /&gt;
=== Website(s) ===&lt;br /&gt;
&lt;br /&gt;
At the moment no website avaible&lt;br /&gt;
&lt;br /&gt;
=== People involved ===&lt;br /&gt;
&lt;br /&gt;
===== Project head(s) =====&lt;br /&gt;
&lt;br /&gt;
A. Bonarini - [[User:AndreaBonarini]]&lt;br /&gt;
&lt;br /&gt;
M. Matteucci - [[User:MatteoMatteucci]]&lt;br /&gt;
&lt;br /&gt;
===== Other Politecnico di Milano people =====&lt;br /&gt;
&lt;br /&gt;
R. Blatt - [[User:RossellaBlatt]]&lt;br /&gt;
&lt;br /&gt;
===== Students currently working on the project =====&lt;br /&gt;
&lt;br /&gt;
Claudio Trameri - [[User:ClaudioTrameri]]&lt;br /&gt;
&lt;br /&gt;
Mauro Verdirosa - [[User:MauroVerdirosa]]&lt;br /&gt;
&lt;br /&gt;
===== Students who worked on the project in the past =====&lt;br /&gt;
&lt;br /&gt;
===== External personnel: =====&lt;br /&gt;
&lt;br /&gt;
Dott. Ugo Pastorino (Istituto dei Tumori - Milano)&lt;br /&gt;
&lt;br /&gt;
Dott. Elisa Calabrò (Istituto dei Tumori - Milano)&lt;br /&gt;
&lt;br /&gt;
Dott. Matteo Della Torre (SACMI - Imola)&lt;br /&gt;
&lt;br /&gt;
=== Laboratory work and risk analysis ===&lt;br /&gt;
&lt;br /&gt;
Laboratory work for this project will be mainly performed at the Istituto Nazionale dei Tumori di Milano, where the acquisistion of subjects' breath, both sick and healthy will be done. &lt;br /&gt;
For this kind of work, there are not potential risks.&lt;br /&gt;
&lt;br /&gt;
== '''Part 2: project description''' ==&lt;br /&gt;
&lt;br /&gt;
=== State of the art ===&lt;br /&gt;
&lt;br /&gt;
=== Preliminary and sketches ===&lt;br /&gt;
&lt;br /&gt;
=== Design notes and guidelines ===&lt;br /&gt;
&lt;br /&gt;
=== Link to project documents and files ===&lt;br /&gt;
&lt;br /&gt;
Results obtained from this work have been presented at different conferences:&lt;br /&gt;
&lt;br /&gt;
* '''Prestigious Applications of Intelligent Systems (PAIS 2008), Patras, Greece''' &lt;br /&gt;
:The 5th Prestigious Applications of Intelligent Systems (PAIS 2008) is a sub-conference of the 18th European Conference on Artificial Intteligence (ECAI 2008) that will be held at the University of Patras, Greece, from July 21st to 25th. &lt;br /&gt;
:The presented paper will be soon available.&lt;br /&gt;
&lt;br /&gt;
* '''International Joint Conference on Neural Networks (IJCNN 2007), Orlando, FL, USA'''&lt;br /&gt;
:'''Lung Cancer Identification by an Electronic Nose based on array of MOS Sensors''', Blatt Rossella, Bonarini Andrea, Calabrò Elisa, Della Torre Matteo, Matteucci Matteo, Pastorino Ugo. Proceedings of the 2007 International Joint Conference on Neural Networks (IJCNN 2007), Orlando, FL, USA: [[Image:IJCNNfinal.pdf|Paper-IJCNN2007]] &lt;br /&gt;
&lt;br /&gt;
:Short presentation of the ''Lung Cancer Identification by an Electronic Nose based on an array of MOS Sensors'' paper: [[Special:LungCancerIdentificationIJCNN2007.pdf|Presentation-IJCNN2007]]&lt;br /&gt;
&lt;br /&gt;
* '''International Workshop on Fuzzy Logic and Applications (WILF 2007), Ruta di Camogli, Genova, Italy'''&lt;br /&gt;
&lt;br /&gt;
: '''Fuzzy k-NN Lung Cancer Identification by an Electronic Nose''', Blatt Rossella, Bonarini Andrea, Calabrò Elisa, Della Torre Matteo, Matteucci Matteo, Pastorino Ugo. Proceedings of the 7th International Workshop on Fuzzy Logic and Applications, WILF 2007, Lecture Notes in Computer Science (LNAI), LNAI 4578, pages 261-268, Springer. Camogli (GE), Italy, July 2007.&lt;br /&gt;
&lt;br /&gt;
=== Description and results of experiments ===&lt;br /&gt;
&lt;br /&gt;
=== Photos and videos ===&lt;br /&gt;
&lt;br /&gt;
=== Link to source code of the software written for the project ===&lt;br /&gt;
&lt;br /&gt;
=== Description and results of experiments ===&lt;br /&gt;
&lt;br /&gt;
=== Useful internet links ===&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Lung_Cancer_Detection_by_an_Electronic_Nose&amp;diff=3530</id>
		<title>Lung Cancer Detection by an Electronic Nose</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Lung_Cancer_Detection_by_an_Electronic_Nose&amp;diff=3530"/>
				<updated>2008-06-16T15:49:34Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: /* Project short description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== '''Part 1: project profile''' ==&lt;br /&gt;
&lt;br /&gt;
=== Project name ===&lt;br /&gt;
&lt;br /&gt;
Lung Cancer Detection by an Electronic Nose&lt;br /&gt;
&lt;br /&gt;
=== Project short description ===&lt;br /&gt;
&lt;br /&gt;
The electronic nose is an instrument able to detect and recognize odors, that is the volatile substances in the atmosphere or emitted by the analyzed substance. This device can react to a gas substance by providing signals that can be analyzed to classify the input. It is composed of a sensor array (6 MOS sensors, in our case) and a pattern classification process based on machine learning techniques. &lt;br /&gt;
In this project, we have been using an electronic nose based on an array of six MOS sensors, to recognize the presence of lung cancer in breaths' subjects, diagnosing the disease with a non invasive and low cost method. &lt;br /&gt;
&lt;br /&gt;
During the first phase of our research, we have evaluated the possibility and accuracy of lung cancer diagnosis by classifying the&lt;br /&gt;
olfactory signal associated to exhalations of subjects. &lt;br /&gt;
&lt;br /&gt;
At the end of the first phase, results have been very satisfactory and promising: we achieved an average accuracy of 92.6%, sensitivity of&lt;br /&gt;
95.3% and specificity of 90.5%. In particular we analyzed the breath of 101 individuals, of which 58 control subjects, and 43 suffer from&lt;br /&gt;
different types of lung cancer (primary and not) at different stages.&lt;br /&gt;
In order to find the components able to discriminate between the two classes ‘healthy’ and ‘sick’ at best, and to reduce the dimensionality&lt;br /&gt;
of the problem, we have extracted the most significant features and projected them into a lower dimensional space using Non Parametric&lt;br /&gt;
Linear Discriminant Analysis. Finally, we have used these features as input to several supervised pattern classification algorithms, based&lt;br /&gt;
on different k-nearest neighbors (k-NN) approaches (classic, modified and Fuzzy k-NN), linear and quadratic discriminant classifiers&lt;br /&gt;
and on a feed-forward artificial neural network (ANN). The observed results have all been validated using cross-validation. &lt;br /&gt;
&lt;br /&gt;
These results pushed us to begin the second phase of the project, still in progress, to investigate the possibility of early lung cancer diagnosis: we are involving a larger number of subjects, partioned in different classes according to the type and stage of the disease. The research demonstrates that the electronic nose is a promising alternative to current lung cancer diagnostic techniques: the obtained predictive errors are lower than those achieved by present diagnostic methods, and the cost of the analysis, both in money, time and resources, is lower. The introduction of this technology will lead to very important social and business effects: its low price and small dimensions allow a large scale distribution, giving the opportunity to perform non invasive, cheap, quick, and massive early diagnosis and screening.&lt;br /&gt;
&lt;br /&gt;
=== Dates ===&lt;br /&gt;
Start date: 2007/01/01&lt;br /&gt;
&lt;br /&gt;
End date: --&lt;br /&gt;
&lt;br /&gt;
=== Website(s) ===&lt;br /&gt;
&lt;br /&gt;
At the moment no website avaible&lt;br /&gt;
&lt;br /&gt;
=== People involved ===&lt;br /&gt;
&lt;br /&gt;
===== Project head(s) =====&lt;br /&gt;
&lt;br /&gt;
A. Bonarini - [[User:AndreaBonarini]]&lt;br /&gt;
&lt;br /&gt;
M. Matteucci - [[User:MatteoMatteucci]]&lt;br /&gt;
&lt;br /&gt;
===== Other Politecnico di Milano people =====&lt;br /&gt;
&lt;br /&gt;
R. Blatt - [[User:RossellaBlatt]]&lt;br /&gt;
&lt;br /&gt;
===== Students currently working on the project =====&lt;br /&gt;
&lt;br /&gt;
Claudio Trameri - [[User:ClaudioTrameri]]&lt;br /&gt;
&lt;br /&gt;
Mauro Verdirosa - [[User:MauroVerdirosa]]&lt;br /&gt;
&lt;br /&gt;
===== Students who worked on the project in the past =====&lt;br /&gt;
&lt;br /&gt;
===== External personnel: =====&lt;br /&gt;
&lt;br /&gt;
Dott. Ugo Pastorino (Istituto dei Tumori - Milano)&lt;br /&gt;
&lt;br /&gt;
Dott. Elisa Calabrò (Istituto dei Tumori - Milano)&lt;br /&gt;
&lt;br /&gt;
Dott. Matteo Della Torre (SACMI - Imola)&lt;br /&gt;
&lt;br /&gt;
=== Laboratory work and risk analysis ===&lt;br /&gt;
&lt;br /&gt;
Laboratory work for this project will be mainly performed at the Istituto Nazionale dei Tumori di Milano, where the acquisistion of subjects' breath, both sick and healthy will be done. &lt;br /&gt;
For this kind of work, there are not potential risks.&lt;br /&gt;
&lt;br /&gt;
== '''Part 2: project description''' ==&lt;br /&gt;
&lt;br /&gt;
=== State of the art ===&lt;br /&gt;
&lt;br /&gt;
=== Preliminary and sketches ===&lt;br /&gt;
&lt;br /&gt;
=== Design notes and guidelines ===&lt;br /&gt;
&lt;br /&gt;
=== Link to project documents and files ===&lt;br /&gt;
&lt;br /&gt;
Results obtained from this work have been presented at different conferences:&lt;br /&gt;
&lt;br /&gt;
* '''Prestigious Applications of Intelligent Systems (PAIS 2008), Patras, Greece''' &lt;br /&gt;
:The 5th Prestigious Applications of Intelligent Systems (PAIS 2008) is a sub-conference of the 18th European Conference on Artificial Intteligence (ECAI 2008) that will be held at the University of Patras, Greece, from July 21st to 25th. &lt;br /&gt;
:The presented paper will be soon available.&lt;br /&gt;
&lt;br /&gt;
* '''International Joint Conference on Neural Networks (IJCNN 2007), Orlando, FL, USA'''&lt;br /&gt;
:'''Lung Cancer Identification by an Electronic Nose based on array of MOS Sensors''', Blatt Rossella, Bonarini Andrea, Calabrò Elisa, Della Torre Matteo, Matteucci Matteo, Pastorino Ugo. Proceedings of the 2007 International Joint Conference on Neural Networks (IJCNN 2007), Orlando, FL, USA: [[Special:IJCNNfinal.pdf|Paper-IJCNN2007]]&lt;br /&gt;
&lt;br /&gt;
:Short presentation of the ''Lung Cancer Identification by an Electronic Nose based on an array of MOS Sensors'' paper: [[Special:LungCancerIdentificationIJCNN2007.pdf|Presentation-IJCNN2007]]&lt;br /&gt;
&lt;br /&gt;
* '''International Workshop on Fuzzy Logic and Applications (WILF 2007), Ruta di Camogli, Genova, Italy'''&lt;br /&gt;
&lt;br /&gt;
: '''Fuzzy k-NN Lung Cancer Identification by an Electronic Nose''', Blatt Rossella, Bonarini Andrea, Calabrò Elisa, Della Torre Matteo, Matteucci Matteo, Pastorino Ugo. Proceedings of the 7th International Workshop on Fuzzy Logic and Applications, WILF 2007, Lecture Notes in Computer Science (LNAI), LNAI 4578, pages 261-268, Springer. Camogli (GE), Italy, July 2007.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Description and results of experiments ===&lt;br /&gt;
&lt;br /&gt;
=== Photos and videos ===&lt;br /&gt;
&lt;br /&gt;
=== Link to source code of the software written for the project ===&lt;br /&gt;
&lt;br /&gt;
=== Description and results of experiments ===&lt;br /&gt;
&lt;br /&gt;
=== Useful internet links ===&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	<entry>
		<id>https://airwiki.deib.polimi.it/index.php?title=Brain-Computer_Interface&amp;diff=3375</id>
		<title>Brain-Computer Interface</title>
		<link rel="alternate" type="text/html" href="https://airwiki.deib.polimi.it/index.php?title=Brain-Computer_Interface&amp;diff=3375"/>
				<updated>2008-06-09T14:45:00Z</updated>
		
		<summary type="html">&lt;p&gt;RossellaBlatt: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;A Brain-Computer Interface (BCI) is an experimental communication system that allows an individual to control a device by using signals from the brain (e.g., electroencephalography -- EEG).&lt;br /&gt;
&lt;br /&gt;
You can find a longer description on the [http://airlab.elet.polimi.it/index.php/airlab/research_areas/biosignal_analysis?z=2299 AIRLab page].&lt;br /&gt;
&lt;br /&gt;
The BCI project is in the [[BioSignal_Analysis]] area.&lt;br /&gt;
&lt;br /&gt;
== Ongoing projects ==&lt;br /&gt;
&lt;br /&gt;
* GA for feature extraction&lt;br /&gt;
* [[Online P300 and ErrP recognition with BCI2000]]  (Master thesis, Andrea Sgarlata).&lt;br /&gt;
* [[BCI based on Motor Imagery]]&lt;br /&gt;
** [[Sensorymotor Rhythms Detection and Control]] (Master thesis, Tiziano D'Albis; Bachelor thesis, Fabio Beltramini)&lt;br /&gt;
** [[Feature Selection and Extraction for a BCI based on motor imagery]] (Master thesis, Francesco Amenta)&lt;br /&gt;
* [[Graphical user interface for an autonomous wheelchair]] (Roberto Massimini)&lt;br /&gt;
&lt;br /&gt;
== Finished projects ==&lt;br /&gt;
&lt;br /&gt;
* Tesi di Carlo Gimondi e Luisella Messana &lt;br /&gt;
* Tesi di Gianmaria Visconti&lt;br /&gt;
* Tesi di Francesco Cartella&lt;/div&gt;</summary>
		<author><name>RossellaBlatt</name></author>	</entry>

	</feed>