Interpretation of facial expressions and movements of the head

Coordinator: AndreaBonarini (andrea.bonarini@polimi.it)
Tutor: MatteoMatteucci (matteo.matteucci@polimi.it)
Collaborator: SimoneTognetti (tognetti@elet.polimi.it)
Students: CristianMandelli (cristianmandelli@gmail.com)
Research Area: Affective Computing
Research Topic: Emotion from Interaction
Start: 2007/11/09
End: 2008/11/09
Status: Closed
Level: Bs
Type: Thesis

Project description

The objective of this project was the interpretation of facial expressions and of the movements of the head and upper body. We reached this goal by developing a system that captures video and tracks the movements of the head, eyes and eyebrows. To do so, we used face detection and blob analysis algorithms, in this order:

  • Face detection: algorithm for detecting a face in each video frame.
  • Blob analysis: algorithm for eyes and eyebrow detection.

The system performs a three-level analysis (a rough sketch of the pipeline follows the list):

1. First level: frame analysis, to extract only the face area of the image.

2. Second level: only once face detection has succeeded, eyes and eyebrows are detected and extracted.

3. Third level: data processing and movement analysis.
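
As a rough illustration of how the three levels fit together, the C++ sketch below chains them in a single capture loop. It is not taken from the thesis code: the function names (detectFaces, detectEyesAndEyebrows, analyseMovement) are placeholders and their bodies are empty stubs.

 #include <opencv2/opencv.hpp>
 #include <vector>
 
 // Placeholder stand-ins for the three levels: the real detectors and the
 // movement analysis from the thesis are replaced here by empty stubs.
 std::vector<cv::Rect> detectFaces(const cv::Mat &frame) { (void)frame; return {}; }
 std::vector<cv::Rect> detectEyesAndEyebrows(const cv::Mat &faceRoi) { (void)faceRoi; return {}; }
 void analyseMovement(const std::vector<cv::Rect> &features) { (void)features; }
 
 int main() {
     cv::VideoCapture capture(0);              // default camera
     if (!capture.isOpened()) return 1;
 
     cv::Mat frame;
     while (capture.read(frame)) {
         // Level 1: frame analysis, keep only the face area.
         for (const cv::Rect &face : detectFaces(frame)) {
             cv::Mat faceRoi = frame(face);
             // Level 2: runs only when face detection has succeeded.
             std::vector<cv::Rect> features = detectEyesAndEyebrows(faceRoi);
             // Level 3: data processing and movement analysis.
             analyseMovement(features);
         }
         if (cv::waitKey(1) == 27) break;      // ESC to quit
     }
     return 0;
 }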

We used the OpenCV library to develop the first and second levels of analysis. The Open Computer Vision Library provides more than 500 algorithms, along with documentation and sample code, for real-time computer vision.

A recognition process can be much more efficient if it is based on the detection of features that encode some information about the class to be detected. This is the case of Haar-like features, which encode the existence of oriented contrasts between regions of the image. A set of these features can be used to encode the contrasts exhibited by a human face and their spatial relationships. Haar-like features are so called because they are computed similarly to the coefficients in Haar wavelet transforms.
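
As an illustration (not taken from the thesis code), the value of a two-rectangle Haar-like feature is simply the difference between the pixel sums of two adjacent rectangles, and each sum can be computed in constant time from an integral image. In the sketch below the patch content and the rectangle positions are arbitrary.

 #include <opencv2/opencv.hpp>
 #include <iostream>
 
 // Sum of the pixel values inside rect r, using the integral image ii
 // (ii has one extra row and column, as produced by cv::integral).
 static int rectSum(const cv::Mat &ii, const cv::Rect &r) {
     return ii.at<int>(r.y, r.x)
          + ii.at<int>(r.y + r.height, r.x + r.width)
          - ii.at<int>(r.y, r.x + r.width)
          - ii.at<int>(r.y + r.height, r.x);
 }
 
 int main() {
     // Toy 20x20 grayscale patch; in practice this would be a window of the frame.
     cv::Mat patch(20, 20, CV_8UC1);
     cv::randu(patch, 0, 256);
 
     cv::Mat ii;
     cv::integral(patch, ii);  // ii is 21x21, CV_32S
 
     // Two-rectangle (edge) feature: contrast between a left and a right region.
     cv::Rect left(4, 4, 6, 12), right(10, 4, 6, 12);
     int feature = rectSum(ii, left) - rectSum(ii, right);
 
     std::cout << "Haar-like edge feature value: " << feature << std::endl;
     return 0;
 }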

The object detector of OpenCV was initially proposed by Paul Viola and improved by Rainer Lienhart. First, a classifier (namely, a cascade of boosted classifiers working with Haar-like features) is trained with a few hundred sample views of a particular object (e.g., a face or a car), called positive examples, scaled to a common size (say, 20x20), and with negative examples, i.e., arbitrary images of the same size.
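
Training such a cascade is done offline; at run time the resulting classifier is slid over the image at multiple scales. The sketch below applies a pre-trained frontal-face cascade shipped with OpenCV through the current C++ API; the cascade file name and the image path are assumptions that depend on the local installation, and the original thesis code may have used the older C interface instead.

 #include <opencv2/opencv.hpp>
 #include <vector>
 
 int main() {
     // Pre-trained frontal-face cascade distributed with OpenCV (path may differ).
     cv::CascadeClassifier faceCascade;
     if (!faceCascade.load("haarcascade_frontalface_alt.xml")) return 1;
 
     cv::Mat gray = cv::imread("frame.png", cv::IMREAD_GRAYSCALE);  // placeholder frame
     if (gray.empty()) return 1;
     cv::equalizeHist(gray, gray);  // improve contrast before detection
 
     std::vector<cv::Rect> faces;
     // Scan at multiple scales; a detection is kept only if at least 3
     // neighbouring windows agree, which filters out isolated false positives.
     faceCascade.detectMultiScale(gray, faces, 1.1, 3, 0, cv::Size(40, 40));
 
     for (const cv::Rect &face : faces)
         cv::rectangle(gray, face, cv::Scalar(255), 2);  // mark detected faces
     cv::imwrite("detected.png", gray);
     return 0;
 }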

At the third level we used a blob analysis algorithm developed by Professor M. Matteucci. In our case, this algorithm detects dark regions in the face image extracted at level 2.
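
The blob analysis library itself is not reproduced here. As a generic stand-in, the sketch below obtains comparable dark-region blobs with plain OpenCV calls (thresholding followed by contour extraction); the intensity threshold and the minimum blob area are arbitrary assumptions.

 #include <opencv2/opencv.hpp>
 #include <vector>
 
 // Find dark blobs (candidate eyes/eyebrows) inside a grayscale face region.
 // Generic sketch, not the blob analysis algorithm used in the thesis.
 std::vector<cv::Rect> darkBlobs(const cv::Mat &faceGray,
                                 double maxIntensity = 60, double minArea = 30) {
     cv::Mat mask;
     // Mark pixels darker than the threshold (eyes and eyebrows are dark regions).
     cv::threshold(faceGray, mask, maxIntensity, 255, cv::THRESH_BINARY_INV);
 
     std::vector<std::vector<cv::Point>> contours;
     cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
 
     std::vector<cv::Rect> blobs;
     for (const auto &c : contours)
         if (cv::contourArea(c) >= minArea)          // discard small noise blobs
             blobs.push_back(cv::boundingRect(c));   // keep the blob bounding box
     return blobs;
 }
 
 int main() {
     cv::Mat face = cv::imread("face.png", cv::IMREAD_GRAYSCALE);  // placeholder input
     if (face.empty()) return 1;
     for (const cv::Rect &b : darkBlobs(face))
         cv::rectangle(face, b, cv::Scalar(255), 1);  // mark candidate regions
     cv::imwrite("blobs.png", face);
     return 0;
 }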

Thesis

Analisi di immagini per l'identificazione del volto e dei suoi movimenti (Image analysis for the identification of the face and its movements): Media:CristianMandelli-Thesis.pdf

Face detector and blob detector code: Media:CristianMandelli-Code-FaceAnalysis.zip

Laboratory work and risk analysis

Laboratory work for this project was mainly performed at AIRLab/DEI and at home. Risks are related to the use of a PC and a camera.