Interpretation of facial expressions and movements of the head
Project profile
Thesis Title
Analisi di immagini per l'identificazione del volto e dei suoi movimenti (Image analysis for the identification of the face and its movements)
Project short description
We want to develop a robust face analysis algorithm that can be integrated with an emotion recognition algorithm during driving.
Such a system could recognize a high stress level in the driver and adapt the car's behaviour accordingly. We reach this goal by developing a system that captures from video the movements of the head, eyes and eyebrows, using face detection and blob analysis algorithms, in this order:
- Face detection: an algorithm that locates the face in each video frame.
- Blob analysis: an algorithm for eye and eyebrow detection.
The system works on three levels of analysis:
1. 1st level: frame analysis, in order to extract only the face area of the image.
2. 2nd level: only once face detection has succeeded, eye and eyebrow detection and extraction are applied.
3. 3rd level: data processing and movement analysis take place.
We used the OpenCV library to develop the first and second levels of analysis. OpenCV (Open Source Computer Vision Library) provides more than 500 algorithms, documentation and sample code for real-time computer vision.
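As a rough illustration, the first two levels can be prototyped with the Haar cascade detectors bundled with OpenCV. The sketch below is only an approximation of the pipeline: the specific cascade models, the camera source and the detection parameters are assumptions for this example, not the ones used in the thesis.

```python
# Minimal sketch of levels 1 and 2, assuming the pretrained Haar cascades
# shipped with OpenCV (the thesis may have used different models).
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

cap = cv2.VideoCapture(0)  # camera index 0: the in-car camera setup is an assumption
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Level 1: extract only the face area from the frame.
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        face_roi = gray[y:y + h, x:x + w]

        # Level 2: run eye detection only inside the detected face region.
        eyes = eye_cascade.detectMultiScale(face_roi)
        for (ex, ey, ew, eh) in eyes:
            cv2.rectangle(frame, (x + ex, y + ey),
                          (x + ex + ew, y + ey + eh), (0, 255, 0), 2)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 0), 2)

    cv2.imshow("face analysis", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```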
A recognition process can be much more efficient if it is based on the detection of features that encode some information about the class to be detected. This is the case of Haar-like features, which encode the existence of oriented contrasts between regions of the image. A set of these features can be used to encode the contrasts exhibited by a human face and their spatial relationships. Haar-like features are so called because they are computed similarly to the coefficients in Haar wavelet transforms.
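For intuition, a two-rectangle Haar-like feature can be evaluated in constant time from an integral image, as in the minimal sketch below; the window size and pixel values are made up for illustration only.

```python
# Conceptual sketch (not code from the thesis) of a two-rectangle Haar-like
# feature: the difference between the pixel sums of two adjacent regions,
# i.e. an oriented contrast.
import numpy as np

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the w-by-h rectangle with top-left corner (x, y),
    computed in constant time from the padded integral image ii."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_horizontal_edge(img, x, y, w, h):
    """Two-rectangle feature: dark upper half vs. bright lower half (or vice
    versa), as around the eye/eyebrow region of a face."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    top = rect_sum(ii, x, y, w, h // 2)
    bottom = rect_sum(ii, x, y + h // 2, w, h // 2)
    return top - bottom

# Example on a synthetic 20x20 window (dark top half, bright bottom half).
window = np.vstack([np.full((10, 20), 40), np.full((10, 20), 200)]).astype(np.uint8)
print(haar_horizontal_edge(window, 0, 0, 20, 20))  # strongly negative value
```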
The OpenCV object detector was initially proposed by Paul Viola and improved by Rainer Lienhart. First, a classifier (namely a cascade of boosted classifiers working with Haar-like features) is trained with a few hundred sample views of a particular object (e.g., a face or a car), called positive examples, that are scaled to the same size (say, 20x20), and with negative examples, i.e. arbitrary images of the same size.
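The toy sketch below illustrates only the cascade idea, not OpenCV's internal implementation: each trained stage acts as a boosted classifier, and a candidate window is rejected as soon as one stage's score falls below its threshold, so most non-face windows are discarded after very few cheap tests. The stage functions and thresholds here are invented for demonstration.

```python
# Toy illustration of a cascade of classifiers with early rejection.
from typing import Callable, List, Tuple
import numpy as np

Stage = Tuple[Callable[[np.ndarray], float], float]  # (stage score function, threshold)

def cascade_accepts(window: np.ndarray, stages: List[Stage]) -> bool:
    for score_fn, threshold in stages:
        if score_fn(window) < threshold:
            return False   # early rejection: later, more expensive stages are skipped
    return True            # survived every stage: candidate face window

# Hypothetical stages built from simple contrast statistics, for demonstration only.
stages = [
    (lambda w: float(w[:10].mean() - w[10:].mean()), -60.0),   # cheap contrast test
    (lambda w: float(w.std()), 20.0),                          # more selective test
]
window = np.random.randint(0, 255, (20, 20)).astype(np.float32)
print(cascade_accepts(window, stages))
```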
For the third level we used a blob analysis algorithm developed by professor M. Matteucci. In our case this algorithm detects dark regions in the face image extracted at the second level.
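The sketch below is a generic blob-analysis stand-in, not professor Matteucci's algorithm: dark regions are isolated by thresholding the face image and labelled as connected components, whose centroids could then be tracked across frames for the movement analysis of the third level. The threshold and minimum-area values are illustrative assumptions.

```python
# Generic blob-analysis sketch for dark regions (e.g. pupils, eyebrows).
import cv2
import numpy as np

def dark_blobs(face_gray: np.ndarray, max_intensity: int = 60, min_area: int = 15):
    """Return (centroid, area) for each sufficiently large dark blob.
    max_intensity and min_area are illustrative values, not thesis parameters."""
    # Pixels darker than max_intensity become foreground (255).
    _, mask = cv2.threshold(face_gray, max_intensity, 255, cv2.THRESH_BINARY_INV)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    blobs = []
    for i in range(1, n):  # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if area >= min_area:
            blobs.append((tuple(centroids[i]), int(area)))
    return blobs

# Usage idea: compare blob centroids between consecutive face crops
# to estimate eye and eyebrow movement.
# blobs = dark_blobs(face_roi)
```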