Facial gestures convey many kinds of messages in human communication: they serve as visible speech signals and indicate whether our current focus of attention is important, funny, or unpleasant to us. They are the direct, naturally preeminent means by which humans communicate their emotions (Russell and Fernandez-Dols, 1997). Automatic analyzers of subtle facial changes therefore seem to have a natural place in various vision systems, including automated tools for psychological research, lip reading, bimodal speech analysis, affective computing, face and visual-speech synthesis, and perceptual user interfaces.