Areas of Application

Facial Action Coding System: A Tool for Evaluating and Understanding Facial Expressions & Emotion

The Facial Action Coding System (FACS) has provided the scientific community with a systematic way of coding muscle movements in the face and then impartially classifying specific muscle combinations into emotion categories. The coding system, developed by Dr. Paul Ekman and Dr. Wallace Friesen, is the most widely used tool for classifying facial behavior and, in turn, interpreting people’s emotional states at any given moment in any situation.

So much emphasis has been placed on using FACS in research because of the link between emotions and how they are ‘universally’ expressed on the face. FACS has played an integral part in breaking down and identifying the various muscles and muscle combinations involved in each of the universal facial expressions. As such, FACS has also become central to research into deception detection, interpersonal communication, consumer engagement, psychological states such as pain, computer animation, and more.

Why does FACS work?

1.   FACS can identify the slightest movements in facial musculature, uncovering even the subtlest emotional reactions in subjects.

     Besides macro facial expressions, which are obvious expressions meant to openly communicate one’s feelings, and micro facial expressions, which are brief, flicker-like flashes that betray hidden emotions, there are also subtle expressions of emotion. The facial musculature changes associated with subtle expressions occur when a person is just starting to feel an emotion or when the emotion is of low intensity. In such a low-intensity state, the expression is not only weak but often appears on only part of the face. For example, instead of the nose wrinkle seen in full disgust, a slight raise of the upper lip on just one side of the face can equally divulge a subject’s true emotion. A FACS-trained eye can readily pick up these subtle yet reliable and very telling clues to emotion.
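
     As a rough illustration of how such a rule might be written down, the sketch below flags a low-intensity, one-sided upper lip raise (AU10) as a subtle disgust cue. The A–E intensity letters follow FACS scoring conventions, but the function, thresholds, and field names are hypothetical and not part of FACS itself.

```python
# Illustrative sketch only: flag a subtle disgust cue from a single low-intensity,
# one-sided action unit. The A-E intensity letters follow FACS convention
# (A = trace ... E = maximum); the rule, function, and field names are hypothetical.
SUBTLE_INTENSITIES = {"A", "B"}          # trace or slight intensity

def is_subtle_disgust(au: int, intensity: str, side: str | None) -> bool:
    """AU10 (upper lip raiser) at trace/slight intensity on only one side of the face."""
    return au == 10 and intensity in SUBTLE_INTENSITIES and side in ("left", "right")

print(is_subtle_disgust(10, "A", "right"))   # True: subtle, one-sided upper-lip raise
print(is_subtle_disgust(9, "D", None))       # False: strong, full-face nose wrinkle (AU9)
```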

 

2.   FACS can identify the exact emotion felt, as well as the time frame of an emotional reaction, including its start, peak, end, and duration.

     Each of the universal emotions is composed of a series of action units (corresponding to specific muscle movements in the face). The particular combination of action units identifies what emotion is felt at the time the expression is shown. FACS can reliably differentiate between anger, contempt, fear, surprise, joy, disgust, and distress, as well as combinations of these emotions. Additionally, by recording time markers for emotional change and for when an emotion was felt most strongly, one can directly gauge and align the emotional response to the external stimulus. Subjects engaged in a task react to particular stimuli, and FACS provides a way to identify the “how” and the “when” of a subject’s emotive state.
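
     To make the idea concrete, here is a minimal sketch of how coded action units and their start/peak/end times might be represented and interpreted. The AU combinations shown (e.g., AU6 + AU12 for joy) are simplified versions of commonly cited prototypes; the data structures, function names, and classification rule are hypothetical and far cruder than real FACS-based interpretation.

```python
from dataclasses import dataclass

# Hypothetical record of one coded action-unit event (field names are illustrative,
# not part of the official FACS manual).
@dataclass
class AUEvent:
    au: int          # action unit number, e.g. 12 = lip corner puller
    onset: float     # seconds: movement starts
    apex: float      # seconds: movement peaks (the "peak" above)
    offset: float    # seconds: face returns to neutral

# Simplified versions of commonly cited AU prototypes for a few universal emotions.
PROTOTYPES = {
    "joy":      {6, 12},
    "surprise": {1, 2, 5, 26},
    "disgust":  {9, 10},
    "sadness":  {1, 4, 15},
}

def classify(events: list[AUEvent]) -> str | None:
    """Return the first prototype whose required AUs are all present in the coded events."""
    active = {e.au for e in events}
    for emotion, required in PROTOTYPES.items():
        if required <= active:
            return emotion
    return None

def duration(events: list[AUEvent]) -> float:
    """Overall span of the reaction: earliest onset to latest offset."""
    return max(e.offset for e in events) - min(e.onset for e in events)

# Example: cheek raiser (AU6) + lip corner puller (AU12) coded between 2.1 s and 3.4 s
coded = [AUEvent(6, 2.1, 2.6, 3.2), AUEvent(12, 2.2, 2.7, 3.4)]
print(classify(coded), round(duration(coded), 2))   # joy 1.3
```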

 

3.   FACS can identify all facially expressed emotional reactions across a given sequence or stimulus duration.

     Emotions are brief and fleeting. People can feel happy one moment, scared the next, and angry the moment after. We often see emotional blends, such as surprise with happiness or anger with disgust, occurring at the same time. These are common and natural emotional occurrences. As a coding and emotion-interpretation tool, FACS can distinguish each emotion and identify every observable emotional reaction a subject displays in response to a given stimulus. By using time markers, each emotional reaction can be tied directly to the stimulus present at the time the emotion is felt, providing an accurate account not only of what initiated the emotional response but also of the duration of, and changes in, emotional reactions across stimuli. This process creates a ‘behavioral map’ of sorts, charting the changes in positive and negative emotions within a given time frame.
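
     As a sketch of the ‘behavioral map’ idea, the snippet below aligns time-stamped emotion codes with stimulus segments and tags each reaction with a valence and duration. The segment labels, valence values, and data are made up for illustration; only the general notion of aligning coded emotions to a stimulus timeline comes from the description above.

```python
# Illustrative sketch of a "behavioral map": align time-stamped emotion codes
# to stimulus segments. Segment names, valences, and all data are hypothetical.
VALENCE = {"joy": +1, "surprise": 0, "anger": -1, "disgust": -1, "fear": -1, "sadness": -1}

stimulus_segments = [        # (label, start_s, end_s) of what the subject saw
    ("product_intro", 0.0, 10.0),
    ("price_reveal", 10.0, 20.0),
]

emotion_events = [           # (emotion, onset_s, offset_s) from FACS coding
    ("joy", 3.0, 5.5),
    ("surprise", 10.5, 11.2),
    ("disgust", 11.0, 14.0),
]

def behavioral_map(segments, events):
    """For each stimulus segment, list the emotions whose onset falls inside it."""
    mapping = {label: [] for label, _, _ in segments}
    for emotion, onset, offset in events:
        for label, start, end in segments:
            if start <= onset < end:
                mapping[label].append((emotion, VALENCE[emotion], round(offset - onset, 2)))
    return mapping

for label, reactions in behavioral_map(stimulus_segments, emotion_events).items():
    print(label, reactions)
# product_intro [('joy', 1, 2.5)]
# price_reveal [('surprise', 0, 0.7), ('disgust', -1, 3.0)]
```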

 

4.   FACS can capture emotional reactions during both speaking and non-speaking occurrences.

     The face emotes not only while a person is speaking but also while they are listening, observing, or silently engaging. Recording the visible, involuntary emotional reactions that occur during non-speaking segments provides a host of valuable information about a person’s state of mind. FACS can capture intimate information about a subject’s emotional reactions while they listen, observe, or engage in a task.

 

5.   FACS is a non-invasive tool, allowing subjects to remain unaware of, and unaffected by, the behavioral analysis.

     Except for the use of a camera to record a subject’s interview, using FACS to identify expressions of emotion is a completely non-invasive process. As long as a single camera is placed at an appropriate distance with a close-up view of the subject’s face, there is no need for multiple cameras or equipment positioned close to the subject. FACS allows for the observation of naturally occurring emotional responses, as opposed to self-reported interpretations of what a subject thinks they feel. Previous research using cameras to record subjects’ emotional responses for FACS coding has successfully collected genuine responses and shown that subjects forget about the cameras once the task has begun.

 

Emotions play a major role in our everyday experiences, and facial expressions are the primary nonverbal mechanism through which people communicate how they feel. Using verbal and nonverbal cues together to interpret human behavior has proven to be a very powerful method for decoding what people are truly thinking and feeling. Many professionals have already learned that relying simply on what people say often does not yield the whole truth and nothing but the truth. While people do have voluntary control over their facial expressions, there is a strong evolutionary component that betrays the true feelings of even those most practiced at disguising them. FACS is a tool that can dissect facial behavior and, in many ways, expose the face, making it a rich source of information.