Affective systems have been used to categorize emotional expressions for the past two decades. Most of these systems are based on Paul Ekman's categorization scheme, known as the six universal emotions: Disgust, Fear, Happiness, Surprise, Sadness, and Anger. Although Ekman's studies show that most people infer these emotional categories from facial expressions, the way we express ourselves is more natural and, thus, most of the time hard to categorize. Humans usually express themselves differently, sometimes combining characteristics of two or more of the so-called universal emotions. This variability is partly captured by the dimensional emotion representation, usually described in terms of arousal and valence.
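
To make the dimensional view concrete, below is a minimal sketch in Python. The coordinates assigned to each universal emotion and the blend helper are illustrative assumptions, not values from any annotation standard; the point is only that an expression mixing several categories can be represented as a single point in the valence/arousal plane.

```python
# Minimal sketch of the dimensional (arousal/valence) emotion representation.
# The coordinates below are illustrative assumptions, not values from any
# annotation standard: valence in [-1, 1] (negative to positive affect),
# arousal in [-1, 1] (calm to excited).

CATEGORY_COORDS = {
    "happiness": (0.8, 0.5),   # (valence, arousal)
    "surprise":  (0.3, 0.8),
    "anger":     (-0.7, 0.7),
    "fear":      (-0.6, 0.6),
    "disgust":   (-0.7, 0.3),
    "sadness":   (-0.7, -0.4),
}

def blend(weights):
    """Represent a mixed expression as a weighted mean point in the
    valence/arousal plane, something a single categorical label cannot do."""
    total = sum(weights.values())
    valence = sum(w * CATEGORY_COORDS[c][0] for c, w in weights.items()) / total
    arousal = sum(w * CATEGORY_COORDS[c][1] for c, w in weights.items()) / total
    return valence, arousal

# An expression that is mostly surprise with a trace of fear maps to a
# point between the two categories:
print(blend({"surprise": 0.7, "fear": 0.3}))  # approximately (0.03, 0.74)
```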

Dealing with a restricted set of emotions, or a simple instantaneous emotion description, is a serious constraint for most applications focused on any kind of human interaction. Humans can adapt their internal emotion representation to a newly perceived emotional expression on the fly and use it to better understand another person's emotional behavior. This mechanism is described as a developmental learning process: after observing or participating in different interactions, humans learn how to describe complex emotional behavior such as sarcasm, trust, and empathy.

Frames extracted from videos of the OMG-Emotion dataset¹.

Recent research trends in artificial intelligence, and even in cognitive systems, have approached computational models of emotion as a human-like perception and categorization task. However, most research in the area is still based on instantaneous expression categorization, where the task is to describe a single emotional expression using different modalities. This diverges from the developmental aspect of emotional behavior perception and learning.

In recent years, many corpora for what is known as emotion recognition in the wild have been released. All of these datasets, although very challenging, focus on instantaneous emotion categorization: they assign a specific label to a short-term (usually a couple of seconds) emotional expression. There are corpora with annotated interactions, such as IEMOCAP, SEMAINE, and EmoReact; however, they are limited to restricted, narrow-context scenarios, which does not allow for the development of more naturalistic emotion description models.

Researchers have previously studied long-term emotional behavior processing and learning, but most faced the lack of a challenging corpus with long-term emotional relations annotated using a rigorous methodology. With such a corpus available, they could evaluate their models and reproduce or compare their solutions' behavior against the performance of others. This challenge therefore focuses on long-term emotional behavior categorization for the community working on contemporary cognitive-level affective computing models. The One-Minute Gradual-Emotional Behavior dataset (OMG-Emotion dataset) provided during the challenge is a robust, complete, and challenging corpus that can serve as the basis for reaching the next level of context processing in affective computing.
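
As a concrete starting point, the sketch below shows one way utterance-level annotations of such a corpus could be grouped per video, so that a model sees each one-minute clip as an ordered emotional sequence rather than as isolated expressions. The file layout and column names ("video", "utterance", "arousal", "valence") are assumptions for illustration only, not the official OMG-Emotion annotation format.

```python
import csv

def utterances_by_video(annotation_csv):
    """Group utterance-level annotations by source video, so a model can
    process each one-minute video as an ordered emotional sequence rather
    than as isolated, instantaneous expressions.

    The column names ("video", "utterance", "arousal", "valence") are
    assumptions for illustration; consult the official OMG-Emotion
    release for the actual annotation format.
    """
    videos = {}
    with open(annotation_csv, newline="") as f:
        for row in csv.DictReader(f):
            videos.setdefault(row["video"], []).append(
                (row["utterance"], float(row["arousal"]), float(row["valence"]))
            )
    return videos

# Example: sequences = utterances_by_video("omg_train_annotations.csv")
# Each value is a list of (utterance, arousal, valence) tuples, assuming
# rows appear in chronological order within each video.
```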

OMG-Emotion Recognition Challenge important dates:
Publishing of training and validation data with annotations: March 14, 2018.
Publishing of the test data, and opening of the online submission: April 27, 2018.
Closing of the submission portal (Code and Results): April 30, 2018.
Closing of the submission portal (Paper): May 03, 2018.
Announcement of the winners: May 04, 2018.

Special session:
The OMG-Emotion Challenge will be held together with a special session on "Neural Models for Human Behavior Recognition". Participating teams should send us an abstract describing their solution, which, if accepted, will be presented as an oral presentation at the WCCI/IJCNN 2018 conference.

Journal special issue:
Participating teams will be invited to submit extended versions of their abstracts to a special issue to be arranged. Submissions will be peer-reviewed in accordance with the journal's practices.


1. Published as Creative Commons videos on YouTube.