2nd International Audio/Visual Emotion Challenge and Workshop (AVEC 2012)

in conjunction with ACM ICMI 2012, October 22, Santa Monica, California, USA 

Register and download data and features: 



The Audio/Visual Emotion Challenge and Workshop (AVEC 2012) will be the second competition event aimed at comparing multimedia processing and machine learning methods for automatic audio, visual and audiovisual emotion analysis, with all participants competing under strictly the same conditions. The goal of the Challenge is to provide a common benchmark test set for multimodal information processing, and to bring together the audio and video emotion recognition communities in order to compare the relative merits of the two approaches under well-defined and strictly comparable conditions, and to establish to what extent fusion of the approaches is possible and beneficial. A second motivation is the need to advance emotion recognition systems so that they can deal with naturalistic behaviour in large volumes of un-segmented, non-prototypical and non-preselected data, as this is exactly the type of data that both multimedia retrieval and human-machine/human-robot communication interfaces have to face in the real world.

We are calling for teams to participate in emotion recognition from acoustic audio analysis, linguistic audio analysis, video analysis, or any combination of these. The SEMAINE database of naturalistic video and audio of human-agent interactions, along with labels for four affect dimensions, will be used as the benchmarking database. Emotion has to be recognised as continuous-time, continuous-valued dimensional affect along the dimensions arousal, expectation, power and valence. Two Sub-Challenges are addressed: the Word-Level Sub-Challenge requires participants to predict the level of affect at the word level, and only while the user is speaking; the Fully Continuous Sub-Challenge involves fully continuous affect recognition, where the level of affect has to be predicted for every moment of the recording.
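For concreteness, a minimal sketch of what a Fully Continuous entry might look like follows: one regressor is fitted per affect dimension on per-frame features, and a frame-wise prediction is produced for the whole recording. The feature layout, the choice of a ridge regressor and the random stand-in data are illustrative assumptions, not the official Challenge baseline or data format.

# Minimal sketch of a Fully Continuous entry, assuming per-frame
# feature matrices and per-frame labels for the four affect dimensions.
# Feature dimensionality, regressor choice and the random stand-in data
# are illustrative assumptions, not the official AVEC 2012 format.
import numpy as np
from sklearn.linear_model import Ridge

DIMENSIONS = ["arousal", "expectation", "power", "valence"]

def train_and_predict(X_train, y_train, X_test):
    """Fit one ridge regressor per affect dimension; return frame-wise predictions."""
    predictions = {}
    for i, dim in enumerate(DIMENSIONS):
        model = Ridge(alpha=1.0)
        model.fit(X_train, y_train[:, i])          # one target column per dimension
        predictions[dim] = model.predict(X_test)   # one value per test frame
    return predictions

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_train = rng.normal(size=(500, 40))  # stand-in for per-frame audio/visual features
    y_train = rng.normal(size=(500, 4))   # stand-in labels: arousal, expectation, power, valence
    X_test = rng.normal(size=(100, 40))
    preds = train_and_predict(X_train, y_train, X_test)
    for dim in DIMENSIONS:
        print(dim, preds[dim][:3])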

Besides participation in the Challenge, we are calling for papers addressing the overall topics of this workshop, in particular work that addresses the differences between audio and video processing of emotive data, and the issues concerning combined audio-visual emotion recognition.

Topics include, but are not limited to:

Audio/Visual Emotion Recognition:
. Audio-based Emotion Recognition
. Linguistics-based Emotion Recognition
. Video-based Emotion Recognition
. Social Signals in Emotion Recognition
. Multi-task Learning of Multiple Dimensions
. Novel Fusion Techniques (e.g., by Prediction)
. Cross-corpus Feature Relevance 
. Agglomeration of Learning Data 
. Semi- and Unsupervised Learning 
. Synthesized Training Material 
. Context in Audio/Visual Emotion Recognition 
. Multiple Rater Ambiguity

Applications:
. Multimedia Coding and Retrieval
. Usability of Audio/Visual Emotion Recognition 
. Real-time Issues

Important Dates

Paper submission
July 31, 2012

Notification of acceptance
August 14, 2012

Camera ready paper and final challenge result submission 
August 18, 2012

Workshop
October 22, 2012


Organisers

Björn Schuller (Tech. Univ. Munich, Germany)
Michel Valstar (University of Nottingham, UK)
Roddy Cowie (Queen's University Belfast, UK) 
Maja Pantic (Imperial College London, UK)

Program Committee

Elisabeth André, Universität Augsburg, Germany
Anton Batliner, Universität Erlangen-Nuremberg, Germany
Felix Burkhardt, Deutsche Telekom, Germany
Rama Chellappa, University of Maryland, USA
Fang Chen, NICTA, Australia
Mohamed Chetouani, Institut des Systèmes Intelligents et de Robotique (ISIR), France
Laurence Devillers, Laboratoire d'Informatique pour la Mécanique et les Sciences de l'Ingénieur (LIMSI), France
Julien Epps, University of New South Wales, Australia
Anna Esposito, International Institute for Advanced Scientific Studies, Italy
Raul Fernandez, IBM, USA
Roland Göcke, Australian National University, Australia
Hatice Gunes, Queen Mary University London, UK
Julia Hirschberg, Columbia University, USA
Aleix Martinez, Ohio State University, USA
Marc Méhu, University of Geneva, Switzerland
Marcello Mortillaro, University of Geneva, Switzerland
Matti Pietikainen, University of Oulu, Finland
Ioannis Pitas, University of Thessaloniki, Greece
Peter Robinson, University of Cambridge, UK
Stefan Steidl, Universität Erlangen-Nuremberg, Germany
Jianhua Tao, Chinese Academy of Sciences, China
Fernando de la Torre, Carnegie Mellon University, USA
Mohan Trivedi, University of California San Diego, USA
Matthew Turk, University of California Santa Barbara, USA
Alessandro Vinciarelli, University of Glasgow, UK
Stefanos Zafeiriou, Imperial College London, UK

Please regularly visit our website for more information.
