ISCA - International Speech Communication Association



ISCApad #163

Wednesday, January 11, 2012 by Chris Wellekens


Multimodal Corpora: How should multimodal corpora deal with the situation?

1st Call for Papers
22 May 2012, Istanbul, Turkey

http://www.multimodal-corpora.org/

Currently, the creation of a multimodal corpus involves the recording, annotation and analysis of a selection of many possible communication modalities such as speech, hand gesture, facial expression, and body posture. At the same time, an increasing number of research areas are moving from focused single-modality research to full-fledged multimodality research. Multimodal corpora are becoming a core research asset, and they provide an opportunity for interdisciplinary exchange of ideas, concepts and data. The growing interest in multimodal communication and multimodal corpora is evidenced by European Networks of Excellence and integrated projects such as HUMAINE, SIMILAR, CHIL, AMI, CALLAS and SSPNet; by the success of recent conferences and workshops dedicated to multimodal communication (ICMI-MLMI, IVA, Gesture, PIT, Nordic Symposium on Multimodal Communication, Embodied Language Processing); and by the creation of the Journal of Multimodal User Interfaces. All of these also testify to the general need for data on multimodal behaviours.
In 2012, the 8th Workshop on Multimodal Corpora will again be co-located with LREC. This year, LREC has selected Speech and Multimodal Resources as its special topic, which underlines the significance of the workshop's general scope. The fact that the main conference's special topic largely covers the workshop's broad scope gives us a unique opportunity to step outside the usual boundaries and look further into the future.
The workshop follows similar events held at LREC 2000, 2002, 2004, 2006, 2008 and 2010, and at ICMI 2011. All workshops are documented at www.multimodal-corpora.org; they are complemented by a special issue of the Journal of Language Resources and Evaluation, which appeared in 2008, and by a state-of-the-art book published by Springer in 2009.

Aims
As always, we aim for a wide cross-section of the field, with contributions ranging from collection efforts, coding, validation and analysis methods to tools and applications of multimodal corpora. This year, however, we also want to look ahead and emphasize that a growing segment of research views spoken language as situated action, in which linguistic and non-linguistic actions are intertwined with the dynamic conditions of the situation and place in which they occur. In spite of this, most corpora capture little more than the linguistic and meta-linguistic actions themselves, and contain little or no information about the situation in which these take place. For this reason, we encourage contributions that ask what should be added to future multimodal corpora – with possibilities ranging from simple dynamic information such as background noise, room temperature, light conditions and room dimensions to more complex models of room contents, external events or scents, or cognitive load modelling including physiological data such as breathing or pulse. We hope that, with your help, the workshop will examine the way language is conceived in corpus creation and spark a discussion of its boundaries and of how these should be accounted for in annotation and interpretation.

Time schedule
The workshop will consist of a morning session and an afternoon session. There will be time for collective discussions.

Topics
The LREC'2012 workshop on multimodal corpora will feature a special session on the collection, annotation and analysis of corpora of situated interaction.

Other topics to be addressed include, but are not limited to:

  • Multimodal corpus collection activities (e.g. direction-giving dialogues, emotional behaviour, human-avatar interaction, human-robot interaction, etc.) and descriptions of existing multimodal resources
  • Relations between modalities in natural (human) interaction and in human-computer interaction
  • Multimodal interaction in specific scenarios, e.g. group interaction in meetings
  • Coding schemes for the annotation of multimodal corpora
  • Evaluation and validation of multimodal annotations
  • Methods, tools, and best practices for the acquisition, creation, management, access, distribution, and use of multimedia and multimodal corpora
  • Interoperability between multimodal annotation tools (exchange formats, conversion tools, standardization)
  • Collaborative coding
  • Metadata descriptions of multimodal corpora
  • Automatic annotation, based e.g. on motion capture or image processing, and the integration with manual annotations
  • Corpus-based design of multimodal and multimedia systems, in particular systems that involve human-like modalities either as input (Virtual Reality, motion capture, etc.) or as output (virtual characters)
  • Automated multimodal fusion and/or generation (e.g., coordinated speech, gaze, gesture, facial expressions)
  • Machine learning applied to multimodal data
  • Multimodal dialogue modelling

Important dates

  • Deadline for paper submission (complete paper): 12 February 2012
  • Notification of acceptance: 10 March 2012
  • Final version of accepted paper: 26 March 2012
  • Final program and proceedings: 20 April 2012
  • Workshop: 22 May 2012


Submissions
The workshop will consist primarily of paper presentations and discussion/working sessions. Submissions must be in English, should be 4 pages long, and should follow the submission guidelines at
http://www.lrec-conf.org/lrec2012/
Submissions should be made at: https://www.softconf.com/lrec2012/MMCorpora2012/
Demonstrations of multimodal corpora and related tools are also encouraged (a demonstration outline of 2 pages may be submitted).

LREC Map of Language Resources, Technologies and Evaluation
When submitting a paper from the START page, authors will be asked to provide essential information about resources (in a broad sense, i.e. also technologies, standards, evaluation kits, etc.) that either have been used for the work described in the paper or are a new result of their research (a contribution to building the LREC2012 Map).

Organizing committee
Jens Edlund, KTH Royal Institute of Technology, Sweden
Dirk Heylen, University of Twente, The Netherlands
Patrizia Paggio, University of Copenhagen, Denmark/University of Malta, Malta




