ISCA - International Speech
Communication Association


ISCApad #194

Monday, August 04, 2014 by Chris Wellekens

6-12 (2014-03-23) Post-doc position at LIMSI-CNRS in the Spoken Language Processing group


A post-doc position is available at LIMSI-CNRS (Orsay, France), in the context of the ANR-funded CHIST-ERA CAMOMILE project (Collaborative Annotation of multi-MOdal, MultI-Lingual and multi-mEdia documents).

Context of the project

Human activity constantly generates large volumes of heterogeneous data, in particular via the Web. These data can be collected and explored to gain new insights into the social sciences, linguistics, economics and behavioural studies, as well as artificial intelligence and computer science.
In this regard, 3M (multimodal, multimedia, multilingual) data can be seen as a paradigm for sharing an object of study, human data, between many scientific domains. But to be really useful, these data should be annotated and available in very large amounts. Annotated data are useful for computer science, which processes human data with statistical machine learning methods, but also for the social sciences, which increasingly rely on large corpora to support new insights, in a way that was not imaginable a few years ago. However, annotating data is costly, as it involves a large amount of manual work; 3M data, for which different modalities must be annotated at different levels of abstraction, are especially costly in this regard. Current annotation frameworks involve local manual annotation, sometimes with the help of automatic tools (mainly for pre-segmentation).
The project aims at developing a first prototype of a collaborative annotation framework for 3M data, in which manual annotation is performed remotely at many sites, while the final annotation is consolidated at the main site. Following the same principle, systems for the automatic processing of the modalities present in the multimedia data (speech, vision) will assist the transcription by producing automatic annotations. These automatic annotations are produced remotely at each site of expertise and then combined locally into meaningful assistance for the annotators.
In order to develop this new annotation concept, we will test it on a practical case study: person annotation in video (who is speaking? who is seen?), which requires the collaboration of high-level automatic systems dealing with different media (video, speech, audio tracks, OCR, ...). The quality of the annotated data will be evaluated through the task of person retrieval.
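As an illustration of the fusion step described above, the per-modality hypotheses (speaker recognition, face recognition, OCR, ...) could be combined by a simple weighted vote per video segment. This is only a minimal sketch: the function name, data layout and weights are assumptions for illustration, not the project's actual design.

```python
# Hypothetical sketch of combining automatic person annotations from
# several modality-specific systems by weighted vote. All names and
# weights below are illustrative assumptions.
from collections import defaultdict

def combine_annotations(hypotheses, weights):
    """Fuse {modality: {segment: person}} maps into one {segment: person} map.

    hypotheses: per-modality person labels, keyed by video segment.
    weights: per-modality confidence weights (default 1.0).
    """
    scores = defaultdict(lambda: defaultdict(float))
    for modality, labels in hypotheses.items():
        w = weights.get(modality, 1.0)
        for segment, person in labels.items():
            scores[segment][person] += w
    # Keep the highest-scoring person hypothesis for each segment.
    return {seg: max(cand, key=cand.get) for seg, cand in scores.items()}

# Toy example: two segments, three modality systems.
hyps = {
    "speaker": {("00:10", "00:20"): "Alice", ("00:20", "00:30"): "Bob"},
    "face":    {("00:10", "00:20"): "Alice", ("00:20", "00:30"): "Alice"},
    "ocr":     {("00:20", "00:30"): "Bob"},
}
fused = combine_annotations(hyps, {"speaker": 1.0, "face": 0.8, "ocr": 0.5})
# fused → {("00:10", "00:20"): "Alice", ("00:20", "00:30"): "Bob"}
```

In practice the project would combine richer hypotheses (scores, overlapping segments) rather than hard labels, but the principle of merging remotely produced annotations into a single local suggestion is the same.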
This new way of envisioning the annotation process should lead to methodologies, tools, instruments and data that are useful to the whole scientific community with an interest in 3M annotated data.

Requirements and objectives

A PhD in a field related to the project (speech processing, computer vision or machine learning) is required. The candidate will carry out research on multimodal person recognition in videos (speaker recognition, face recognition or multimodal fusion) and will also be involved, together with the partners, in the development of the distributed annotation framework. Knowledge of the JavaScript and Python programming languages is needed for work on this framework. The salary will follow the standard CNRS rules for contract researchers, according to the candidate's experience.


Contacts
  • Claude Barras (Claude.Barras [at]
  • Hervé Bredin (Herve.Bredin [at]
  • Gilles Adda (Gilles.Adda [at]


  • Opening date: May 2014
  • Duration: 18 months

