ISCA - International Speech Communication Association



ISCApad #196

Sunday, October 12, 2014 by Chris Wellekens

6 Jobs
6-1(2014-05-12) Proposal for an INRIA PhD fellowship (Cordi-S)

Proposal for an INRIA PhD fellowship (Cordi-S)

Title of the proposal: Nonlinear speech analysis for differential diagnosis between Parkinson's disease and Multiple-System Atrophy

Project Team INRIA: GeoStat (http://geostat.bordeaux.inria.fr/)

Supervisor: Khalid Daoudi (khalid.daoudi@inria.fr)

Scientific context:

Parkinson's disease (PD) is the most common neurodegenerative disorder after Alzheimer's disease. Prevalence is 1.5% of the population over age 65, and the disease affects about 143,000 people in France. Given the aging of the population, prevalence is likely to increase over the next decade.

Multiple-System Atrophy (MSA) is a rare, sporadic neurodegenerative disorder of adults, of progressive evolution and unknown etiology. MSA has a prevalence of 2 to 5 per 100,000 and has no effective treatment. It usually starts in the sixth decade of life, with a slight male predominance. On average, it takes 3 years from the first signs of the disease for a patient to require a walking aid, 4-6 years to be in a wheelchair, and about 8 years to be bedridden.

PD and MSA require different treatment and support. However, the differential diagnosis between PD and MSA is very difficult because, at the early stage of the diseases, patients look alike until signs such as dysautonomia become more clearly established in MSA patients. There is currently no validated clinical or biological marker that clearly distinguishes the two diseases at an early stage.

Goal:

Voice and speech disorders in Parkinson's disease are a clinical marker that coincides with motor disability and the onset of cognitive impairment. The term commonly used to describe these disorders is dysarthria [1,2].

Like PD patients, and depending on which areas of the brain are damaged, people with MSA may also have speech disorders: articulation difficulties, staccato rhythm, squeaky or muted voice. Dysarthria in MSA is more severe and appears earlier, in the sense that it requires earlier rehabilitation than in PD.

Since dysarthria is an early symptom of both diseases, with different origins, the purpose of this thesis is to use dysarthria, through digital processing of patients' voice recordings, as a means of objective discrimination between PD and MSA. The ultimate goal is to develop a numerical dysarthria measure, based on the analysis of the patients' speech signal, which allows objective discrimination between PD and MSA and would thus complement the tools currently available to neurologists for the differential diagnosis of the two diseases.

Project:

Pathological voices, such as in PD and MSA, generally present strong nonlinearity and turbulence [3]. Nonlinear/turbulent phenomena are not naturally suited to linear signal processing, which nevertheless dominates current speech technology. Thus, from the methodological point of view, the goal of this thesis is to investigate the framework of nonlinear and turbulent signals and systems, which is better suited to analyzing the range of nonlinear and turbulent phenomena observed in pathological voices in general, and in PD and MSA voices in particular. We will adopt an approach based on novel nonlinear speech analysis algorithms recently developed in the GeoStat team [4], which have led, in particular, to new and promising techniques for pathological voice analysis. The goal will be to extract relevant speech features in order to design new dysarthria measures that enable accurate discrimination between PD and MSA voices. This will also require investigating machine learning theory in order to develop robust classifiers (to discriminate between PD and MSA voices) and to relate (via regression) speech measures to standard clinical ratings.

The PhD candidate will actively participate, in coordination with neurologists from the Parkinson's Center of Pellegrin Hospital in Bordeaux, in setting up the experimental protocol and collecting the data. The latter will consist of recording patients' voices using a digital recorder and the DIANA/EVA2 workstations (http://www.sqlab.fr/).
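As a toy illustration of the kind of acoustic measurement such dysarthria analysis builds on (purely illustrative; the project's actual algorithms are the nonlinear methods of [4]), local jitter quantifies cycle-to-cycle instability of the pitch period, which tends to be elevated in pathological voices:

```python
def local_jitter(periods):
    """Mean absolute difference between consecutive pitch periods,
    normalized by the mean period (a classic perturbation measure)."""
    if len(periods) < 2:
        raise ValueError("need at least two pitch periods")
    diffs = [abs(b - a) for a, b in zip(periods, periods[1:])]
    return (sum(diffs) / len(diffs)) / (sum(periods) / len(periods))

# A steady voice shows low jitter; an unstable one shows higher jitter.
steady = [0.0100, 0.0101, 0.0100, 0.0101]      # pitch periods in seconds
unstable = [0.0100, 0.0112, 0.0095, 0.0110]
print(local_jitter(steady) < local_jitter(unstable))  # True
```

Real analyses would of course estimate the pitch periods from the recorded signal first; the feature itself is this simple ratio.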

References:

[1] Pinto, S. et al. Treatments for dysarthria in Parkinson's disease. The Lancet Neurology, Vol. 3, Issue 9, 2004.

[2] Auzou, P.; Rolland, V.; Pinto, S.; Ozsancak, C. (eds.). Les dysarthries. Editions Solal, 2007.

[3] Tsanas, A. et al. Novel speech signal processing algorithms for high-accuracy classification of Parkinson's disease. IEEE Transactions on Biomedical Engineering, 2012; 59(5):1264-1271.

[4] PhD thesis of Vahid Khanagha. GeoStat team, INRIA Bordeaux Sud-Ouest, January 2013. http://geostat.bordeaux.inria.fr/images/vahid%20khanagha%204737.pdf

Duration: 3 years (starting fall 2014)

Net salary: ~1700 € / month (including health care insurance)

Prerequisites: A good level in signal/speech processing is necessary, as well as Matlab and C/C++ programming. Knowledge of machine learning would be a strong advantage.

Candidates should send a CV to khalid.daoudi@inria.fr and also apply via the Inria website:
http://www.inria.fr/en/institute/recruitment/offers/phd/campaign-2014/(view)/details.html?id=PNGFK026203F3VBQB6G68LOE1&LOV5=4509&LG=EN&Resultsperpage=20&nPostingID=8324&nPostingTargetID=14059&option=52&sort=DESC&nDepartmentID=28


6-2(2014-05-12) PhD thesis at Orange Labs

PhD thesis at Orange Labs: speaker analysis of a multimedia document collection

 

The people who appear in multimedia content are a key piece of metadata for content search and navigation.

 

Person-oriented analysis of a multimedia document, on its audio component, first involves a step of segmentation into speaker turns, followed by clustering of the turns coming from the same speaker. Then, a step of extracting characteristics of the speaker (e.g. their role) and a step of identifying the speaker are possible. Speaker identification can be carried out either using biometric characteristics, which requires a pre-existing biometric model of the speaker's voice, or using an identity-inference model based on information that allows speakers to be named unambiguously (for example, using the contexts of names detected in on-screen overlaid text, in the speech, or in the subtitles).

 

While the vast majority of person-oriented analyses of multimedia content has so far focused on audio documents taken in isolation, recent studies in speaker diarization address the 'cross-content' aspect (appearing in the literature under the terms 'cross-show speaker diarization', 'speaker linking' or 'speaker attribution'), in order to associate the speech turns of a given speaker across different contents. The approach proposed in this thesis is to push this cross-content aspect further, by addressing speaker analysis at the level of collections, where a collection is defined as a set of audiovisual documents sharing common characteristics (e.g. programme name, broadcast date, topic, etc.).
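As a toy sketch of the turn-linking step described above (the 2-D embeddings, Euclidean distance and threshold are illustrative assumptions, not the method the thesis will develop), speaker turns can be greedily grouped by distance to cluster centroids:

```python
import math

def link_speaker_turns(turn_embeddings, threshold):
    """Greedy clustering of speaker-turn embeddings: each turn joins the
    nearest existing cluster centroid if it is closer than `threshold`,
    otherwise it starts a new (speaker) cluster."""
    clusters = []  # each cluster: list of turn indices
    for i, emb in enumerate(turn_embeddings):
        best, best_dist = None, threshold
        for cluster in clusters:
            centroid = [sum(turn_embeddings[j][d] for j in cluster) / len(cluster)
                        for d in range(len(emb))]
            dist = math.dist(emb, centroid)
            if dist < best_dist:
                best, best_dist = cluster, dist
        if best is not None:
            best.append(i)
        else:
            clusters.append([i])
    return clusters

# Turns 0-1 come from one speaker, turns 2-3 from another (toy 2-D embeddings).
turns = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
print(link_speaker_turns(turns, threshold=1.0))  # [[0, 1], [2, 3]]
```

Real diarization systems replace these toy vectors with learned speaker embeddings and use probabilistic clustering, but the cross-content linking logic has this general shape.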

 

This collection-based approach should, on the one hand, improve robustness and performance, and on the other hand offer a synthetic representation of the collection in terms of people, as well as new ways of exploring the collection through the analysis of the relations between the people it contains.

For example, if the collection consists of several episodes of the same programme, the objective could be to infer the structure of the programme (presenter, columnists, guests) and, in particular, to identify the guests. If the collection concerns documents about the news over a short time period, speaker analysis of this collection would make it possible to study an event through the set of its actors, and could usefully complement news-tracking technologies.

 

The thesis will take place at Orange Labs in Lannion, as a 36-month fixed-term contract (CDD), with an attractive salary.

It is aimed at a graduate student (Master's or engineering degree) with skills in automatic speech processing and/or data mining and machine learning.

For more information: http://orange.jobs/jobs/offer.do?joid=38569&lang=fr&wmode=light

Or contact directly: delphine.charlet@orange.com

 


6-3(2014-05-15) Post-Doc Position 'Visual analysis and synthesis of affects', Univ.Bordeaux, F
Post-Doc Position 'Visual analysis and synthesis of affects'

Description:
In communication between humans, or between humans and machines, the expression of the speaker's mental states, emotions, feelings, intentions or attitudes is a source of information that plays an important role in understanding the context of speech. The study of psychological and emotional states provides further evidence of the fundamental role of affects in communication understanding. Affects are expressed through different levels of audio and visual cognitive processes: some expressions cannot be voluntarily controlled (emotions) while others are intentional (attitudes and expressive language, choice of vocabulary and grammatical paraphrases). Affects depend on culture, which may lead to misinterpretations in communication. They should therefore be taken into account when learning foreign languages. Similarly, affects should be integrated into automatic translators. Affects are present both in speech and in body language. We are here particularly interested in finding the visual characteristics of attitudinal expressions (seduction, irony, irritation, admiration, etc.) in three cultures (US, Japan and France).
The postdoctoral fellow will first extract different spatio-temporal descriptors for the classification of gestural and facial expressions. Existing studies in cognitive science will be used to select the right characteristics for each part of the face and body; if needed, new descriptors will be designed. He or she will also participate in validation through perception tests on panels of native speakers. In a second part, we are interested in generating attitudes by morphing from a neutral expression to a given attitude. This second phase will also include improving our voice transformation technique.

Profile of applicant:
PhD in computer science, cognitive science or applied mathematics. Solid experience with image processing is required, as well as good programming skills in Matlab and/or C/C++. Knowledge of speech processing techniques is definitely a plus.

Duration: 16 months
Job status: Postdoctoral researcher, full time
Location: LaBRI, UMR 5800, Université de Bordeaux, Talence, France
Starting date: As soon as possible

Details of the announcement: http://cpu.labex.u-bordeaux.fr/Jobs/Post-doc_Visual-analysis-and-synthesis-of-affects,Job-140.html

Supervisors/Contact:
Aurélie Bugeau, LaBRI: aurelie.bugeau@labri.fr
Takaaki Shochi, CLLE-ERSSàB, LaBRI: Takaaki.Shochi@u-bordeaux3.fr
Jean-Luc Rouas, LaBRI: jean-luc.rouas@labri.fr
Application:

A CV and a motivation letter have to be sent to Aurélie Bugeau, Takaaki Shochi and Jean-Luc Rouas.
Application deadline: July 2014


6-4(2014-05-16) Jointly supervised PhD offer, LIG (Grenoble) / DDL (Lyon)
Jointly supervised PhD offer between LIG (Grenoble) and DDL (Lyon) - Starting October 2014


Automatic speech processing to assist in the description of African languages.

 

This thesis, funded by the ANR project ALFFA (African Languages in the Field: Speech Fundamentals and Automation) and jointly supervised by a computer science laboratory and a linguistics laboratory, consists of proposing and evaluating the contribution of automatic speech processing tools to help field linguists in their language description work (corpus recording, phonetic analysis, etc.).

In addition to operational work on the ALFFA project (participation in the project's activities, building automatic speech recognition systems for various African languages), the exploratory part of this thesis will be devoted to proposing tools and methods (preferably on mobile devices) for machine-assisted fieldwork (speech signal segmentation, prosodic analyses, automatic labelling by forced alignment, etc.), and to their evaluation on concrete use cases (large-scale analysis of phonological particularities of endangered languages, etc.).

Travel to West Africa is to be expected as part of the thesis.

 

Desired profile:

- Required: Master's or engineering degree in computer science; experience in the field of speech processing

- A plus: interest in languages (phonetics, linguistics); experience in mobile application development

- Other qualities: writing skills (for publications in conferences such as Interspeech, LabPhon, etc.), communication skills, teamwork

 

Summary of the ALFFA project:

The number of languages spoken in Africa ranges from 1,000 to 2,500, depending on estimates and definitions. Monolingual states do not really exist on this continent, since languages cross borders. The number of languages per country varies from 2 or 3, in Burundi and Rwanda, to more than 400 in Nigeria. Multilingualism is indeed omnipresent in sub-Saharan African societies.

Today, conditions are very favourable for the development of a speech processing market for African languages. People access ICT mainly through mobile phones (and keyboards), and the need for voice services can be seen in every sector: from the highest priorities (health, food) to the most playful (games, social networks).

To achieve this, the language barrier must be overcome, and that is what we propose in this project, which covers two main aspects: fundamental aspects of spoken language analysis (language description, phonology, dialectology) and speech technologies (recognition and synthesis) for African languages. The ALFFA project is interdisciplinary: it brings together not only technology experts (LIA, LIG, VOXYGEN) but also field linguists and phoneticians (DDL). Within the project, the technologies developed would be used to create micro voice services for mobile phones in Africa (for example, a telephone service for checking food prices or providing local information, etc.).

 
Project website: http://alffa.imag.fr
 
 

6-5(2014-05-17) Research Assistant position, Laboratoire Parole et Langage, Aix-en-Provence, France
Call for application

A five-month Research Assistant position is open for application at the Laboratoire Parole et Langage, Aix-en-Provence, France. The successful candidate will participate in the SPIC (Speaking in Concert: Cerebral, articulatory and acoustic convergence between speakers in conversational interaction) research project, jointly conducted by Luciano Fadiga, Leonardo Badino and Alessandro D'Ausilio (Italian Institute of Technology, Genova, Italy) and Noël Nguyen, Simone Falk and Thierry Legou (Laboratoire Parole et Langage). The goal of this project is to better understand what makes speakers sound more like each other in conversational interaction, by means of simultaneously recorded cerebral, articulatory and acoustic data. The position is funded by the Brain and Language Research Institute at Aix-Marseille University (blri.fr).

At the end of the contract, the candidate will be offered the possibility of starting a PhD with a joint affiliation to the IIT and the LPL, within the framework of the same project.

Qualifications: Master's degree in any of the following disciplines: Speech and Language Sciences, Speech and Language Technology, Neurosciences, Cognitive Sciences. A strong background in computer science and mathematics is necessary. Experience with EEG or articulatory measures in speech production will be appreciated.

Salary: 1760 euros / month.

Starting date: as early as June 2014.

How to apply: Applicants should send a cover letter, a CV and the names and contact information of two references to Noël Nguyen (noel.nguyen@lpl-aix.fr).

Links:
. Laboratoire Parole et Langage: http://www.lpl-aix.fr
. Robotics, Brain and Cognitive Sciences Department, Italian Institute of Technology: http://www.iit.it/en/research/departments/robotics-brain-and-cognitive-sciences.html
. Brain and Language Research Institute: http://blri.fr

6-6(2014-05-19) PhD thesis - ASLAN 2014-2017, Univ. Lyon 1-2, F

 

PhD OFFER - ASLAN 2014-2017

BABBLING AND ORAL FEEDING SKILLS

Applications should be returned electronically to Sophie Kern (Sophie.Kern@univ-lyon2.fr) and Mélanie Canault (melanie.canault@univ-lyon1.fr).

Framework of the thesis

This thesis is funded by the ASLAN laboratory of excellence (Advanced Studies on Language Complexity). Funding amounts to about 1,350 € net per month over a period of 3 years.

Scientific supervisors: Sophie KERN and Mélanie CANAULT

Host laboratory: DDL, Dynamique Du Langage (UMR 5596 CNRS - Université Lumière Lyon 2, Lyon, France)

Recruitment period: September-October 2014

Application deadline: 31 May 2014

Required documents: a detailed CV together with a cover letter. The CV must clearly describe the candidate's academic background and acquired skills. Transcripts for Master 1 and Master 2 are also required.

Candidate profile

This PhD proposal is mainly aimed at students holding a Master's degree in language sciences or cognitive science. The selected candidate must be interested in language acquisition in very young children. Foreign applications are admissible provided the candidate has an excellent command of French, both spoken and written.

Ideally, the candidate should have knowledge in one or more of the following areas:

- Developmental psycholinguistics

- Acoustic phonetics and the use of signal processing software (Praat)

- Oral feeding development

and have experience in experimentation with young children.

Project description

Oro-motor development during the first year of life is an extremely rich and complex process that leads the young child down the path of linguistic development.

The babbling period (6-12 months) is often described as a crucial stage of the language acquisition process, during which the baby's articulatory potential progresses considerably. This period is very easily identified by parents because it corresponds to the emergence of the first syllables. These syllables are thought to result from the superposition of the vertical movement of the mandible on phonation (MacNeilage 1998).

At the babbling stage, the mandible is therefore a dominant articulator. This is explained by the anatomical, cerebral and motor links between speech activity and feeding activity (Luschei & Goldberg 1981, Lund & Enomoto 1988, Rizzolatti et al. 1996, Fogassi & Ferrari 2005), and it is partly for these reasons that language professionals establish a close link between the development of oral feeding skills and that of language (Rééducation Orthophonique 2004).

The mandible is thus directly involved in the development of orality. Nevertheless, the baby's motor control is immature, and its mandibular movements are slower than those of the adult. Indeed, adult speech proceeds at a rate of 5-6 Hz (Jürgens 1998, Lindblom 1983), whereas early productions are around 2.5-3 Hz (Dolata 2008). The timing of mandibular movements must therefore reorganize over the course of development. We hypothesize that important changes occur during the first year: first, because studies have shown that the kinematic patterns of the mandible approach those of the adult from the age of one year (Green et al. 2000, 2002), and second, because preliminary work (Canault & Laboissière 2011, Fouache & Malcor 2013) has allowed us to show, through the observation of syllable duration, that an acceleration of mandibular oscillation begins between 8 and 12 months of age. However, these trends remain to be confirmed.
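The oscillation rates quoted above (5-6 Hz for adults, around 2.5-3 Hz for early productions) translate directly into syllable durations, since one mandibular cycle corresponds roughly to one syllable (duration ≈ 1 / rate). A quick check of the figures:

```python
def syllable_duration(rate_hz):
    """Approximate syllable duration (seconds) from a mandibular
    oscillation rate, assuming one cycle ~ one syllable."""
    return 1.0 / rate_hz

# Adult speech at 5 Hz vs. early productions around 2.5 Hz:
print(syllable_duration(5.0))   # 0.2 -> ~200 ms per syllable
print(syllable_duration(2.5))   # 0.4 -> ~400 ms per syllable
```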

The stakes appear high given the predictive nature of babbling. Previous work has indeed shown that babbling and first-word productions can account for later articulatory and communicative potential (Stoel-Gammon 1988, Stark et al. 1988, Oller et al. 1999, Levin 1999, Otapowicz et al. 2007, Nip et al. 2010), but none of it relies on the temporal parameter, nor on the segmental and structural characteristics of the productions.

Objectives

1. Define the key stages of oral feeding development during the first two years of life.

2. Establish the link between the stages of oral feeding development and the stages of babbling, in terms of oscillation frequencies and structural characteristics of the utterances (e.g. frequency of reduplications, segmental content, etc.).

3. Determine the links between the characteristics of babbling and lexical development.

Canault, M. & Laboissière, R. (2011). Le babillage et le développement des compétences articulatoires : indices temporels et moteurs. Faits de langues, 37, 173-188.

Dolata J.K., Davis, B.L. & MacNeilage, P.F. (2008). Characteristics of the rhythmic organization of vocal babbling: implications for an amodal linguistic rhythm. Infant behavior & development, 31 (3), 422-431.

Fogassi L. & Ferrari P.F. (2005). Mirror neurons, gestures and language evolution. Interaction Studies, 5 (3), 345-363.

Fouache, M. & Malcor M. (2013). Evolution de la fréquence d’oscillation mandibulaire du babillage canonique aux premiers mots. Mémoire d’orthophonie, Université Lyon1.

Green, J.R., Moore, C.A., Higashikawa, M. & Steeve, R.W. (2000). The physiologic development of speech motor control: lip and jaw coordination. Journal of Speech, Language, and Hearing Research, 43, 239-255.

Green, J.R., Moore, C.A. & Reilly, K.J. (2002). The sequential development of jaw and lip control for speech. Journal of Speech, Language, and Hearing Research, 45, 66-79.

Jürgens U. (1998). Speech evolved from vocalization, not mastication. Commentary on MacNeilage P.F. (1998), The Frame/Content theory of evolution of speech production. Behavioral and Brain Sciences, 21, 519-520.

Kern, S. & Gayraud, F (2010). IFDC. Les éditions la cigale, Grenoble.

Levin K. (1999). Babbling in infants with cerebral palsy. Clinical Linguistics & Phonetics, 13 (4), 249-267.

Lindblom B. (1983). Economy of speech gestures. In The Production of Speech. MacNeilage P.F. (Ed.).New York, Springer, 217-245.

Lund J.P. & Enomoto S. (1988).The generation of mastication by the central nervous system. In Neural control of rhythmic movement. Cohen A., Rossignol S. & Grillenr S. (Eds.). New York, Wiley,41-72.

Luschei E.S. & Goldberg L.J. (1981). Mastication and voluntary biting. In Handbook of physiology: the nervous system, vol.2. Brooks V.B. (Ed.). Bethesda, American Physiological Society, 1237-1274.

MacNeilage P.F. (1998).The Frame/Content theory of evolution of speech production. Behavioral and Brain Sciences, 21, 499-546.

Nip I.S.B., Green J.R. & Marx D.B. (2010). The co-emergence of cognition, language, and speech motor control in early development : A longitudinal correlation study. Journal of Communication Disorders, 44 (2), 149-160.

Oller D.K., Eilers R.E., Neal A.R. & Schwartz H.K. (1999). Precursors to speech in infancy: the prediction of speech and language disorders. Journal of Communication Disorders, 32, 223-245.

Otapowicz D., Sobaniec W., Kutak W., Sendrowski K. (2007). Severity of dysarthric speech in children with infantile cerebral palsy in correlation with the brain CT and MRI. Advances in Medical Sciences, 52, 188-223.

Les troubles de l’oralité alimentaire, Rééducation Orthophonique, 220, 2004.

Rizzolatti G., Fadiga L., Gallese V. & Fogassi L. (1996). Premotor cortex and the recognition of motor actions. Cognitive Brain Research, 3, 131-141.

Stark R.E., Ansel B.M. & Bond J. (1988). Are prelinguistic abilities predictive of learning disability? A follow-up study. In Preschool prevention of reading failure. Maslan R.L. & Masland M. (Eds.). Parkton, York Press.

Stoel-Gammon C. 1988. Prelinguistic vocalisations of hearing-impaired and normally hearing subjects: a comparison of consonantal inventories. Journal of Speech and Hearing Disorders, 53, 302-315.


6-7(2014-05-26) PhD Fellowships at FONDAZIONE Bruno Kessler, Trento, Italy

PhD Fellowships to start in 2014

The HLT unit of FBK will sponsor three project-specific grants for doctoral students starting in the academic year 2014-2015. Doctoral fellowships will be formally pursued at the ICT International Doctorate School of the University of Trento. The program starts between September and November 2014, lasts a minimum of three years, and includes course attendance during the first two years, while the third is fully dedicated to research work.

The call is expected to open at the beginning of May 2014. Potential candidates are invited to contact us in advance for preliminary interviews. There is also the possibility to begin with an internship in our group during the summer before the official PhD program starts. The call closes on 16 June 2014.

Topic: Machine Translation

Title: Human in the loop for advanced machine translation

Nowadays, human translation and machine translation are no longer antithetical opposites. Rather, the two worlds are getting closer and have started to complement each other. On one side, the evolution of the translation industry shows a clear trend towards the adoption of Machine Translation (MT) as a primary support to professional translators. On the other side, the variety of data that can be collected from human feedback provides MT research with an unprecedented wealth of knowledge about the dynamics (practical and cognitive) of the translation process. The future is a symbiotic scenario where humans are assisted by reliable MT technology that, at the same time, continuously evolves by learning from the translators' activity. This grant aims to turn this vision into reality. The candidate will join a world-class research effort developing new MT technology capable of integrating information obtained unobtrusively from real professional translation workflows. Relevant topics include: i) the extraction and generalization of knowledge (e.g. translation and correction strategies) from different types of human feedback; ii) projecting the acquired knowledge onto the core MT components; iii) modeling cognitive aspects of the translation process; iv) evaluating the effect of machine translation on human translation.

Contacts: Marco Turchi and Matteo Negri

 

 

Topic: Automatic Speech Recognition

Title:  Acoustic Modeling for Speech Recognition 

FBK has been pursuing research in automatic speech recognition (ASR) for two decades, with the goal of developing state-of-the-art technology for interactive- and found-speech recognition, and of addressing applications ranging from speech analytics over the phone line to transcription of speech as found in any audio/visual document. Languages we are working with include Italian, English, Spanish, German, Dutch, Arabic, Turkish, Russian, Portuguese and French.

Although FBK is interested in applicants in all areas of automatic transcription technology, the most relevant topics for this call are in the areas of acoustic modeling for large-vocabulary ASR (which includes, for example, neural networks in ASR, building ASR systems for under-resourced languages, speaker-adaptive training, methods for fast and efficient adaptation to changing application domains, and data selection methods for acoustic model training), speaker diarization and spoken language detection. The candidate will join a world-class research effort developing new ASR technology and advancing beyond the state of the art, taking advantage of the extensive experience gained by FBK during the last 20 years.

Contact: Diego Giuliani

 

 

Topic: Content Processing

Title: Building the Web of Data exploiting Natural Language Processing

The Web of Data is about making information available on the Web accessible to machines, hence transforming how information can be found and manipulated. Of all the recent initiatives oriented towards creating the Web of Data, Wikidata is the most relevant. According to its promoters, 'the project aims to build a free knowledge base about the world that can be read and edited by humans and machines alike.' In this PhD, the candidate is asked to investigate natural language processing and machine learning techniques that can be used to contribute automatically to Wikidata. Specifically, semi-supervised approaches will be investigated that can bootstrap from the data already available in Wikidata and other resources such as DBpedia and Freebase. Furthermore, careful consideration will have to be given to developing approaches applicable to different languages. Finally, as Wikidata will be edited by both humans and machines, active learning could play a crucial role and open new research challenges due to the crowd-sourcing approach: will the automatic approach be able to interact with the other users during the discussions necessary to collect/approve/filter the data to publish?

Contacts: Claudio Giuliano


6-8 Revue TAL: special issue on spoken language processing

 

Special issue on spoken language processing

Guest editors: Laurent Besacier, Wolfgang Minker

 

Speech is the most natural way to communicate and interact (with a machine or with another person). Spoken language processing and dialogue now have many direct applications in areas such as (but not limited to) information retrieval, natural language interaction with mobile devices, social robotics, assistive technologies, technologies for language learning, etc. However, spoken language processing poses specific problems related to the nature of the speech material itself. Indeed, spontaneous speech utterances have to be processed, and they contain many paralinguistic features. For instance, disfluencies (repetitions, false starts, etc.) reduce the syntactic regularity of utterances. Moreover, spontaneous utterances convey rich information related to emotions. Furthermore, the automatic speech recognition (ASR) step, often required before applying higher-level processing (understanding, translation, analysis, etc.), produces noisy outputs (with errors) which require robust and tight coupling between modules.

We invite contributions on any aspect (theoretical, methodological or practical) of spoken language processing and oral communication, in particular (non-exhaustive list):

 -Automatic speech recognition

-Spoken language understanding

-Speech translation

-Text-to-Speech synthesis

-Man-machine dialogue

-Robust analysis of spoken language

-Analysis of social affects or emotions in spontaneous speech

-Mining spoken language documents

-Spoken language applications (mobile interaction, robotics, etc.)

-Technologies for language learning

-Multilingual aspects of spoken language processing

-Evaluation for spoken language processing

-Corpora and resources for spoken language

-(Spoken) discourse analysis

-Adaptive dialogue (context, user profile)

-Analysis of paralinguistic features in spoken language

 

IMPORTANT DATES

-Call: March 2014
-Submission of contributions: 30 June 2014
-First notification to authors: 15 September 2014
-Publication: end of 2014 / beginning of 2015

Submission format

LANGUAGE
Manuscripts may be submitted in English or French. French-speaking authors are requested to submit their contributions in French.

PAPER SUBMISSION
Papers must describe original, completed, and unpublished work.  Each submission will be reviewed by two programme committee members. 
Papers must be submitted on Sciencesconf platform  http://tal-55-2.sciencesconf.org/ 

Accepted papers may be up to 25 pages long, in PDF format. Style sheets are available for download on the Web site of the TAL journal.

Top

6-9(2014-05-18) (W/M) developer position at IRCAM for Large-Scale Audio Indexing

Position:         1 (W/M) developer position at IRCAM for Large-Scale Audio Indexing

Starting:         September 1st, 2014

Duration:       15 months

Deadline for application:    July 1st, 2014

 

The BeeMusic project aims to describe music in large-scale collections (several million music titles). In this project IRCAM is in charge of developing music content description technologies (automatic recognition of genre or mood, audio fingerprinting, etc.) for large-scale music collections.

 

Position description 201406BMDEV:

-------------------------

For this project IRCAM is looking for a skilled developer for:
- the C++ development of the audio content analysis technologies;
- the development of the management system for the storage, access and search of distributed data (audio and metadata).
He/she will also be in charge of developing scalable search algorithms.

 

Required profile:

-------------------------

*High skill in C++ development (including template-based metaprogramming)

*High skill in scalable indexing technologies and distributed frameworks (hash tables, Hadoop, SOLR)

*High skill in database management systems

*Good skills in Matlab, Python and Java

*Good knowledge of Linux, Mac OS X and Windows development environments (gcc, Intel and MSVC, svn)

*High productivity, methodical work, excellent programming style.

 

The developer will collaborate with the project team and participate in the project activities (evaluation of technologies, meetings, specifications, reports).  

 

Introduction to IRCAM:

-------------------------

IRCAM is a leading non-profit organization associated with the Centre Pompidou, dedicated to music production, R&D and education in sound and music technologies. It hosts composers, researchers and students from many countries cooperating in contemporary music production and in scientific and applied research. The main topics addressed in its R&D department include acoustics, audio signal processing, computer music, interaction technologies, and musicology. IRCAM is located in the center of Paris near the Centre Pompidou, at 1, Place Igor Stravinsky, 75004 Paris.

 

Salary:

-------------------------

According to background and experience

 

Applications:

-------------------------

Please send an application letter with the reference 201406BMDEV together with your resume and any suitable information addressing the above issues preferably by email to: peeters_at_ircam_dot_fr with cc to vinet_at_ircam_dot_fr, roebel_at_ircam_dot_fr.

 

 

Top

6-10(2014-05-29) (W/M) researcher positions SKAT-VG project at IRCAM, Paris


Top

6-11(2014-05-29) (W/M) researcher positions at IRCAM for Large-Scale Audio Indexing, Paris

ENGLISH VERSION:

 

 

Positions: 2 (W/M) researcher positions at IRCAM for Large-Scale Audio Indexing

Starting:     August 18, 2014

Duration:     12 months

Deadline for application: July 1st, 2014

 

 

The BeeMusic project aims to describe music in large-scale collections (several million music titles). In this project IRCAM is in charge of developing music content description technologies (automatic genre or mood recognition, audio fingerprinting, etc.) for large-scale music collections.

 

Position description 201406BMRESA:

 

For this project IRCAM is looking for a researcher to develop automatic genre and mood recognition technologies.

 

The hired researcher will be in charge of the research and development of scalable supervised learning technologies (i.e., scaling GMM, PCA or SVM algorithms) so that they remain applicable to millions of annotated examples.

He/she will then be in charge of the application of the developed technologies for the training of large-scale music genre and music mood models and their application to large-scale music catalogues.
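As a rough illustration of why such training can scale to millions of annotated examples, stochastic-gradient methods update the model one sample at a time, so memory stays constant in the dataset size. The sketch below trains a toy linear SVM with hinge-loss SGD on synthetic 2-D data; it only illustrates the principle and is not the project's actual technology.

```python
# Toy linear SVM trained by SGD on the hinge loss with L2 penalty.
# Each update touches one sample, so memory is O(dim) regardless of
# how many training examples stream past.
import random

def train_linear_svm(data, dim, epochs=20, lr=0.1, lam=0.01):
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        random.shuffle(data)
        for x, y in data:  # y in {-1, +1}
            margin = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
            # Subgradient step on lam/2*||w||^2 + max(0, 1 - margin)
            if margin < 1:
                w = [wi - lr * (lam * wi - y * xi) for wi, xi in zip(w, x)]
                b += lr * y
            else:
                w = [wi - lr * lam * wi for wi in w]
    return w, b

random.seed(0)
# Two separable 2-D clusters standing in for features of two classes.
data = ([([1.0 + random.random(), 1.0 + random.random()], +1) for _ in range(50)]
        + [([-1.0 - random.random(), -1.0 - random.random()], -1) for _ in range(50)])
w, b = train_linear_svm(data, dim=2)

def predict(x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

print(predict([1.5, 1.5]), predict([-1.5, -1.5]))
```

In production one would of course use an optimized out-of-core learner rather than pure Python, but the per-sample update pattern is the same.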

 

Required profile:

* High skill in audio indexing and data mining (the candidate must hold a PhD in one of these fields)

* Previous experience with scalable machine-learning models

* High skill in Matlab programming, skills in C/C++ programming

* Skill in audio signal processing (spectral analysis, audio-feature extraction, parameter estimation)

* Good knowledge of Linux, Windows and MacOS environments

* High productivity, methodical work, excellent programming style.

 

The hired researcher will also collaborate with the development team and participate in the project activities (evaluation of technologies, meetings, specifications, reports).

 

Position description 201406BMRESB:

 

For this project IRCAM is looking for a researcher to develop audio fingerprint technologies.

 

The hired researcher will be in charge of the research and development of audio fingerprint technologies that are robust to audio degradations (sound captured through mobile phones in noisy environments), and of fingerprint search algorithms for large-scale databases (millions of music titles).
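The large-scale search side of such a system typically rests on an inverted index of fingerprint hashes, so lookup cost is independent of catalogue size. The following toy sketch, with made-up integer "landmarks" standing in for spectral peaks, illustrates the hashing-and-voting idea; it is not IRCAM's actual fingerprint design.

```python
# Hash-based fingerprint lookup: quantized landmarks per title are hashed
# into an inverted index, and a noisy query votes for the title whose
# hashes it hits most often.
from collections import Counter, defaultdict

def hashes(landmarks, fan_out=3):
    """Pair each landmark with the next few, constellation-style."""
    return [(landmarks[i], landmarks[i + j], j)
            for i in range(len(landmarks))
            for j in range(1, fan_out + 1) if i + j < len(landmarks)]

index = defaultdict(list)  # hash -> list of title ids

def add_title(title_id, landmarks):
    for h in hashes(landmarks):
        index[h].append(title_id)

def identify(landmarks):
    votes = Counter(t for h in hashes(landmarks) for t in index.get(h, []))
    return votes.most_common(1)[0][0] if votes else None

add_title("title_A", [3, 1, 4, 1, 5, 9, 2, 6, 5, 3])
add_title("title_B", [2, 7, 1, 8, 2, 8, 1, 8, 2, 8])

# Query: an excerpt of title_A with one corrupted landmark ("noise").
print(identify([4, 1, 5, 0, 2, 6]))
```

Because each query hash is a constant-time dictionary lookup, the vote count degrades gracefully under noise while scaling to very large catalogues; distributed stores (the hash tables, Hadoop or SOLR mentioned below) shard the same index across machines.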

 

Required profile:

* High skill in audio signal processing and audio fingerprint design (the candidate must hold a PhD in one of these fields)

* High skill in indexing technologies and distributed computing (hash tables, Hadoop, SOLR)

* High skill in Matlab programming, skills in Python and Java programming

* Good knowledge of Linux, Windows and MacOS environments

* High productivity, methodical work, excellent programming style.

 

The hired researcher will also collaborate with the development team and participate in the project activities (evaluation of technologies, meetings, specifications, reports).

 

Introduction to IRCAM:

 

IRCAM is a leading non-profit organization associated with the Centre Pompidou, dedicated to music production, R&D and education in sound and music technologies. It hosts composers, researchers and students from many countries cooperating in contemporary music production and in scientific and applied research. The main topics addressed in its R&D department include acoustics, audio signal processing, computer music, interaction technologies and musicology. IRCAM is located in the centre of Paris near the Centre Pompidou, at 1, Place Igor Stravinsky, 75004 Paris.

 

 

Salary:

According to background and experience

 

 

Applications:

Please send an application letter with the reference 201406BMRESA or 201406BMRESB together with your resume and any suitable information addressing the above issues preferably by email to: peeters_a_t_ircam dot fr with cc to vinet_a_t_ircam dot fr, roebel_at_ircam_dot_fr

 

VERSION FRANCAISE:

 

 

Offre d’emploi : 2 postes de chercheur (H/F) à l’IRCAM pour technologies d’indexation audio à grande échelle

Démarrage : 18 Aout 2014

Durée : 12 mois

Date limite pour candidature: 1er juillet 2014

 

 

Le projet BeeMusic a pour objectif de décrire la musique à grande échelle (plusieurs millions de titres musicaux). Dans ce projet, IRCAM est en charge du développement des technologies de description du contenu audio (reconnaissance automatique du genre et de l’humeur musicale, identification audio par fingerprint …) pour des grands catalogues musicaux.

 

Description du poste 201406BMRESA:

 

Pour ce projet, l’IRCAM recherche un/une chercheur(se) pour le développement des technologies de reconnaissance automatique de genre et humeur.

 

Le/la chercheur(se) sera en charge de la recherche et des développements concernant la mise à l’échelle des technologies d’apprentissage supervisé (passage à l’échelle des algorithmes GMM, PCA ou SVM), afin de permettre leur application à des millions de données. Il/elle sera en charge de l’application de ces technologies pour l’entraînement de modèles de genre et d’humeur musicale ainsi que de leur application à des grands catalogues.

 

Profil requis:      

* Très grande expérience en algorithmes d’apprentissage automatique et en techniques d’indexation (le candidat doit avoir un PhD dans un de ces domaines)

* Expérience de passage à l’échelle de ces algorithmes

* Très bonne connaissance de la programmation Matlab, connaissance de la programmation C/C++

* Bonne connaissance du traitement du signal (analyse spectrale, extraction de descripteurs audio, estimation de paramètres) 

* Bonne connaissance des environnements Linux, Windows et Mac OS X.

* Haute productivité, travail méthodique, excellent style de programmation, bonne communication, rigueur.

 

Le/la chercheur(se)  collaborera également avec l’équipe de développement et participera aux activités du projet (évaluation des technologies, réunion, spécifications, rapports).

 

Description du poste 201406BMRESB:

 

Pour ce projet, l’IRCAM recherche un/une chercheur(se) pour le développement des technologies d’identification audio par fingerprint.

 

Le/la chercheur(se) sera en charge de la recherche et du développement de la technologie d’identification audio par fingerprint robuste aux dégradations sonores (capture du son à travers un téléphone mobile en environnement bruité) et des algorithmes de recherche des fingerprints dans une très grande base de données (plusieurs millions de titres musicaux).

 

Profil requis:      

*Très bonne connaissance en traitement du signal et en conception d’audio fingerprint (le candidat doit avoir un PhD dans un de ces domaines)

*Très bonne connaissance en techniques d’indexation et de systèmes distribués (hash-table, Hadoop, SOLR)

* Très bonne connaissance de la programmation Matlab, connaissance de la programmation Python et Java

*Bonne connaissance des environnements Linux, Windows et Mac OS X.

*Haute productivité, travail méthodique, excellent style de programmation, bonne communication, rigueur.

 

Le/la chercheur(se) collaborera également avec l’équipe de développement et participera aux activités du projet (évaluation des technologies, réunions, spécifications, rapports).

 

Présentation de l’Ircam:

 

L'Ircam est une association à but non lucratif, associée au Centre National d'Art et de Culture Georges Pompidou, dont les missions comprennent des activités de recherche, de création et de pédagogie autour de la musique du XXème siècle et de ses relations avec les sciences et technologies. Au sein de son département R&D, des équipes spécialisées mènent des travaux de recherche et de développement informatique dans les domaines de l'acoustique, du traitement des signaux sonores, des technologies d’interaction, de l’informatique musicale et de la musicologie. L'Ircam est situé au centre de Paris à proximité du Centre Georges Pompidou au 1, Place Stravinsky 75004 Paris.

 

 

Salaire:

Selon formation et expérience professionnelle

 

 

Candidatures:

Prière d'envoyer une lettre de motivation avec la référence 201406BMRESA ou 201406BMRESB  et un CV détaillant le niveau d'expérience/expertise dans les domaines mentionnés ci-dessus (ainsi que tout autre information pertinente) à peeters_a_t_ircam dot fr avec copie à

vinet_a_t_ircam dot fr, roebel_at_ircam_dot_fr

 

Top

6-12(2014-05-30) Research Assistant/Associate in Statistical Spoken Dialogue Systems

Research Assistant/Associate in Statistical Spoken Dialogue Systems (Fixed Term)


Applications are invited for a research position in statistical spoken dialogue systems in the Dialogue Systems Group at the Cambridge University Engineering Department. The position is sponsored by Toshiba Cambridge Research Laboratory.

The main focus of the work will be on the development of techniques and algorithms for implementing robust statistical dialogue systems which can support conversations ranging over very wide, potentially open, domains. The work will extend existing techniques for belief tracking and decision making by distributing classifiers and policies over the corresponding ontology.

The successful candidate will have good mathematical skills and be familiar with machine learning. S/he will have the ability to design tests and experiments and to design and develop new methods and algorithms to address research objectives and find solutions. Preference will be given to candidates with specific understanding of Bayesian methods and reinforcement learning, experience in spoken dialogue systems, and strong software engineering skills. Candidates with good communication and writing skills and knowledge of semantic web, OWL and ontologies will be at an advantage. Candidates should have, or will shortly have, a PhD in an area related to speech technology. However, candidates with comparable research experience are also encouraged to apply.

This is an exciting opportunity to join one of the leading groups in statistical speech and language processing. Cambridge provides excellent research facilities and there are extensive opportunities for collaboration, visits and attending conferences.

Salary Ranges: Research Assistant: £24,289 - £27,318; Research Associate: £28,132 - £36,661

Fixed-term: The funds for this post are available for 24 months in the first instance.

The post is based in Central Cambridge, Cambridge, UK.

Once an offer of employment has been accepted, the successful candidate will be required to undergo a health assessment.

To apply online for this vacancy, please click on the 'Apply' link below. This will route you to the University's Web Recruitment System, where you will need to register an account (if you have not already) and log in before completing the online application form.

Please ensure that you upload your Curriculum Vitae (CV), a statement of research interests and a covering letter in the Upload section of the online application. If you upload any additional documents which have not been requested, we will not be able to consider these as part of your application. Please submit your application by midnight on the closing date.

If you have any questions about this vacancy or the application process, please contact: Elisabeth Barlow, email Elisabeth.Barlow@admin.cam.ac.uk. (Tel +44 01223 765692)

Please quote reference NM03182 on your application and in any correspondence about this vacancy.

The University values diversity and is committed to equality of opportunity.

The University has a responsibility to ensure that all employees are eligible to live and work in the UK

Apply online link:

http://hrsystems.admin.cam.ac.uk/recruit-ui/apply/NM03182

 

Top

6-13(2014-06-05) Thesis grant in Neurophysiological Investigation of prosodic cues.... Univ. Toulouse II -III, F

Subject : « NEUROPROS- Neurophysiological Investigation of prosodic cues processing by monolingual French and Spanish speakers, and bilingual speakers (French-Occitan and French-Spanish) »

Supervisors: Barbara Köpke, Denis Fize, Corine Astésano, Radouane El Yagoubi

Host Laboratories:

U.R.I Octogone-Lordat (EA 4156), Université de Toulouse II

CERCO (UMR 5549), Université Paul Sabatier - Toulouse III

Discipline: Linguistics

Doctoral School: Comportement, Langage, Education, Socialisation, Cognition (CLESCO)

Scientific description of the research project:

The project falls within an eminently interdisciplinary approach (linguistics, cognitive neuropsychology and neurosciences) aiming at studying the processing of prosodic cues by monolingual and bilingual French speakers. French is a language with so-called post-lexical, non-distinctive accentuation, contrary to languages like Spanish, Catalan or Occitan where accentual patterns are represented in the lexical entry. These prosodic characteristics have led French to be considered a ‘language without accent’ (Rossi, 1980), which makes it difficult to integrate this language into models of speech processing (Cutler et al, 1997), since those models are mostly based on the metrical and accentual characteristics of languages (Cutler & Norris, 1988). Also, these prosodic characteristics are said to be responsible for some degree of ‘stress deafness’ by French listeners in foreign languages (Dupoux et al, 1997, inter alia).

However, if one considers the French accentual system in all its complexity, taking into account the interaction between the primary final accent and the secondary initial accent in the marking of prosodic constituents (Di Cristo, 2000), it becomes possible to postulate a role for French accentuation in speech segmentation and lexical access strategies (Bagou & Frauenfelder, 2006). More particularly, the Initial Accent seems to play a predominant role in the marking of prosodic constituents in French (Astésano et al, 2007), and it is clearly perceived by naïve listeners (Astésano et al, 2012). Recent neuroimaging (EEG) studies indicate that metric incongruity slows lexical access in French (Magne et al, 2007). More recently, we showed in a MisMatch Negativity paradigm that French listeners can readily discriminate stress patterns in French and that the Initial Accent is encoded in long-term memory at the level of the lexical word (Aguilera et al, 2014).

It is now necessary to consolidate these results by extending our investigations to other EEG paradigms and by adapting the protocols to fMRI, in order to describe more precisely the neural substrates and the temporal dynamics of prosodic cue processing in French. Furthermore, these processing strategies have been observed in monolingual speakers only. Comparing the linguistic strategies of monolingual and bilingual speakers (French, Spanish and/or Catalan monolinguals; French/Occitan, French/Spanish or French/Catalan bilinguals) will not only allow us to considerably enrich our understanding of lexical access mechanisms in these languages with different prosodic systems, but also to observe the influence of the use of several languages with different stress patterns on the perception and processing of prosodic cues.

The selected candidate will benefit from a stimulating scientific environment: (s)he will join the Interdisciplinary Research Unit Octogone-Lordat (Toulouse II: http://octogone.univ-tlse2.fr/) and will be co-supervised by Prof. Barbara Köpke, a specialist in bilingualism, and by Dr. Denis Fize of the Research Centre on Brain and Cognition (CERCO, Toulouse III), a researcher in neurosciences and a neuroimaging specialist. The research will take place within a research group led by Dr. Corine Astésano, a specialist in prosody, together with Dr. Radouane El Yagoubi, a specialist in cognitive neurosciences and psychology. The project is also connected to the French ANR research project PhonIACog (http://aune.lpl-aix.fr/~phoniacog/) managed by Dr. Corine Astésano.

Bibliography

Aguilera, M.; El Yagoubi, R.; Espesser, R.; Astésano, C. (2014). Event Related Potential investigation of Initial Accent processing in French. Speech Prosody 2014, Dublin, U.K., May 20-23 2014: 383-387.

Astésano, C.; Bard, E.; Turk, A. (2007). Structural influences on Initial Accent placement in French. Language and Speech, 50(3), 423-446.

Astésano, C.; Bertrand, R.; Espesser, R.; Nguyen, N. (2012). Perception des frontières et des proéminences en français. JEP-TALN-RECITAL 2012, Grenoble, 4-8 juin 2012: 353-360.

Bagou, O., & Frauenfelder, U. H. (2006). Stratégie de segmentation prosodique: rôle des proéminences initiales et finales dans l'acquisition d'une langue artificielle. Proceedings of the XXVIèmes Journées d'Etude sur la Parole, 571-574.

Cutler, A., & Norris, D. (1988). The role of strong syllables in segmentation for lexical access. Journal of Experimental Psychology: Human Perception and Performance, 14(1), 113.

Cutler, A., Dahan, D., & Van Donselaar, W. (1997). Prosody in the comprehension of spoken language: A literature review. Language and Speech, 40(2), 141-201.

Di Cristo, A. (2000). Vers une modélisation de l'accentuation du français (seconde partie). Journal of French Language Studies, 10(01), 27-44.

Dupoux, E., Pallier, C., Sebastian, N., & Mehler, J. (1997). A destressing “deafness” in French? Journal of Memory and Language, 36(3), 406-421.

Magne, C.; Astésano, C.; Aramaki, M.; Ystad, S.; Kronland-Martinet, R.; Besson, M. (2007). Influence of Syllabic Lengthening on Semantic Processing in Spoken French: Behavioral and Electrophysiological Evidence. Cerebral Cortex, 17(11), 2659-2668. doi: 10.1093/cercor/bhl174.

Rossi, M. (1980). Le français, langue sans accent? Studia Phonetica Montréal, 15, 13-51.

Required skills:

- Master in Linguistics, cognitive sciences, neuropsychology or equivalent

- Experience in experimental phonetics and/or linguistics, psycholinguistics, neurolinguistics

- Skills in signal processing (speech, EEG, fMRI) required; dedication to developing these skills is essential

- Experimental skills desirable, as well as an aptitude for contact with participants and motivation for recruiting participants

- Autonomy and motivation for learning new skills

- Good knowledge of French and English; knowledge of Spanish, Catalan or Occitan an asset.

Salary:

- 1,684.93 € monthly gross (1,368 € net), 3-year contract

Calendar:

- Sending of applications: 27 June 2014

- Audition of selected candidates: 3 July 2014

- Start of contract: 1 October 2014

Applications must be sent to Corine Astésano (corine.astesano at univ-tlse2.fr) and will include:

- A detailed CV, with a list of publications if applicable

- A copy of grades for the Master’s degree

- A summary of the Master’s dissertation and a pdf file of the Master’s dissertation

- A cover letter / letter of interest and/or scientific project (1 page max.)

- The names and email addresses of 2 referent scientific personalities/supervisors.

Top

6-14(2014-06-012) 2 PhD scholarships at Italian Institute of Technology, Genova, Italy

 


 

1. Acoustic-articulatory modeling for automatic speech recognition

Tutors: Leonardo Badino, Lorenzo Rosasco, Luciano Fadiga

Department: Robotics Brain and Cognitive Sciences (Italian Istitute of Technology), Genova, Italy

http://www.iit.it/rbcs

Description: State-of-the-art Automatic Speech Recognition (ASR) systems produce remarkable results in some scenarios but still lag behind human-level performance in several real usage scenarios, and often perform poorly whenever the type of acoustic noise, the speaker’s accent and the speaking style are 'unknown' to the system, i.e., are not sufficiently covered in the data used to train the ASR system.

The goal of the present theme is to improve ASR accuracy by learning representations of speech that combine the acoustic and the (vocal tract) articulatory domains, as opposed to purely acoustic representations, which only consider the surface level of speech (i.e., speech acoustics) and ignore its causes (the vocal tract movements). Although in real usage settings the vocal tract cannot be observed during recognition, it is still possible to exploit the articulatory representations of speech, where phonetic targets (i.e., the articulatory targets necessary to produce a given sound) are largely invariant (e.g., to speaker variability) and speech phenomena that are complex in the acoustic domain have simple descriptions.

Joint acoustic-articulatory modeling will be applied in two different ASR training settings: a typical supervised machine learning setting, where phonetic transcriptions of the training utterances are provided by human experts, and a weakly supervised setting, where much sparser and less informative labels (e.g., word-level rather than phone-level labels) are available.

Requirements: The successful candidate will have a degree in computer science, bioengineering, physics or related disciplines, and a background in machine learning. An interest in neuroscience is a plus.

Reference: King, S., Frankel, J., Livescu, K., McDermott, E., Richmond, K., Wester, M. (2007). 'Speech production knowledge in automatic speech recognition'. Journal of the Acoustical Society of America, vol. 121(2), pp. 723-742.

Contacts: leonardo.badino@iit.it, lorosasco@mit.edu, luciano.fadiga@iit.it

2. Speech production for automatic speech recognition in human–robot verbal interaction

Tutors: Giorgio Metta, Leonardo Badino, Luciano Fadiga

Department: iCub Facility (Istituto Italiano di Tecnologia), Genova, Italy

http://www.iit.it/iCub

Description: State-of-the-art Automatic Speech Recognition (ASR) systems produce remarkable results in partially controlled scenarios but still lag behind human-level performance in unconstrained real usage situations, and perform poorly whenever the type of acoustic noise, the speaker's accent or the speaking style is 'unknown' to the system, i.e., not sufficiently covered in the data used to train the ASR system. The goal of this PhD theme is to attack the problem of ASR in human-robot conversation. To this aim, we will create a robust Key Phrases Recognition system where commands delivered by the user to the robot (i.e., the key phrases) have to be recognized in unconstrained utterances (i.e., utterances with hesitations, disfluencies, additional out-of-task words, etc.), in the challenging conditions of human-robot verbal interaction, where speech is typically distant (to the robot) and noisy. To increase the robustness of the ASR, articulatory information will be integrated into a Deep Neural Network - Hidden Markov Model system.

This work will be carried out and tested on the iCub platform.
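A DNN-HMM recognizer of the kind mentioned above combines per-frame state scores from a neural network with HMM decoding. As a minimal, self-contained sketch of the decoding side only (the toy two-state model and all numbers below are illustrative, not taken from the project), a Viterbi search over frame-wise state log-likelihoods looks like this:

```python
import numpy as np

def viterbi(log_probs, log_trans, log_init):
    """Most likely HMM state sequence given per-frame log-likelihoods.

    log_probs: (T, N) frame-wise state log-likelihoods (e.g. from a DNN)
    log_trans: (N, N) log transition matrix; log_init: (N,) log initial probs
    """
    T, N = log_probs.shape
    delta = log_init + log_probs[0]          # best score ending in each state
    backptr = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + log_trans  # scores[from, to]
        backptr[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_probs[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):            # trace back the best path
        path.append(int(backptr[t, path[-1]]))
    return path[::-1]

# Toy left-to-right "key phrase" model with two states.
lp = np.log(np.array([[0.9, 0.1], [0.6, 0.4], [0.2, 0.8], [0.1, 0.9]]))
lt = np.log(np.array([[0.7, 0.3], [1e-12, 1.0 - 1e-12]]))
li = np.log(np.array([1.0 - 1e-12, 1e-12]))
print(viterbi(lp, lt, li))  # stays in state 0, then moves to state 1
```

In a real key-phrase system the states would come from key-phrase and filler models, and the frame scores from the trained DNN.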

Requirements: a background in computer science, bioengineering, computer engineering, physics or related disciplines. Solid programming skills in C++ and Matlab; GPU (CUDA) experience is a plus. An aptitude for problem solving. An interest in understanding/learning basic biology.

Reference: Barker, J., Vincent, E., Ma, N., Christensen, H., Green, P., (2013) 'The PASCAL CHiME Speech Separation and Recognition Challenge'. Computer Speech and Language, vol. 27(3), pp. 621-633.

Contacts: leonardo.badino@iit.it, giorgio.metta@iit.it, luciano.fadiga@iit.it

Additional information

Starting date: November 2014.

PhD scholarship: the scholarship will cover all fees with a gross salary of 16500 euros/year (≈1250 euros/month after taxes)

Top

6-15(2014-06-11) Post-doc position at IMMI-CNRS

Post-doc position at IMMI-CNRS

A post-doctoral position is proposed at IMMI-CNRS (Orsay, France - http://www.immi-labs.org/). IMMI is an International Joint Research CNRS Unit (UMI) in the field of Multimedia and Multilingual Document Processing. It gathers three contributing partners: LIMSI-CNRS, RWTH Aachen and KIT (Karlsruhe Institute of Technology).

Context of the project

The project relies on an experimental platform for online monitoring of social media and information streams, with self-adaptive properties, in order to detect, collect, process, categorize, and analyze multilingual streams. The platform includes advanced linguistic analysis, discourse analysis, extraction of entities and terminology, topic detection, and translation; the project also includes studies on unsupervised and cross-lingual adaptation.

Requirements and objectives

A PhD in a field related to the project (translation, natural language processing or machine learning) is required. The candidate will perform research in the framework mentioned above, and will supervise collection and annotation of the data. Salary will follow CNRS standard rules for contractual researchers, according to the experience of the candidate.

Contacts

  • Gilles Adda (adda [at] immi-labs.org)

Agenda

  • Opening date: August 2014
  • Application deadline: Open until filled
  • Duration: 24 months

   

Top

6-16(2014-06-12) Postdoc position on conversation summarization, Univ.Aix-Marseille France
Postdoc position on conversation summarization
(Full time, one year - Closing date for applications 2014-07-01)

We are looking for an outstanding research scientist to join the
'SENSEI' European project (http://www.sensei-conversation.eu/). You
will contribute to conversation analysis and summarization research to
enable the exploitation of the large quantities of comments in social
media and spoken conversations.

Job description:
You will contribute to the design and development of speech and text
summarization technologies for conversational data such as social
media comments and tweets. There will be three components to the
system: linguistic analysis of the conversations, content selection
and aggregation, and generation of the summaries (text or other
media). The approach is expected to make use of recent machine
learning advances such as deep learning, and focus on limiting the
quantity of supervision needed. The prototype will be evaluated by
end-user professionals in ecological conditions.

Profile:
The applicant must hold a PhD degree, preferably in the field of
natural language processing or machine learning. He/she should:
- Be proficient in Java or C++ programming, and in Python or PHP scripting
- Have experience with developing efficient NLP / machine learning systems.
- Be keen on researching the literature and writing papers
- Enjoy team work and be autonomous

Location:
You will work at the LIF computer science lab at Aix-Marseille
University in France, at the Luminy campus next to the
calanques.

Dates:
Interviews will be held in July 2014; the postdoc will start in
September or October 2014 and last one year.

Contact:
Enquiries and applications should be sent to Benoit Favre:
benoit.favre@lif.univ-mrs.fr

SENSEI project page: http://www.sensei-conversation.eu/
Top

6-17(2014-06-18) Two positions at the University of Cambridge, UK
 
Top

6-18(2014-06-21) 3 PhD Positions in Speech Processing at LIG/Grenoble (France)

3 PhD Positions in Speech Processing at LIG/Grenoble (France)

 
The Study Group for Machine Translation and Automated Processing of Languages and Speech (GETALP) of LIG (Laboratory of Informatics of Grenoble) offers 3 PhD Positions in Speech Processing. We are looking for outstanding young research scientists to join the group on several projects involving speech processing.
 
Open Positions
 
  1. PhD / Automatic speech recognition and machine-assisted speech annotation for African Languages
You will work in the context of the ALFFA project, which is interdisciplinary: it gathers not only technology experts (LIG, LIA, VOXYGEN) but also fieldwork linguists/phoneticians (DDL). The PhD will focus on analysing the capabilities of existing automatic speech processing systems to investigate the phonetic characteristics of languages or to annotate speech (especially on mobile devices: tablets, glasses, etc.), so as to provide an innovative digital assistant to the fieldwork linguist.
Start : Fall 2014
Duration : 36 months
Particular aspect : co-supervision with DDL lab in Lyon
Project Web Site : http://alffa.imag.fr
 

  2. PhD / Speech interaction for socio-affective ubiquitous agents and robots in ambient assisted living environments

You will work on a research and development project (CASSIE) involving academic and industrial stakeholders of spoken dialogue, assistive technologies, affective sciences and social robotics. The PhD objective is to design a spoken dialogue system that will interact with a user in her/his home through a ubiquitous (physical and/or virtual) and personalized agent. This dialogue system will be corpus-based, with an iterative machine learning approach hybridized with bootstrapped expert knowledge (observed from 'intelligent' annotations) from spontaneous and ecological data collected in real or quasi-real environments (smart home) and situations (real scenarios). The system will focus on the socio-affective dimensions of the interaction (socio-affective prosody, paralinguistic events, imitation, synchrony, etc.), especially the dynamics (timing) of the dialogue. One aspect of this PhD will also focus on the comparison of the same character implemented in a robot versus a virtual agent for interaction (empathy aspects, etc.).

 

Start : Fall 2014

Duration : 36 months

Contact : Veronique.Auberge@imag.fr & Benjamin.Lecouteux@imag.fr (+Laurent.Besacier@imag.fr)


3. PhD / Context-aware spoken dialogue in ambient assisted living environments

You will work on a research and development project (CASSIE) involving academic and industrial stakeholders of spoken dialogue, assistive technologies and social robotics. The PhD objective is to make a social cyber-physical agent 'aware' of its environment through sensors and/or connected objects. This contextual information will drive the system's interaction (natural language understanding and dialogue). The heart of the research will be to build probabilistic and logical models for multimodal situation analysis and understanding in a domestic and multilingual context. For experimental development and validation, the research will benefit from the fully-equipped LIG smart home (DOMUS).
    Start : Fall 2014
    Duration : 36 months (PhD)
 
Profiles The applicants must hold a Master's degree in Computational Linguistics, Computer Science or Cognitive Sciences, preferably with experience in speech processing and/or natural language processing and/or machine learning. A good background in programming is also required.
He/she will also be involved in experiments with human participants, either French or English speakers; for this reason a good level of English as well as a good command of French is required. Effective communication skills in English, both written and verbal, are mandatory.
 
Location Grenoble is a high-tech city with 4 universities. It is located at the heart of the Alps, in outstanding scientific and natural surroundings. It is 3h by train from Paris, 2h from Geneva, 1h from Lyon, 2h from Torino, and less than 1h from Lyon international airport.
 
Research Group Website : http://getalp.imag.fr 
 
Dates Interviews will be held in July 2014 (until September 2014 if needed). Meetings during Interspeech 2014 in Singapore can also be organized.
Top

6-19(2014-06-22) Two PhD student positions in phonetics or speech science, Saarland University, Saarbrücken, Germany

Two PhD student positions in phonetics or speech science, Saarland
University, Saarbrücken, Germany

Closing date 5 July 2014 (open until filled), positions starting 1
October 2014

http://www.coli.uni-saarland.de/~moebius/page.php?id=jobs

Top

6-20(2014-06-25) RESEARCH FACILITATOR IN SPEECH TECHNOLOGY - CLOUDCAST NETWORK, Univ. Sheffield, UK

RESEARCH FACILITATOR IN SPEECH TECHNOLOGY - CLOUDCAST NETWORK

Applications are invited for a position as Research Facilitator in the Speech and Hearing (SPandH) research group and the Centre for Assistive Technology and Connected Healthcare at Sheffield University to work on CloudCAST, a recently-awarded international network funded by the Leverhulme Trust and coordinated by Professor Phil Green. The vision of CloudCAST is

'.. to provide a way in which rapid developments in machine learning and speech technology can be placed in the hands of professionals who deal with speech problems: therapists, pathologists, teachers, assistive technology experts.. We intend to do this by creating a free-of-charge, remotely-located, internet-based resource 'in the cloud' which will provide a set of software tools for personalised speech recognition, diagnosis, interactive spoken language learning and the like. We will provide interfaces which make the tools easy to use for people who are not speech technology experts and create a self-sustaining CloudCAST community to manage future development.'

CloudCAST involves collaboration with


The Facilitator will be responsible for the software engineering required to build the CloudCAST resource. This involves taking algorithms and data which have been developed for research and knitting them into a form that

  •     is accessible to people who are not experts in speech technology,
  •     has a uniform look-and-feel,
  •     allows for amendments and additions,
  •     encourages others to contribute,
  •     is available over the internet ('resides in the cloud').


The Facilitator may also become involved with pilot research studies using the resource, and will be responsible for organising and participating in an extensive series of visits between the 4 sites involved.

A good degree in Computer Science, Software Engineering, Mathematics or a closely related subject is required for all applicants. An appointment at Grade 7 will require a PhD in speech technology or equivalent industrial experience. Applicants should have knowledge of speech technology and software engineering skills. The supporting documentation gives details. You can view the documentation by clicking on About the Job and About the University located near the top of your screen.

This is a full-time post, available now.

For supporting documentation and details of how to apply, visit

http://www.jobs.ac.uk/job/AIW424/research-facilitator-in-speech-technology-cloudcast-network/

Informal enquiries to Professor Phil Green, p.green@shef.ac.uk

Closing Date: 30th June 2014.

 

Top

6-21(2014-06-27) Research professorship at KU Leuven ESAT/PSI – Audio and/or Speech Processing
Research professorship at KU Leuven ESAT/PSI – Audio and/or Speech Processing

The division ESAT/PSI (Processing Speech & Images, http://www.esat.kuleuven.be/psi) performs 
fundamental and applied research in the broad field of audio-visual information processing. The 
research is multidisciplinary and integrates expertise from engineering, physics, mathematics, 
medicine, linguistics, machine learning and computational science. New methods are developed and 
validated in computer vision, medical imaging, speech and audio processing and other application 
fields. PSI is one of the leading labs in its areas of research. The division is part of the EE 
department (ESAT) of the University of Leuven, the largest and highest-ranked university in 
Belgium. Leuven lies about 25 km east of Brussels and 15 minutes from Brussels airport by train.

To strengthen and widen its research domain, the PSI division is looking for a research 
professor in the area of audio and/or speech processing. The focus is on the interpretation of 
large amounts of these data, possibly in combination with other sensory data (e.g., images). We 
live in an environment where sound is ubiquitous, and the interpretation of speech and other 
sounds is crucial for safety, for communication, and for understanding our environment. In 
many applications the ability of a computer to achieve human-like performance in this respect is 
highly desired and worldwide a lot of research effort is spent to achieve this goal. We want to 
expand our own research lines by hiring a new professor to enlarge the existing group with new 
projects and researchers exploring new ideas and paradigms that advance the state-of-the-art in 
this area.

The candidate must be an internationally recognized researcher, with a strong publication 
record. At the start of the mandate he/she must have at least 3 years of experience in 
scientific research as a postdoc, with hands-on experience in supervising PhD students. 
Experience with successful project grant writing is a definite plus. He/she also needs to 
possess didactic qualities. The position is primarily research-oriented, but the applicants must 
be prepared and are also expected to undertake limited teaching assignments. Applicants should 
be prepared to learn Dutch.

Entering research professors are appointed with a rank depending on their qualifications. Young 
researchers with at least 3 years and less than 7 full years of postdoctoral experience at the 
time of the appointment are typically offered a Tenure Track position, without excluding a 
higher academic position. Advanced researchers with at least 7 years of postdoctoral experience 
at the time of appointment are typically hired as a full professor, without excluding a Tenure 
Track position.

Applications should include a CV (incl. a complete publication list) and an abstract (1-2 pages) 
of a research proposal for the coming five years. They should be submitted by e-mail as soon as 
possible but ultimately before August 31st, 2014 to

Katholieke Universiteit Leuven
Department of Electrical Engineering - ESAT
Center for Processing Speech and Images - PSI
Kasteelpark Arenberg 10 bus 2441
3001 Heverlee, Belgium
E-Mail: patrick dot wambacq at esat dot kuleuven dot be

Applicants may be invited to give a seminar to the staff of the research division ESAT/PSI. 
Subsequently, promising candidates will be asked to participate in the university-wide selection 
procedure for research professorships. Each year the KU Leuven appoints a number of research 
professors. These positions are financed by a university fund called 'BOF' (Bijzonder 
Onderzoeksfonds) that is funded by the Flemish Government.
Top

6-22(2014-06-26) Two 2-year post-doctoral position, Univ. Aix-Marseille, FR

Call for a two-year post-doctoral position

Laboratoire Parole et Langage (UMR 7309 Aix-Marseille Université / CNRS)

Aix-en-Provence, France

Principal investigator: Serge Pinto, Ph.D

 

Dysarthria in Parkinson’s disease: Lusophony vs. Francophony comparison (FraLusoPark)

Parkinson’s disease (PD) is classically characterized by a symptomatic triad that includes rest tremor, akinesia and hypertonia. Although the motor expression of the symptoms involves mainly the limbs, the muscles involved in speech production are also subject to specific dysfunctions. Motor speech disorders, so-called dysarthria, can thus develop in PD patients. The main objective of our project is to evaluate the physiological parameters (acoustics), perceptual markers (intelligibility) and psychosocial impact of dysarthric speech in PD, in the context of language (French vs. Portuguese) modulations. PD patients will be enrolled in the study in Aix-en-Provence, France and Lisbon, Portugal. The proposed position concerns data acquisition and analysis at the French site (Aix-en-Provence).

In order to achieve the goals of this project, one post-doctoral position is offered to a young and dynamic researcher. The candidate, who should have experience in speech science research (acoustics, perception, prosody), will participate in the acquisition and analysis of speech data.

This project benefits from a bilateral ANR/FCT financial support (for the French side: project n° ANR-13-ISH2-0001-01).

 

Interested candidates should contact the principal investigator by sending:

-          a detailed CV

-          a letter of motivation

-          letters of recommendation (optional)

 

Duration of the position: 2 years (full-time)

Monthly salary: 2 000 € net

Application deadline: 2014, September 30th

Starting date: 2014, November 1st

For supplementary information and applications: serge.pinto@lpl-aix.fr



Top

6-23(2014-07-01) Postdoc position in Speech Synthesis, Saarland University, Saarbrücken, Germany

*Postdoc position in Speech Synthesis* (full-time, 2 years
from October 2014, extendable) at Saarland University, Saarbrücken, Germany

Please link to
http://www.coli.uni-saarland.de/~steiner/job_advertisement.pdf

Top

6-24(2014-07-15) Doctoral contract: Phonetic reduction in French, Univ. Paris 3
The LabEx EFL (Empirical Foundations of Linguistics) offers a 3-year doctoral contract.

PHONETIC REDUCTION IN FRENCH

The proposed topic concerns phonetic reduction in continuous speech, intra- and inter-speaker variability, and the links between phonetic reduction, prosody and speech intelligibility.

The doctoral student will carry out his/her research at the LPP (Laboratoire de Phonétique et de Phonologie), a joint CNRS/Université Paris 3 Sorbonne Paris Cité research unit. See the laboratory's work on this topic at http://lpp.in2p3.fr

The selected candidate will be supervised by Martine Adda-Decker and Cécile Fougeron, CNRS research directors, and will be affiliated with the doctoral school ED268 of the Université Sorbonne Nouvelle.

The doctoral student will benefit from the resources of the laboratory, of the doctoral school ED268, and of the interdisciplinary research environment of the LabEx EFL. He/she will be able to attend weekly phonetics and phonology research seminars at the LPP and in other research teams, lectures given by invited professors of international stature, training courses, conferences and summer schools.

  • Conditions

- a good command of the French language (spoken and written);

- successful completion of a first personal research project;

- no nationality requirement.

The candidate should have knowledge and skills in the processing of acoustic and/or articulatory data (ultrasound, video, EGG, ...). Knowledge of computer science and statistical analysis would be a plus.

  • Documents to include in the application
  1. a CV
  2. a cover letter
  3. the Master's thesis
  4. a letter of recommendation
  5. the names of two referees (with their email addresses)

Application deadline: 20 September 2014

  • Preselection on file and auditions of preselected candidates

Preselected candidates will be interviewed at the end of September (between 24 and 30 September), on site or by videoconference.

Contact: 
Martine Adda-Decker, CNRS research director
madda@univ-paris3.fr

Address for applications: 
madda@univ-paris3.fr
ILPGA
19 rue des Bernardins
75005 Paris
Université Paris 3
 

Top

6-25(2014-07-16) POST-DOCTORAL POSITION at LPL - AIX-EN-PROVENCE, FRANCE

POST-DOCTORAL POSITION FOR THE PROJECT PhonIACog (LPL - AIX-EN-PROVENCE, FRANCE)
*****************************************************************************************************

We invite applications for a one-year post-doctoral position at the Laboratoire Parole et Langage (LPL, Aix-Marseille Université, CNRS, UMR 7309, France), to work on the project PhonIACog (The role of the Initial Accent in prosodic structuring in French: from phonology to speech processing; main coordinator: Corine Astésano, Université de Toulouse 2).

•    Description
The PhonIACog project is funded by the French National Research Agency (ANR).

The present project aims at describing the characteristics of the French accentual system in order to bring to light the underlying phonological structure of this language. It addresses the status of the bipolar pattern /IA FA/ (initial accent-final accent), considered as the basic metric pattern in French. We propose to apply the same analyses to different corpora, from laboratory speech to semi-controlled speech and dialogic spontaneous interaction. The production studies will allow us to refine the acoustic-phonetic characterization of IA and FA, with potential application to automatic detection of prosodic cues on large, spontaneous corpora.

More information is available at the project website: http://aune.lpl-aix.fr/~phoniacog/

•    Job description
The post‐doctoral fellow will be mainly involved in data processing. He/she will participate in the acoustic analyses and will then have to implement the statistical analyses planned in the project.

•    Qualifications
A Ph.D. in linguistics (experimental phonetics/prosody) or in computer science, and solid competence/experience in statistics and data analysis, are required. Experience in the processing and analysis of large speech databases is also welcome.

•    Application procedure
Candidates should send a detailed CV with a list of publications, and a cover letter with statement of research interests and details of their experience in data analysis.
Please e-mail documents to: roxane.bertrand@lpl-aix.fr (Roxane Bertrand, Scientific coordinator LPL, Aix-en-Provence, France).

Deadline for submission: September 30, 2014
Expected start date: November 2014 (with some flexibility)
Length of contact: 12 months
Salary: about €2000/month including health care



Top

6-26(2014-08-10) Research and development engineer position, natural language processing, Avignon. Sept 2014.

Research and development engineer position, natural language processing, Avignon. September 2014.

The Laboratoire d'Informatique d'Avignon (Université d'Avignon, lia.univ-avignon.fr) and the Laboratoire d'Informatique Fondamentale de Lille (http://www.lifl.fr/) are seeking to fill a research and development engineer position.

Within the ANR project MaRDi (Man-Robot Dialogue), this is a **12-month fixed-term contract** based in Avignon (with occasional stays in Lille), focused on the development of a spoken dialogue platform. The architecture of the existing solution, based on state-of-the-art statistical approaches, will have to be completely redesigned in order to improve its modularity and to allow its use as a web service, so as to develop online data-collection approaches (crowdsourcing).

Excellent skills in programming and software architecture are expected. More specifically, the desired skills include:
- C++ or Java programming;
- Python and Perl scripting;
- web development: HTML/JavaScript/CSS/XML, web services (REST, ...), application servers (Tomcat, Glassfish, ...);
- natural language processing: general knowledge of spoken interaction systems (speech recognition and understanding, dialogue management, text generation, speech synthesis, ...).

The expected salary is 2400 euros gross per month and may vary according to profile and experience. The position is open to beginners (recent graduates of engineering schools or Master's programs in computer science, or in 'linguistics and computer science' if combined with significant computer science skills) as well as to recent PhDs with good programming abilities.

The profile is strongly development-oriented; however, the expected work is part of the research activities of two very active research groups and lends itself to publications (a post-doc is possible). Depending on the quality of the recruited engineer/PhD, the two laboratories have a steady stream of projects that could allow the contract to be extended beyond the 12 months.

Candidates may send an application (CV, cover letter and any letters of recommendation, in PDF) to fabrice.lefevre-_-à-_-univ-avignon.fr. The desired starting date is **September/October 2014**. The offer remains valid until a candidate is recruited.
==========================================================================

-- 
Fabrice Lefèvre, LIA-CERI-Univ. Avignon
BP 91228, 84911 Avignon Cedex 9, FRANCE
tel 33 (0)4 90 84 35 63/ fax - - - - 01
Top

6-27(2014-08-18) Two postdoc positions in speech processing at the Department of Signal Processing and Acoustics, Aalto University, Finland.

The Department of Signal Processing and Acoustics at Aalto University (formerly the Helsinki University of Technology) is looking for outstanding candidates for two postdoc positions:

 

 

 

Postdoc position in Computational Modeling of Language Acquisition

The speech technology group (led by Prof. Unto Laine) at Aalto University works on computational modeling of language acquisition, perception and production. The overall goal is to understand how spoken language skills can be acquired by humans or machines through communicative interaction and without supervision. The research in our topic involves cross-disciplinary effort across fields such as machine learning, signal processing, speech processing, linguistics, and cognitive science. The research is funded by the Academy of Finland.

We are currently looking for a postdoc to join our research team to work on our research themes, including:

 

  • pattern discovery from speech

  • articulatory modeling and inversion

  • modeling and methods for autonomous acquisition of lexical, phonetic and grammatical structure from speech input

  • multimodal statistical learning (associative learning between multiple input domains such as speech, articulation and vision).

 

Postdoc: 2 years. Starting date: as soon as possible.

 

Send your application, CV and references directly by email to

D.Sc. (Tech.) Okko Räsänen, okko.rasanen at aalto.fi

 

Postdoc position in Speech Synthesis and Voice Source Analysis

The speech communication technology research group (led by Prof. Paavo Alku) at Aalto University works on interdisciplinary topics aiming at describing, explaining and reproducing communication by speech. The main topics of our research are: analysis and parameterization of speech production, statistical parametric speech synthesis, enhancement of speech quality and intelligibility in mobile phones, robust feature extraction in speech and speaker recognition, occupational voice care and brain functions in speech perception.

We are currently looking for a postdoc to join our research team to work on the team’s research themes, particularly in the following topics:



  • statistical speech synthesis

  • voice source analysis

  • speech intelligibility improvement

 

Postdoc: 1-3 years. Starting date: January 2015

 

Send your application, CV and references directly by email to

Prof. Paavo Alku, paavo.alku at aalto.fi

 

All positions require a relevant doctoral degree in CS or EE, skills for doing excellent research in a group, and outstanding research experience in any of the research themes mentioned above. The candidate is expected to perform high-quality research and assist in supervising PhD students. Please send your application email with the subject line “Aalto post-doc recruitment, autumn 2014”.

In Helsinki you will join the innovative international computational data analysis and ICT community. Among European cities, Helsinki is special in being clean, safe, liberal, Scandinavian, and close to nature, in short, having a high standard of living. English is spoken everywhere. See, e.g., http://www.visitfinland.com/

 

 

Top

6-28(2014-08-15) 2 (W/M) researchers positions at IRCAM for Large-Scale Audio Indexing
Positions: 2 (W/M) researchers positions at IRCAM for Large-Scale Audio Indexing
Starting:     September 1st, 2014
Duration:     12 months
Deadline for application:   As Soon As Possible

 

 

The BeeMusic project aims at providing the description of music for large-scale collections (several millions of music titles). In this project IRCAM is in charge of the development of music content description technologies (automatic genre or mood recognition, audio fingerprint …) for large-scale music collections.

 

Position description 201406BMRESA:

 

For this project, IRCAM is looking for a researcher to develop automatic genre and mood recognition technologies.

 

The hired researcher will be in charge of the research and development of scalable supervised-learning technologies (i.e., scaling GMM, PCA or SVM algorithms) applicable to millions of annotated examples.
He/she will then be in charge of training large-scale music genre and music mood models with the developed technologies and of applying them to large-scale music catalogues.
 
Required profile:
* Strong skills in audio indexing and data mining (the candidate must hold a PhD in one of these fields)
* Previous experience with scalable machine-learning models
* Strong Matlab programming skills, plus C/C++ programming skills
* Skills in audio signal processing (spectral analysis, audio-feature extraction, parameter estimation)
* Good knowledge of Linux, Windows and MacOS environments
* High productivity, methodical work, excellent programming style.

 

The hired researcher will also collaborate with the development team and participate in the project activities (evaluation of technologies, meetings, specifications, reports).

 

Position description 201406BMRESB:

 

For this project, IRCAM is looking for a researcher to develop audio fingerprint technologies.

 

The hired researcher will be in charge of the research and development of audio fingerprint technologies that are robust to audio degradations (sound captured through mobile phones in noisy environments), and of fingerprint search algorithms for large-scale databases (millions of music titles).
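To make the fingerprint-search task concrete, here is a minimal, hypothetical sketch (not IRCAM's actual system; all names and hash values are invented) of an inverted-index lookup: each track is reduced to a set of hashes (e.g. derived from spectral-peak pairs), and a query is matched by voting over the index.

```python
from collections import defaultdict

class FingerprintIndex:
    """Inverted index mapping fingerprint hashes to track ids."""

    def __init__(self):
        self.index = defaultdict(set)  # hash -> set of track ids

    def add_track(self, track_id, hashes):
        # Index every hash extracted from the track.
        for h in hashes:
            self.index[h].add(track_id)

    def query(self, hashes):
        """Rank candidate tracks by the number of matching hashes (voting)."""
        votes = defaultdict(int)
        for h in hashes:
            for track_id in self.index.get(h, ()):
                votes[track_id] += 1
        return sorted(votes.items(), key=lambda kv: -kv[1])

idx = FingerprintIndex()
idx.add_track("title_A", {101, 205, 317, 412})
idx.add_track("title_B", {205, 999})
# A noisy query loses some of title_A's hashes and picks up a spurious one,
# but voting still ranks title_A first.
ranking = idx.query({101, 317, 999})
```

A production system would of course add sub-linear candidate retrieval and temporal-alignment verification, which is where the distributed-computing skills listed below come in.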
 
Required profile:
* Strong skills in audio signal processing and audio fingerprint design (the candidate must hold a PhD in one of these fields)
* Strong skills in indexing technologies and distributed computing (hash tables, Hadoop, SOLR)
* Strong Matlab programming skills, plus Python and Java programming skills
* Good knowledge of Linux, Windows and MacOS environments
* High productivity, methodical work, excellent programming style.

 

The hired researcher will also collaborate with the development team and participate in the project activities (evaluation of technologies, meetings, specifications, reports).

 

Introduction to IRCAM:
 
IRCAM is a leading non-profit organization associated with the Centre Pompidou, dedicated to music production, R&D and education in sound and music technologies. It hosts composers, researchers and students from many countries cooperating in contemporary music production and in scientific and applied research. The main topics addressed in its R&D department include acoustics, audio signal processing, computer music, interaction technologies and musicology. IRCAM is located in the centre of Paris near the Centre Pompidou, at 1, Place Igor Stravinsky, 75004 Paris.

 

 

Salary:
According to background and experience

 

 

Applications:
Please send an application letter with the reference 201406BMRESA or 201406BMRESB, together with your resume and any relevant information addressing the above requirements, preferably by email to: peeters_at_ircam dot fr with cc to vinet_at_ircam dot fr, roebel_at_ircam dot fr

 

Top

6-29(2014-08-18) Two natural-language dialogue positions at Orange Labs, Lannion

2 openings in the NADIA natural-language dialogue team at Orange Labs in Lannion.

 

- Permanent position (CDI): research engineer, data scientist & statistical analysis (f/m): http://orange.jobs/jobs/offer.do?joid=40611&lang=fr&wmode=light

 

- Fixed-term post-doc (CDD): study of voice and touch dialogue with connected objects in the home environment, from a wearable device: http://orange.jobs/jobs/offer.do?joid=40954&lang=fr&wmode=light

 

Positions are open to non-French speakers, but no English description is available.

Top

6-30(2014-08-25) Postdoctoral positions at the LIMSI-CNRS lab, Orsay, France

Postdoctoral positions are available at the LIMSI-CNRS lab. The
positions are all one year, with possibilities of extension.  We are
seeking researchers in machine learning and natural language processing.

Topics of interest include, but are not limited to:
- Speech translation
- Bayesian models for natural language processing
- Multilingual topic models
- Word Sense Disambiguation
- Statistical Language Modeling

Candidates must possess a Ph.D. in machine learning or natural
language/speech processing. Please send your CV and/or questions to
Alexandre Allauzen (allauzen@limsi.fr) and François Yvon
(yvon@limsi.fr).

Duration: 12 months, starting Fall or Winter 2014, with a possibility
to extend for an additional 12 months.

Application deadline: Open until filled

The successful candidates will join a dynamic research team working on
various aspects of Statistical Machine Translation and Speech
Processing. For information regarding our activities, see
http://www.limsi.fr/Scientifique/tlp/mt/

About the LIMSI-CNRS:
The LIMSI-CNRS lab is situated in Orsay, a green area 25 km south of Paris. A suburban train connects Orsay to the Paris city center. Detailed information about the LIMSI lab can be found at http://www.limsi.fr

Top

6-31(2014-08-28) Permanent (CDI) research engineer position at Orange Labs, Rennes, France

Announcement of a permanent (CDI) research engineer position at Orange Labs in Rennes:

http://orange.jobs/jobs/offer.do?joid=40644&lang=fr

Il s’agit d’un poste de Data Scientist en analyse de données non
structurées : textes ou contenus multimédia de diverses natures (livre,
presse, tweets, photos, podcasts audio et vidéo...).

Top

6-32(2014-09-09) Doctoral contract offered by the Laboratoire d'Excellence 'Empirical Foundations of Linguistics' (LabEx EFL)

*** deadline extended to October 5, 2014 ***

The LabEx EFL (Empirical Foundations of Linguistics) offers a 3-year doctoral contract.

PHONETIC REDUCTION IN FRENCH

The proposed topic concerns phonetic reduction in continuous speech, intra- and inter-speaker variability, and the links between phonetic reduction, prosody and speech intelligibility.

The doctoral student will carry out his/her research at the LPP (Laboratoire de Phonétique et de Phonologie), a joint CNRS/Université Paris 3 Sorbonne Paris Cité research unit. See the laboratory's work on this topic at http://lpp.in2p3.fr

The selected candidate will be supervised by Martine Adda-Decker and Cécile Fougeron, CNRS research directors, and will be enrolled in the doctoral school ED268 of the Université Sorbonne Nouvelle.

The doctoral student will benefit from the laboratory's resources, from the doctoral school ED268, and from the interdisciplinary research environment of the LabEx EFL. He/she will be able to attend the weekly phonetics and phonology research seminars of the LPP and of other research teams, lectures by invited professors of international stature, training courses, conferences and summer schools.

  • Conditions

- a good command of French (spoken and written);

- successful completion of a first personal research project;

- no nationality requirement.

The candidate should have knowledge and skills in the processing of acoustic and/or articulatory data (ultrasound, video, EGG…). Knowledge of computer science and statistical analysis would be a plus.

  • Application materials
  1. a CV
  2. a cover letter
  3. the Master 2 thesis
  4. a letter of recommendation
  5. the names of two referees (with their email addresses)

Application deadline: September 20, 2014, extended to **OCTOBER 5, 2014**

  • Preselection on file and auditions of preselected candidates

Preselected candidates will be interviewed in mid-October, on site or by videoconference.

Contact:

Martine Adda-Decker, CNRS research director

madda@univ-paris3.fr

Application address:

madda@univ-paris3.fr

ILPGA

19 rue des Bernardins

75005 Paris

Université Paris 3

LabEx EFL: http://www.labex-efl.org/

Reference: http://www.labex-efl.org/?q=fr/node/261

 

 

Top

6-33(2014-09-17) Post-doctoral positions at LIMSI-CNRS, Paris Saclay (Paris Sud)
Post-doctoral positions at LIMSI-CNRS, Paris-Saclay (Paris-Sud) University.

LIMSI is a multi-disciplinary research unit that addresses the automatic
processing of human language for a range of tasks.

LIMSI invites applications for a one-year postdoctoral position in
Natural Language Processing. The topic is as follows:

Dialogue management in a human-machine dialogue system where the
system plays the role of a patient during a medical consultation with
a doctor.

CONTEXT

The postdoctoral fellow will contribute to the following
project:

Patient Genesys (http://www.patient-genesys.com/): in the
framework of continuing medical education, the goal of the project is to
design and develop a framework for virtual patient-doctor consultations.
This is a collaborative project involving a hospital and small and
medium-sized enterprises.

JOB REQUIREMENTS

- Ph.D. in Computer Science, Natural Language Processing, Computational
  Linguistics
- Solid programming skills
- Strong publication record
- A good command of French is a plus
- Knowledge of medical terminologies and ontologies is a plus

ADDITIONAL INFORMATION

Net salary: between 2,000 and 2,400 € per month, according to experience

Benefits: LIMSI offers a generous benefits package including health
insurance and 44 days of vacation per year.

Duration: 12 months, renewable depending on performance and funding
availability

Start date: 1st October 2014

Location: Orsay, greater Paris area, France

TO APPLY

Please send:
* a cover letter
* a curriculum vitae, including a list of publications
* the names and contact information of at least two referees
to:
     Sophie Rosset (rosset@limsi.fr)
     Anne-Laure (annlor@limsi.fr)
     Pierre Zweigenbaum (pz@limsi.fr)

Application deadline:  October 15th, 2014
Applications will be examined in the following week.

ABOUT LIMSI-CNRS

LIMSI is a laboratory of the French National Center for Research (CNRS),
a leading research institution in Europe.

LIMSI is a multi-disciplinary research unit that covers a number of
fields from thermodynamics to cognition, encompassing fluid mechanics,
energetics, acoustic and voice synthesis, spoken language and text
processing, vision, visualisation and perception, virtual and augmented
reality.

LIMSI hosts about 200 researchers, professors, research support staff
and graduate students. It is located in a green area about 30 minutes
south of Paris.
Top

6-34(2014-09-18) Post-doc position at LORIA (Nancy, France)

 

Post-doc position at LORIA (Nancy, France)

Automatic speech recognition: contextualisation of the language model by dynamic adjustment

Framework of ANR project ContNomina

The technologies involved in information retrieval from large audio/video databases are often based on the analysis of large, but closed, corpora, and on machine learning techniques and statistical modeling of written and spoken language. The effectiveness of these approaches is now widely acknowledged, but they nevertheless have major flaws, particularly with regard to proper names, which are crucial for the interpretation of the content.

In the context of diachronic data (data which change over time), new proper names appear constantly, requiring dynamic updates of the lexicons and language models used by the speech recognition system.

As a result, the ANR project ContNomina (2013-2017) focuses on the problem of proper names in automatic audio processing systems, exploiting the context of the processed documents as efficiently as possible. To do this, the post-doc will address the contextualization of the recognition module through dynamic adjustment of the language model in order to make it more accurate.

Post-doc subject

The language model of the recognition system (an n-gram model learned from a large text corpus) is available. The problem is to estimate the probability of a new proper name depending on its context. Several avenues will be explored: adapting the language model, using a class-based model, or studying the notion of analogy.

Our team has developed a fully automatic speech recognition system that transcribes a radio broadcast from the corresponding audio file. The post-doc will develop a new module whose function is to integrate new proper names into the language model.
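As a rough illustration of the class-based avenue mentioned in the subject above (a hypothetical sketch, not the project's code; all words and probabilities are invented), a class-based n-gram lets a new proper name inherit probability mass from its class without retraining the n-gram model, via P(w | h) = P(class(w) | h) · P(w | class(w)):

```python
class ClassBigramLM:
    """Toy class-based bigram: P(w | prev) = P(class | prev) * P(w | class)."""

    def __init__(self, class_given_history, word_given_class):
        self.class_given_history = class_given_history  # (prev_word, class) -> prob
        self.word_given_class = word_given_class        # class -> {word: prob}

    def add_proper_name(self, name, cls, weight=1.0):
        # A new name only needs a class assignment and a renormalization of
        # P(word | class); the n-gram probabilities themselves are untouched.
        members = self.word_given_class[cls]
        members[name] = weight
        total = sum(members.values())
        for w in members:
            members[w] /= total

    def prob(self, word, prev, cls):
        return (self.class_given_history.get((prev, cls), 0.0)
                * self.word_given_class[cls].get(word, 0.0))

lm = ClassBigramLM(
    class_given_history={("president", "PERSON"): 0.4},
    word_given_class={"PERSON": {"Chirac": 1.0}},
)
lm.add_proper_name("Hollande", "PERSON")
p = lm.prob("Hollande", "president", "PERSON")  # 0.4 * 0.5
```

Estimating the in-class distribution from the document context (rather than uniformly, as here) is precisely the kind of question the post-doc would investigate.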

Required skills

A PhD in NLP (Natural Language Processing), familiarity with automatic speech recognition tools, a background in statistics, and programming skills (C and Perl).

Post-doc duration

12 months, starting during 2014 (there is some flexibility)

Localization and contacts

Loria laboratory, Speech team, Nancy, France

Irina.illina@loria.fr, dominique.fohr@loria.fr

Candidates should email a letter of application, a detailed CV with a list of publications, and a copy of their diploma.



Top

6-35(2014-10-02) PhD position in Multimedia indexing at Eurecom, Sophia Antipolis, France

PhD position in Multimedia indexing at Eurecom, Sophia Antipolis, France

http://www.eurecom.fr/en/content/multimedia-indexing

DESCRIPTION

This thesis is part of a collaborative project to analyze the multimedia information
that is published on the Internet about cultural festivals, either by professionals
or by the public. These data are diverse (text, images, videos) and published on various
sources (twitter, blogs, forums, catalogs). The project aims at analyzing and structuring
this information in order to better understand the public and its cultural practices, and
at recombining it to build synthetic views of these collections. This thesis will focus
on the aspects of video analysis and multimodal fusion.

This thesis will tackle two problems:
- developing techniques for the automatic analysis of video content, so that the collected
data may be categorized into predefined categories. This component will be used to structure
the collections and better understand their content and evolution.
- studying mechanisms to recombine the multimedia content and build synthetic views of the
collections. Several strategies will be studied, depending on whether these views are intended
for professional users or for the general public.

The research will focus in particular on mechanisms to automatically construct semantic classifiers,
and on techniques for fusing the results of these classifiers. The recombination aspects will
involve methods for selecting important segments, followed by an assembly strategy according
to the intended objective. Specific attention will be paid to evaluation techniques that make
it possible to measure the performance of the different approaches.
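As one possible starting point for the fusion step described above (an illustrative sketch only; the labels, modalities and weights are invented, and real work would study learned fusion as well), per-modality classifier scores can be combined with a weighted late-fusion baseline:

```python
def late_fusion(scores_by_modality, weights):
    """Combine per-modality classifier scores with a weighted sum.

    scores_by_modality: {modality: {label: score}}
    weights: {modality: weight}, e.g. tuned on validation data.
    Returns a fused {label: score} dictionary.
    """
    fused = {}
    for modality, scores in scores_by_modality.items():
        w = weights[modality]
        for label, score in scores.items():
            fused[label] = fused.get(label, 0.0) + w * score
    return fused

fused = late_fusion(
    {"video": {"concert": 0.7, "crowd": 0.3},
     "text":  {"concert": 0.4, "crowd": 0.6}},
    weights={"video": 0.6, "text": 0.4},
)
best = max(fused, key=fused.get)  # highest fused score wins
```

Evaluating such a baseline against trained fusion models is exactly the kind of comparison the evaluation work mentioned above would cover.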

APPLICATION

PhD applicants are expected to have a Master's degree with honors. This research
requires strong knowledge of signal processing and machine learning, as well as
good experience programming in Matlab and C/C++.
English is mandatory; French is a plus but not required.

Candidates are invited to submit a resume, transcripts of grades for at least the last two years,
3 references, and a statement of motivation for the position.
Applications should be sent to the Eurecom secretariat, secretariat@eurecom.fr, with the subject
MM_BM_PhD_REEDIT_Sept2014

EURECOM

EURECOM is a leading teaching and research institution in the fields of information and communication
technologies (ICT). EURECOM is organized as a consortium combining 7 European universities
and 9 international industrial partners, with the Institut Mines-Telecom as a founding partner.
Our 3 fundamental missions are:
- Research: focused on Networking and Security, Multimedia Communications and Mobile Communications.
- High level education: with graduate and postgraduate courses in communication systems, plus
three Master of Sciences programs entirely dedicated to foreign students.
- Doctoral program: in cooperation with several doctoral schools, supported by our research collaborations
with various partners, both industrial and academic, and funded by various sources, including national
and European programs.

Top

6-36(2014-10-05) Experienced researcher in the field of Expressive and/or Multimodal Text-to-Speech Synthesis, Greece

Experienced researcher in the field of Expressive and/or Multimodal Text-to-Speech Synthesis

As part of its commitment to continuously reinforce its excellence and strengthen its capacities in language technologies, the Institute of Language and Speech Processing (ILSP – http://www.ilsp.gr/en) of the 'Athena' Research and Innovation Center (http://www.athena-innovation.gr/en.html) announces a position for experienced researchers in the area of:

Expressive and/or Multimodal Text-to-Speech Synthesis

The position is opened in the context of the LangTERRA project (www.langterra.eu) which is co-funded by the Seventh Framework Programme of the European Union (Grant Agreement No. 285924 FP7-REGPOT-2011-1). LangTERRA represents ILSP’s strong commitment to continuously reinforcing its excellence and strengthening its capacities and potential to excel at a European level in language technologies and related fields.

The candidates are expected to have a strong background and experience in one or more of the indicative topics in the list below:

  • Speech synthesis, analysis and feature extraction;

  • Capturing, analysing, modelling and synthesizing natural affective communication patterns, with emphasis on reproducing them in the context of speech synthesis;

  • Applications that involve handling real affective speech, such as dialog systems

The selected experienced researchers will strengthen ILSP's research capacity by working as part of the respective ILSP team and bringing additional experience and know-how in this field. They are also expected to engage in networking with prominent research organizations throughout Europe, developing new collaborations and initiating proposals for funded research.

The required qualifications are:

  • a PhD in a field closely related to the position;

  • at least four years of full-time-equivalent recent research experience with in-depth involvement in R&D projects;

  • a strong academic profile with high-quality publications in international conferences and journals; and

  • full professional proficiency in English.

The recruited experienced researchers will be offered a contract with ILSP. The duration of the contract will be 6 months and may be extended depending on the availability of the necessary resources. The indicative monthly rate for the position is 4,000 Euros, depending on the status, qualifications and experience of the selected candidate.

The submitted applications must include:

  • A cover letter describing the special qualities of the applicant and her/his reasons for applying for the position;

  • A full curriculum vitae (CV), including a list of publications;

  • An electronic copy of the PhD degree; and

  • At least two recommendation letters.

The applicants should be available for a Skype interview. ILSP may contact the persons providing the recommendation letters. Upon request, the applicant should also be able to provide copies of relevant diplomas and transcripts of academic records. The diplomas should be in English or Greek; otherwise, a formal translation into one of these languages should be provided.

The closing date for applications is October 24th, 2014; the position will remain open until filled.

 

Submission:

The applications must be submitted electronically through email to both addresses below.

To: spy@ilsp.gr; vpana@ilsp.gr

The subject of the email should indicate: 'Application for LangTERRA – Expressive and/or Multimodal Text-to-Speech Synthesis'

 

Enquiries:

All enquiries related to the positions should be addressed to:

Dr. Spyros RAPTIS,
Email: spy@ilsp.gr,
Tel. +30 210 6875 -407, -300

 

More information is available at: http://www.ilsp.gr/el/news/75-jobs2/261-text2speech

 

Top

6-37(2014-10-06) Two positions at M*Modal, Pittsburgh, PA, USA

Speech Recognition Researcher

Location:  Pittsburgh, PA

Available: Immediately

 

About M*Modal:

M*Modal is a fast-moving speech technology and natural language understanding company, focused on making health care technology work better for the physicians and hospitals who depend on it every day. From speech-enabled interfaces and imaging solutions to computer-assisted physician documentation and natural language analytics, M*Modal is changing what technology can do in healthcare.

 

Position Summary:

The prospective candidate should have a working knowledge of both speech recognition and computer algorithms, and be able to learn about new technologies from original publications.
As a member of our Speech R&D team, composed of software engineers and speech recognition researchers, you will help define the algorithms and architecture used in our next generation of products.

 

Essential Functions:

The incumbent will be expected to use his/her programming skills to work with our team on research, technology development, and applications in speech recognition.

 

We offer work with:

  • Systems that make a real difference in the lives of patients and doctors

  • Language models, acoustic models, front-ends, decoders, etc., based on our own state-of-the-art modular recognition toolkit

  • Training data per speaker varying from seconds to hundreds of hours – both transcribed and unsupervised.

 

Qualifications:

  • MS or PhD in Electrical Engineering, Computer Science or related field

  • Top-level awareness of all aspects of large vocabulary speech recognition

  • In-depth understanding of several components of state-of-the-art speech recognition

  • C++, Java, Perl, or Python programming

  • Experience with both Linux and Windows

  • Experience participating in large software development projects.

  • Experience with large-scale LVCSR projects or evaluations is a definite plus

  • Enthusiasm to work directly on a real production system

 

 

If you want to be part of a thriving, innovative organization that fosters great talent, please submit your resume and salary requirements by email to anthony.bucci@mmodal.com.

 

 

 

Language Developer

Location:  Pittsburgh, PA

Available:  Immediately

 

 

 

About Us:

M*Modal is a fast-moving speech technology and natural language understanding company, focused on making health care technology work better for the physicians and hospitals who depend on it every day. From speech-enabled interfaces and imaging solutions to computer-assisted physician documentation and natural language analytics, M*Modal is changing what technology can do in healthcare.

 

Position Summary:

The prospective candidate would help to improve our speech recognition systems based on our own state-of-the-art recognizer toolkit. This involves processing text for building language models for different applications, experimenting with different types of language models, training locale-optimized acoustic models, improving pronunciation dictionaries, and testing, tuning and troubleshooting speech recognition systems. As we work with very large amounts of data, data processing is run across a Linux cluster.

 

Qualifications:

  • BS or MS degree in Computer Science or related field

  • Background in related specialty, such as linguistics, machine learning or statistics

  • Programming skills, such as Java or Python

  • Working knowledge of Linux and Windows environments

  • Working knowledge of regular expressions

  • Prior experience with automatic speech recognition systems is of course a plus

 

 

If you want to be part of a thriving, innovative organization that fosters great talent, please submit your resume and salary requirements by email to anthony.bucci@mmodal.com.

 

Top

6-38(2014-10-08) NLP computer scientist at Inserm (CépiDc), Val-de-Marne, France

The CépiDc is located at the Kremlin-Bicêtre hospital (Val-de-Marne). Its main missions are to produce the national statistics on causes of death, to disseminate these data, to assist their users, and to conduct research based on them.

The CépiDc is the WHO Collaborating Centre for the Family of International Classifications (FIC) in French.

The Inserm Centre for epidemiology on the medical causes of death (CépiDc) is recruiting:

A computer scientist in natural language processing (NLP)

Job description

Context:

The production of statistics on the medical causes of death is based on the receipt of nearly 550,000 death certificates per year, of which about 6% are transmitted electronically (via www.certdc.inserm.fr). This proportion is expected to increase significantly in the near future.

Paper and electronic certificates share the same structured format, in line with the model recommended by the WHO. Although the structure of the certificate encourages physicians to separate nosological entities (diseases, morbid conditions or injuries), the text is written relatively freely and in most cases requires automatic standardization. Standardization aims to properly separate the nosological entities, to reconstruct their causal order, and to correct spelling mistakes. After standardization, a code from the International Classification of Diseases (ICD) is assigned to each nosological entity using an index (currently about 160,000 entries).

While the text of paper certificates is manually keyed in and standardized by a company outside the department, the text of electronic certificates only undergoes simple syntactic rules, which makes substantial manual processing of the text necessary before it can be exploited by Iris.

Missions

As part of the production of the database of medical causes of death, the recruit's main missions will be:

- monitoring the quality of death-certificate data entry,

- automating the processing of the medical text in order to speed it up and improve its quality,

- contributing to health alerts.

Activities

- Monitor the outsourced death-certificate data-entry contract,

- Develop rules for the automatic processing of the medical text with the tools existing in the department,

- List the necessary modifications not handled by the language-processing rules offered by the existing tools,

- Participate in a review of existing natural language processing methods that could be mobilized to handle these modifications,

- Implement and test various natural language processing methods, maximizing the proportion of standardized text while minimizing the proportion of errors introduced by the processing,

- Update the list of expressions present in the index in order to minimize its size, facilitate its maintenance and make it possible to share it with other French-speaking countries.

Specificity of the position

- The data processed by the CépiDc are medical in nature and strictly confidential.

Candidate profile

Knowledge:

- Natural language processing (NLP) methods: formal grammars, formal syntax, automatic parsing,

- Programming languages (C, Perl, Python…) and database management (SQL),

- Ability to read scientific English.

Know-how:

- Developing and adapting NLP methods to a new problem,

- Evaluating the performance obtained by these methods,

- Writing methodological documentation (reports, articles),

- Managing relations with an external service provider.

Aptitudes:

- Ability to formalize text-processing problems,

- Ability to work in a team with a variety of actors (physicians, nosologists, statisticians, epidemiologists),

- Rigor,

- Initiative.

Contract offered

Fixed-term contract: full-time, 12 months, renewable

Salary: between €2,031 and €2,465 gross, depending on experience and level of education, with reference to the Inserm salary scales

Starting date: 01/12/2014

Education

Bachelor's to Master's level (BAC+3/5) in computational linguistics, with a specialization in natural language processing (Licence, Master, engineering school…).

Desired professional experience:

Beginners accepted

To apply, please send a CV and cover letter to: Grégoire Rey

Director of the Inserm CépiDc

gregoire.rey@inserm.fr

Tel: 01 49 59 18 63


6-39(2014-10-11) Disney Research - Open positions for postdoc candidates and internships for PhD students, Pittsburgh, PA, USA

Disney Research - Open positions for postdoc candidates and internships for PhD students

 

Disney Research, Pittsburgh, is announcing several positions for outstanding postdoctoral candidates and internships for PhD students in areas related to speech technology, multimodal conversational systems, interactive robotics, child-robot interaction, human motion modelling, tele-presence, and wireless computing. Candidates should have experience in building interactive systems and the ability to build robust demonstrations.

 

Positions are available immediately, with flexible starting dates before the beginning of 2015. A detailed description of the positions and about Disney Research more generally is given below. Interested candidates should send an email with an up-to-date CV and any questions to drpjobs-sp@disneyresearch.com. Please make sure to use subject line: DRP-SP-2014.

 

Postdoctoral positions:

Postdoctoral positions are for 2 years. Candidates should have an outstanding research record, have published in top-tier journals and international conferences, and have shown impact on the research in their field. Candidates must have excellent command of English and a strong collaborative and team-oriented attitude. Postdoctoral positions are for one or more of the following areas:

 

- Multimodal spoken dialogue systems
- Adult- and Child-Robot Interaction
- Sensor fusion and multimodal signal processing
- Embodied conversational agents and language-based character interaction

 

All candidates should have excellent programming skills in scripting languages and in one or more object-oriented programming language. Preferred candidates will also have strong applied machine learning skills, and experience in data collection and experiment design with human subjects.

 

Internships for PhD students:

A number of internships are available for international PhD students in one of the following areas. The positions are full-time for 4-6 months and are available immediately. Candidates should be enrolled in a PhD program in Computer Science, Electrical Engineering, or a related discipline. Applicants must have at least one publication in a top-tier conference, have excellent written and oral communication skills, be enthusiastic and self-motivated, and enjoy collaborative teamwork.

We have opportunities for internships in a variety of fields including:

 

- Nonverbal signal analysis and synthesis for human-like animated characters
- Telepresence and Tele-Communication in human-humanoid interaction
- Speech recognition applications for children
- Multimodal and incremental dialogue systems
- Kinematics, biomechanics, human motion modelling, and animatronics

 

Disney Research Labs (Pittsburgh, Boston, LA, and Zurich) provide a research foundation for the many units within the Walt Disney Company, including Walt Disney Feature Animation, Walt Disney Imagineering, Parks & Resorts, Walt Disney Studios Motion Pictures, Disney Interactive Media Group, ESPN, and Pixar Animation Studios.

 

Disney Research combines the best of academia and industry: we work on a broad range of commercially important challenges, we view publication as a principal mechanism for quality control, we encourage and help with the engagement with the global research community, and our research has applications that are experienced by millions of people.

 

Disney Research Pittsburgh is made up of a group of world-leading researchers working on a wide range of interactive technologies. The lab is co-located with Carnegie Mellon University under the direction of Prof. Jessica Hodgins. Members of DRP are encouraged to interact with the established research community at CMU and with the business units in Los Angeles and Florida. As an active member of the research community, we support and assist with publications at top venues.

 

Disney Research provides very competitive compensation, benefits, and relocation assistance.

 

The Walt Disney Company is an Affirmative Action / Equal Opportunity Employer and encourages applications from members of under-represented groups.

 

http://www.disneyresearch.com


6-40(2014-10-11) 1 PhD/Postdoctoral position, Saarland University, Saarbruecken, Germany

1 PhD/Postdoctoral position in DFG-funded CRC “Information Density and Linguistic Encoding” (SFB 1102), Saarland University, Saarbruecken, Germany

Deadline for Applications:*Oct 31, 2014*

The DFG-funded CRC “Information Density and Linguistic Encoding” is pleased to invite applications for a PhD/postdoctoral position within the project 'B1: Information Density and Scientific Literacy in English', with a start date as soon as possible. If the position is not filled by Oct 31, later applications will be considered until it is filled.

SFB1102, B1.A: Postdoctoral researcher or PhD student in computational linguistics or computer science

A central methodological aspect of the project is to train and apply language models, as well as data mining and other machine learning techniques, to investigate the diachronic linguistic development of English scientific writing (from the 17th century to the present). The successful candidate will work on adapting, modifying, and extending standard techniques from language modeling and machine learning to incorporate linguistically motivated and interpretable features.
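As a rough illustration of the kind of language-model technique such a project might apply, the toy sketch below trains an add-one-smoothed bigram model and computes per-word surprisal, a standard information-density measure. All names and the example are illustrative assumptions, not the project's actual models:

```python
import math
from collections import Counter

def train_bigram(tokens):
    """Count unigrams and bigrams for an add-one-smoothed bigram model."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    vocab = len(unigrams)  # vocabulary size used for smoothing
    return unigrams, bigrams, vocab

def surprisal(unigrams, bigrams, vocab, prev, word):
    """Surprisal in bits, -log2 P(word | prev), with add-one smoothing."""
    p = (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab)
    return -math.log2(p)
```

Comparing average surprisal of models trained on texts from different centuries is one simple way to quantify diachronic change in how densely scientific writing encodes information.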

Requirements: The successful candidate should have a PhD/Master's in Computer Science, Computational Linguistics, or a related discipline, with a strong background in language modeling and machine learning (especially data mining). Good programming skills and knowledge of linguistics are strong assets. Previous collaborative work with linguists is desirable. A good command of English is mandatory. Working knowledge of German is desirable.

The project is headed by Prof. Dr. Elke Teich, Dr. Noam Ordan and Dr. Hannah Kermes (http://fr46.uni-saarland.de/index.php?id=teich) and carried out in close collaboration with Prof. Dr. Dietrich Klakow:
http://www.lsv.uni-saarland.de/klakow.htm

For further information on the project, see
http://www.sfb1102.uni-saarland.de/b1.php

The appointments will be made on the German TV-L E13 scale (65% for a PhD student, 100% for a postdoctoral researcher; see also http://www.sfb1102.uni-saarland.de/jobs.php). Support for travel to conferences is also available. *Priority will be given to applications received by October 31, 2014*. Any inquiries concerning the post should be directed to the e-mail address below.
Complete applications  quoting “SFB1102, B1.A” in the subject line should include (1) a statement of research interests motivating why you are applying for this position, (2) a full CV with publications, (3) scans of transcripts and academic degree certificates, and (4) the names and e-mail addresses of three referees and should be e-mailed as a single PDF to:

Prof. Dr. Elke Teich
e-mail:
e.teich@mx.uni-saarland.de


6-41(2014-10-13) Lecturer* in English phonetics, phonology and morpho-phonology (Maître de conférences), Paris, FR

Job announcement: lecturer* in English phonetics, phonology and morpho-phonology (Maître de conférences)

* equivalent to Lecturer (UK) / Assistant Professor (USA)

Paris Diderot University will open a lecturer position in English phonetics, phonology and morpho-phonology for September 2015, pending budgetary approval.

Candidates are expected to have a Ph.D. in English Linguistics with a specialization in phonetics or phonology. Candidates should either already hold a tenured lecturer position or apply for accreditation by the French Conseil National des Universités. Note that the deadline for the first step of application for CNU accreditation is October 23rd, 2014.

Candidates should have expertise in the areas of English phonetics and phonology, with research interests in the morphology-phonology or morphology-phonetics interface. Those whose record of research relates to one or more of the following areas are particularly encouraged to apply: second language acquisition, quantitative, statistical or computational methods in linguistic research, psycholinguistic experimentation, laboratory phonology, and/or sociolinguistics. A working knowledge of French is expected.

The successful candidate is expected to join the Centre de Linguistique Interlangue, de Lexicologie, de Linguistique Anglaise et de Corpus - Atelier de recherche sur la parole (CLILLAC-ARP) and the Department of English:

http://www.univ-paris-diderot.fr/EtudesAnglophones/pg.php?bc=CHVR&page=LesactivitesdeCLILLACARP

http://www.univ-paris-diderot.fr/EtudesAnglophones/pg.php?bc=CHVU&page=ACCUEIL&g=m

 

Teaching responsibilities will include undergraduate courses in phonetics, phonetic variation and intonation and graduate courses in phonetics and in the areas of the appointee's specialization. Other duties include supervision of graduate students, involvement in curricular development and in the administration of the department.

 

This position is a permanent one with a civil servant status. Salary will be in accordance with the French state regulated public service salary scale.

 

Any potential candidate should bear in mind that the deadline for registration for the national CNU “qualification” is October 23rd, 2014 on the Galaxie website of the French 'Ministère de l'Enseignement Supérieur':

https://www.galaxie.enseignementsup-recherche.gouv.fr/ensup/cand_qualification.htm

The position, if opened, will start on September 1st 2015 and the websites for applications (French Ministry of Higher Education and Université Paris Diderot) will be open in spring 2015.

 

Web Address for Applications: http://www.univ-paris-diderot.fr

Contact Information:

Prof. Agnès Celle

agnes.celle@univ-paris-diderot.fr

+33 1 57 27 58 67


6-42(2014-10-15) Post-doctoral position (12 months) GIPSA-lab, Grenoble, France

Post-doctoral position (12 months)  GIPSA-lab, Grenoble, France

Incremental text-to-speech synthesis for people with communication disorders

 

Duration, location and staff

The position is open from January 2015 (until filled) for a duration of 12 months. The work will take place at GIPSA-lab, Grenoble, France, in the context of the SpeakRightNow project.

Researchers involved: Thomas Hueber, Gérard Bailly, Laurent Girin and Mael Pouget (PhD student).

Context

The SpeakRightNow project aims at developing an incremental Text-To-Speech system (iTTS) in order to improve the user experience of people with communication disorders who use a TTS system in their daily life. Contrary to a conventional TTS system, an iTTS system aims at delivering the synthetic voice while the user is typing (possibly with a delay of one word), and thus before the full sentence is available. By reducing the latency between text input and speech output, iTTS should enhance the interactivity of communication. Moreover, iTTS could be chained with incremental speech recognition systems in order to design highly responsive speech-to-speech conversion systems (for applications in automatic translation, silent speech interfaces, real-time enhancement of pathological voice, etc.).
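The one-word-delay strategy described above can be sketched as a toy scheduler: a word is released to synthesis only once the next word has been typed, so that minimal right-hand context is available for prosody decisions. The class and callback below are hypothetical illustrations, not the GIPSA-lab system:

```python
class IncrementalTTS:
    """Toy scheduler for one-word-lookahead incremental synthesis."""

    def __init__(self, synthesize):
        self.synthesize = synthesize  # callback standing in for the synthesizer
        self.pending = None           # last typed word, held back one step

    def feed(self, word):
        """Receive a newly typed word; release the previous one with lookahead."""
        if self.pending is not None:
            self.synthesize(self.pending, next_word=word)
        self.pending = word

    def flush(self):
        """End of sentence: synthesize the last buffered word without lookahead."""
        if self.pending is not None:
            self.synthesize(self.pending, next_word=None)
            self.pending = None
```

The design choice here is the trade-off the posting mentions: buffering one word adds a small latency but gives the prosody model at least the following word, instead of committing to a contour with no right context at all.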

The development of iTTS systems is an emerging research field. Previous work has mainly focused on the online estimation of the target prosody from partial (and uncertain) syntactic structure [1], and on the reactive generation of the synthetic waveform (as in [2] for HMM-based speech synthesis). The goal of this post-doctoral position is to propose original solutions to these questions. Depending on his/her background, the recruited researcher is expected to contribute to one or more of the following tasks:

1) Developing original approaches to the problem of incremental prosody estimation, using machine learning techniques to predict missing syntactic information and to drive prosodic models.

2) Implementing a prototype iTTS system on a mobile platform. The system will be adapted from the HMM-based TTS system for French currently developed at GIPSA-lab.

3) Evaluating the prototype in a clinical context, in collaboration with the medical partners of the SpeakRightNow project.

Keywords: assistive speech technology, incremental speech synthesis, prosody, machine learning, handicap.

Prerequisite: PhD degree in computer science, signal processing or machine learning. A background in HMM-based speech synthesis and/or development on iOS/Android platforms is a plus.

To apply: Applicants should email a CV along with a brief letter outlining their research background, a list of two references, and a copy of their two most important publications to Thomas Hueber (thomas.hueber@gipsa-lab.fr).

References:

[1] Baumann, T., Schlangen, D., “Evaluating prosodic processing for incremental speech synthesis,” in Proceedings of Interspeech, Portland, USA, Sept. 2012.

[2] Astrinaki, M., d'Alessandro, N., Picart, B., Drugman, T., Dutoit, T., “Reactive and continuous control of HMM-based speech synthesis,” in Proceedings of the IEEE Workshop on Spoken Language Technology, Miami, USA, Dec. 2012.



