ISCA - International Speech
Communication Association



ISCApad #212

Friday, February 05, 2016 by Chris Wellekens

6 Jobs
6-1(2015-10-06) POST-DOC OPENING IN STATISTICAL NATURAL LANGUAGE PROCESSING AT LIMSI-CNRS, FRANCE

POST-DOC OPENING
IN STATISTICAL NATURAL LANGUAGE PROCESSING
AT LIMSI-CNRS, FRANCE
****************************************************************

The 'Spoken Language Processing' team at LIMSI-CNRS, Orsay (25 km south of
Paris) is seeking qualified postdoctoral researchers in the field of Statistical
Natural Language Processing (see https://www.limsi.fr/en/research/tlp/).

* Description of the work

This position is related to a collaborative project aimed at developing an
experimental platform for online monitoring of social media and information
streams, with self-adaptive properties, in order to detect, collect, process,
categorize, and analyze multilingual news streams. The platform includes
advanced linguistic analysis, discourse analysis, extraction of entities and
terminology, topic detection, translation and the project includes studies on
unsupervised and cross-lingual adaptation.

In this context, the candidate is expected to develop innovative methods for
performing unsupervised cross-domain and/or cross-lingual adaptation of
statistical NLP tools.

* Requirements and objectives

Candidates are expected to hold an engineering BSc/MSc degree and a PhD in
Computer Science. Knowledge of statistical or example-based approaches for
Speech or Natural Language Processing is required; the candidate is also
expected to have strong programming skills, to be familiar with statistical
machine learning techniques and to have a good publication record in the
field.

Salary and other conditions of employment will follow CNRS standard rules
for non-permanent researchers, according to the experience of the candidate.

* Contact: Francois Yvon - francois.yvon at limsi.fr

Interested candidates should send a short cover letter stating motivation and
interests, along with their CV (in .pdf format, only) and names and addresses of
two references, to the email address given above as soon as possible and by
November 1st, 2015 at the latest. The contract is expected to start on January 1st, 2016.

Informal questions regarding this position should be directed to the same address.


--


6-2(2015-10-16) Fixed-term contract (CDD) at the Institut français de l'Éducation (IFÉ - ENS de Lyon)

6-month fixed-term contract (CDD): selection of an automatic text phonetization system, adaptation to the context of learning to read, and integration into the web platform of the CADOE project (Calcul de l'Autonomie de Déchiffrage Offerte aux Élèves).

Context

The Institut français de l'Éducation (IFÉ - ENS de Lyon) is conducting a nationwide research project on learning to read and write during the first year of primary school (the LireEcrireCP project). This research brings together 60 faculty researchers and PhD students across the country. 131 classrooms were observed, and a large amount of data was collected and analyzed. One key finding is that texts of which less than 31% of the content is directly decodable by the pupils hinder their learning, whereas texts in which this proportion exceeds 55% favour the learning of the pupils who are weakest in the 'code' (i.e. the correspondences between letters and sounds) at the start of the school year. The analysis needed to determine this directly decodable share is complex and cannot be carried out on a daily basis by teachers, even experienced ones. They therefore lack a piece of information that is crucial for choosing the texts used to teach reading.

The CADOÉ project aims to build a platform that will give teachers access to this share of text that is directly decodable by their pupils. To do so, teachers will enter the progression of their teaching of letter-sound correspondences (the study of the 'code'), indicate which words have been learned in class, and upload the candidate texts to be used as reading material. These texts will be automatically analyzed and segmented into graphical units. Comparing the taught 'code' and the words learned in class with the result of this decomposition makes it possible to compute, and return to the user, the share of the text that is directly decodable by the pupils, in other words the pupils' level of decoding autonomy on the submitted text.
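To make the computation concrete, here is a minimal sketch of how the decodable share could be derived once a text has been segmented into graphical units; the grapheme inventory, sight words and greedy segmentation below are purely illustrative assumptions, not the project's actual decomposition (which follows Riou, 2015).

# Purely illustrative: the taught grapheme-phoneme correspondences and the
# words learned in class are invented; the real decomposition follows Riou (2015).
TAUGHT_GRAPHEMES = {"ch", "ou", "a", "l", "e", "m", "i", "r", "t", "s"}
LEARNED_WORDS = {"le", "la", "est"}

def segment(word, graphemes):
    """Greedy longest-match segmentation into graphical units; None if impossible."""
    units, i = [], 0
    while i < len(word):
        for size in (3, 2, 1):  # try longer graphemes first
            if word[i:i + size] in graphemes:
                units.append(word[i:i + size])
                i += size
                break
        else:
            return None  # some part of the word uses untaught correspondences
    return units

def decodable_share(text, graphemes, learned_words):
    """Fraction of words that are directly decodable: sight words or fully segmentable."""
    words = text.lower().split()
    decodable = sum(1 for w in words
                    if w in learned_words or segment(w, graphemes) is not None)
    return decodable / len(words) if words else 0.0

print(decodable_share("la mare est calme", TAUGHT_GRAPHEMES, LEARNED_WORDS))  # 0.75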

Expected skills

We are looking for candidates with either a computer science background specializing in natural language processing (NLP) or automatic speech processing, or a linguistics background specializing in language engineering and computing.

The person recruited will study existing phonetization tools and select one. She or he will then configure or adapt it so that its output matches the decomposition proposed by Riou (2015), and will carry out the tests. She or he will lead the project and be responsible for its progress, will coordinate with the web development team on the exchange formats between the platform and the phonetization tool, and will work closely with J. Riou, research officer at IFÉ and scientific lead of the project, who will validate the configuration of the phonetization tool and coordinate the user tests of the developed environment, as well as with P. Daubias, research engineer in computer science and technical lead. The level of technical involvement in building the CADOÉ web platform may vary according to the profile and skills of the selected candidate, who will be fully integrated into the CADOÉ team.

The development process will be iterative (spiral model), refining the specifications on the basis of the successive mock-ups.

Technical aspects

Development will target Linux and must be efficient. The first tool considered (lia_phon) is written in C, but adapting it does not necessarily require in-depth knowledge of that language. Other phonetization tools may be considered (IrisaPhon, for example), and the technology choices are not fixed.

Administrative details

Location: Institut français de l'Éducation - École Normale Supérieure de Lyon, Bâtiment D6, 19 allée de Fontenay, 69007 Lyon (Métro B: Debourg)

Salary: depending on level, following the salary scale, from €1,700 to €2,500 gross per month.

Contract start: as soon as possible

Please send your application (CV + cover letter) to: lire.ecrire@ens-lyon.fr

A first review of applications will take place in early November 2015.

Contract duration: 6 months


6-3(2015-12-02) Master2 position at Multispeech Team, LORIA (Nancy, France)

Master2 position at Multispeech Team, LORIA (Nancy, France)

Automatic speech recognition: contextualisation of the language model based on neural networks by dynamic adjustment

Framework of ANR project ContNomina

The technologies involved in information retrieval in large audio/video databases are often based on the analysis of large, but closed, corpora, and on machine learning techniques and statistical modeling of the written and spoken language. The effectiveness of these approaches is now widely acknowledged, but they nevertheless have major flaws, particularly with regard to proper names, which are crucial for the interpretation of the content.

In the context of diachronic data (data which change over time), new proper names appear constantly, requiring dynamic updates of the lexicons and language models used by the speech recognition system.

As a result, the ANR project ContNomina (2013-2017) focuses on the problem of proper names in automatic audio processing systems by exploiting in the most efficient way the context of the processed documents. To do this, the student will address the contextualization of the recognition module through the dynamic adjustment of the language model in order to make it more accurate.

Subject

Current systems for automatic speech recognition are based on statistical approaches. They require three components: an acoustic model, a lexicon and a language model. This internship will focus on the language model. The language model of our recognition system is based on a neural network learned from a large corpus of text. The problem is to re-estimate the language model parameters for a new proper name depending on its context and a small amount of adaptation data. Several tracks can be explored: adapting the language model, using a class model or studying the notion of analogy.

Our team has developed a fully automatic system for speech recognition to transcribe a radio broadcast from the corresponding audio file. The student will develop a new module whose function is to integrate new proper names in the language model.
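By way of illustration only, the sketch below shows one simple way a neural language model's output layer could be extended with a new proper name, initializing its parameters from in-vocabulary words seen in the adaptation context; the vocabulary, dimensions and averaging heuristic are assumptions made for this example, not the method to be developed in the internship.

import numpy as np

# Toy output layer of a neural language model: one weight row and bias per word.
vocab = ["the", "minister", "visited", "paris", "<unk>"]
dim = 8
rng = np.random.default_rng(0)
W = rng.normal(size=(len(vocab), dim))   # output embeddings
b = np.zeros(len(vocab))                 # output biases

def add_proper_name(name, context_words, vocab, W, b):
    """Append a new word to the output layer, initialized from its context words.

    The new row is the average of the rows of in-vocabulary context words,
    a simple heuristic standing in for a real adaptation scheme.
    """
    rows = [W[vocab.index(w)] for w in context_words if w in vocab]
    new_row = np.mean(rows, axis=0) if rows else np.zeros(W.shape[1])
    return vocab + [name], np.vstack([W, new_row]), np.append(b, 0.0)

# A new name 'dupont' appears in documents alongside 'minister' and 'paris'.
vocab, W, b = add_proper_name("dupont", ["minister", "paris"], vocab, W, b)
hidden = rng.normal(size=dim)            # stand-in for the network's hidden state
probs = np.exp(W @ hidden + b)
probs /= probs.sum()
print(dict(zip(vocab, probs.round(3))))  # 'dupont' now receives probability mass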

Required skills

Background in statistics and object-oriented programming.

Localization and contacts

Loria laboratory, Multispeech team, Nancy, France

irina.illina@loria.fr, dominique.fohr@loria.fr

Candidates should email a detailed CV and diploma

References

[1] J. Gao, X. He, L. Deng. Deep Learning for Web Search and Natural Language Processing. Microsoft slides, 2015.

[2] X. Liu, Y. Wang, X. Chen, M. J. F. Gales, and P. C. Woodland. Efficient lattice rescoring using recurrent neural network language models. In Proc. ICASSP, 2014, pp. 4941-4945.

[3] M. Sundermeyer, H. Ney, and R. Schlüter. From Feedforward to Recurrent LSTM Neural Networks for Language Modeling. IEEE/ACM Transactions on Audio, Speech, and Language Processing, volume 23, number 3, pages 517-529, March 2015.


6-4(2015-10-22) Scientific collaborator for the multimodal project ADNVIDEO, Marseille, France

Application deadline: 12/31/2015
Starting: as soon as possible.

Description:
The ADNVIDEO project (http://amidex.kalysee.com/), funded in the
framework of A*MIDEX (http://amidex.univ-amu.fr/en/home), aims at
extending multimodal analysis models. It focuses on jointly processing
audio, speech transcripts, images, scenes, text overlays and user
feedback. Using as starting point the corpus, annotations and
approaches developed during the REPERE challenge
(http://defi-repere.fr), this project aims at going beyond the indexing of
single modalities by incorporating information retrieval methods, not
only from broadcast television shows, but more generally on video
documents requiring multimodal scene analysis. The novelty here is to
combine and correlate information from different sources to enhance
the description of the content. The application for this project
relates to the issue of recommendation applied to videos in the
context of Massive Open Online Courses where video content can be
matched to student needs.

Objectives:
The candidate will participate in the development of a prototype for
video recommendation:

- Integration of existing multimodal high-level descriptors in prototype
- Generation of textual descriptors from videos (such as automatic
image captioning, scene title generation, etc.)
- Implementation of deep learning methods for video analysis

The allocation of the tasks can be adjusted depending on the wishes
and skills of the candidate.

Skills:
For this project, we are looking for one candidate with a PhD degree
in the areas of machine learning, artificial vision, natural language
processing, or information retrieval:
- Strong programming skills (C++, Java, Python...).
- Desire to produce functioning end-to-end systems and full-scale live demos
- Scientific rigor
- Imagination
- Top notch publications
- Excellent communication skills
- Enjoy teamwork
Candidates must presently work outside of France.

Location:
The work will be conducted at Aix-Marseille University, in the Laboratoire
des Sciences de l'Information et des Systèmes (LSIS, http://www.lsis.org),
within the ADNVidéo project, supported by funding from the A*MIDEX foundation
in collaboration with Kalyzee (http://www.kalyzee.com/). Both LSIS and Kalyzee
are located in the historical and sunny city of Marseille, in the south of France
(http://www.marseille.fr/sitevdm/versions-etrangeres/english--discover-marseille).

Contact: sebastien.fournier@lsis.org
Duration: 6 months

Candidates should email a letter of application, a detailed CV
including a complete list of publications, and source code showcasing
programming skills.


6-5(2015-11-05) Engineer for the LINKMEDIA project at IRISA, Rennes, France

The LINKMEDIA team (http://www-linkmedia.irisa.fr) at IRISA develops technologies for describing and accessing multimedia content through the analysis of that content: computer vision, speech and language processing, audio processing, and data mining. Our work relies on an indexing platform which provides, in addition to a hardware infrastructure, a software offering in the form of web services.

To develop and promote the services offered on IRISA's multimedia indexing platform, we are recruiting an engineer specialized in multimedia data processing. The missions are:
- integration of existing modules into the platform
- development of new modules implementing state-of-the-art techniques
- ensuring the overall consistency of the modules and their documentation
- building demonstrations of multimedia applications for education and industrial transfer
- participation in international evaluation campaigns

The engineer will join the LINKMEDIA research team and will work in close collaboration with the researchers and their industrial partners on R&D projects.

The candidate, at MSc (Bac+5) or PhD (Bac+8) level, should have a strong interest in multimedia and web technologies. She or he should also have significant programming experience (C/C++, Perl, Python), for instance through projects and internships in the case of recent graduates. Experience in managing large software projects will be appreciated. Given the international working context, a good command of English is essential.

To apply, please send a CV together with a cover letter. For further details about the position, please contact us.

Employer: Centre National de la Recherche Scientifique
Location: IRISA, Rennes
Contract: fixed-term (CDD), 12 to 16 months, starting as soon as possible
Salary: €24k to €35k gross per year, depending on degree and experience
Contact: Guillaume Gravier, guillaume.gravier@irisa.fr


6-6(2015-11-13) Ph.D. at Limsi, Orsay, France

LIMSI (http://www.limsi.fr) seeks qualified candidates for one fully funded PhD position in the field of automatic speaker recognition. The research will be conducted in the framework of the ANR-funded project ODESSA (Online Diarization Enhanced by recent Speaker identification and Structured prediction Approaches) in partnership with EURECOM (France) and IDIAP (Switzerland).

Master students are welcome to apply for a preliminary internship (starting no later than April 2016) that may lead to this PhD position.

Broadly, the goal of an automatic speaker recognition system is to authenticate or to identify a person from their speech signal. Speaker diarization is an unsupervised process that aims at identifying each speaker within an audio stream and determining the intervals during which each speaker is active.

The overall goal of the position is to advance the state-of-the-art in speaker recognition and diarization.
Specifically, the research will explore the use of structured prediction techniques for speaker diarization.

Conversations between several speakers are usually highly structured, and the speech turns of a given person are not uniformly distributed over time. Hence, knowing that someone is speaking at a particular time t tells us a lot about the probability that (s)he is also going to speak a few seconds later. However, state-of-the-art approaches seldom take this intrinsic structure into account.
The goal of this task is to demonstrate that structured prediction techniques (such as graphical models or SVMstruct) can be applied to speaker diarization.
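As a minimal illustration of why temporal structure helps (this is a toy sketch, not the structured prediction models the thesis will investigate), the code below smooths noisy per-frame speaker-activity scores with a simple "stay in the same state" prior, using Viterbi decoding; the scores and the prior are invented.

import numpy as np

def smooth_activity(scores, p_stay=0.95):
    """Viterbi decoding of a binary speaking/silent sequence from noisy scores.

    scores: per-frame probabilities that the speaker is active.
    p_stay: prior probability of keeping the same state between frames,
            encoding the temporal continuity of speech turns.
    """
    eps = 1e-9
    emit = np.log(np.stack([1 - scores, scores], axis=1) + eps)   # [T, 2]
    trans = np.log(np.array([[p_stay, 1 - p_stay],
                             [1 - p_stay, p_stay]]))
    delta, back = emit[0].copy(), np.zeros((len(scores), 2), dtype=int)
    for t in range(1, len(scores)):
        cand = delta[:, None] + trans        # cand[previous state, next state]
        back[t] = cand.argmax(axis=0)
        delta = cand.max(axis=0) + emit[t]
    path = [int(delta.argmax())]
    for t in range(len(scores) - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return np.array(path[::-1])

noisy = np.array([0.9, 0.8, 0.2, 0.85, 0.9, 0.9, 0.85, 0.8, 0.9, 0.9])
print(smooth_activity(noisy))  # the isolated dip at frame 2 is absorbed into the speech turn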

The proposed research is a collaboration between EURECOM, IDIAP and LIMSI.
The research will rely on previous knowledge and software developed at LIMSI. Reproducible research is a cornerstone of the project; hence a strong involvement in data collection and in open-source libraries is expected.

The ideal candidate should hold a Master degree in computer science, electrical engineering or related fields. She or he should have a background in statistics or applied mathematics, optimization, linear algebra and signal processing. The applicant should also have strong programming skills and be familiar with Python, various scripting languages and with the Linux environment. Knowledge in speech processing and machine learning is an asset.

Starting date is as early as possible and no later than October 2016.

LIMSI is a CNRS laboratory with 250 people, including 120 permanent members. The Spoken Language Processing group involved in the project is composed of 41 people, including 17 permanent members. The group is internationally recognized for its work on spoken language processing, and in particular for its developments in automatic speech recognition. The research carried out in the Spoken Language Processing group aims at understanding the speech communication processes and developing models for use in automatic speech processing. This research area is inherently multidisciplinary. Topics addressed include speech recognition, speaker recognition, corpus linguistics, error analysis, spoken language dialogue, question answering in spoken data, multimodal indexing of audio and video documents, and machine translation of both spoken and written language.

Contact : Hervé Bredin (bredin@limsi.fr) and Claude Barras (barras@limsi.fr)


6-7(2015-11-15) 1 (W/M) researcher position at IRCAM, Paris, France

 

Position: 1 (W/M) researcher position at IRCAM

Starting: January 4th, 2016

Duration: 18 months

Deadline for application: December 1st, 2015

Description of the project:

The goal of the ABC-DJ project (European H2020 ICT-19 project) is to develop advanced Audio Branding technologies (recommending music for a trademark). For this, ABC-DJ will rely on Music Content and Semantic Analysis. Within this project, IRCAM will develop:
- new music content analysis algorithms (auto-tagging into genre, emotions and instrumentation; estimation of tonality and tempo);
- new tools for advanced DJ-ing (audio quality measurement, segmentation into vocal parts, full hierarchical structure analysis, intelligent track summary, audio source separation).

Position description 201511ABCRES:

For this project IRCAM is looking for a researcher to develop the music content analysis and advanced DJ-ing technologies.

Required profile:
- High skill in audio signal processing (spectral analysis, audio feature extraction, parameter estimation); the candidate should preferably hold a PhD in this field.
- High skill in machine learning (the candidate should preferably hold a PhD in this field).
- High skill in Matlab/Python programming, skills in C/C++ programming.
- Good knowledge of Linux, Windows and Mac OS environments.
- High productivity, methodical work, excellent programming style.

The hired researcher will also collaborate with the development team and participate in the project activities (evaluation of technologies, meetings, specifications, reports).

Introduction to IRCAM:

IRCAM is a leading non-profit organization associated with the Centre Pompidou, dedicated to music production, R&D and education in sound and music technologies. It hosts composers, researchers and students from many countries cooperating in contemporary music production and in scientific and applied research. The main topics addressed in its R&D department include acoustics, audio signal processing, computer music, interaction technologies and musicology. IRCAM is located in the centre of Paris near the Centre Pompidou, at 1, Place Igor Stravinsky, 75004 Paris.

Salary:

According to background and experience.

Applications:

Please send an application letter with the reference 201511ABCRES, together with your resume and any suitable information addressing the above issues, preferably by email, to: peeters at ircam dot fr, with cc to vinet at ircam dot fr and roebel at ircam dot fr.

 


6-8(2015-11-13) Intership at Loria, Vandoeuvre-lès-Nancy, France
Speech intelligibility: how to determine the degree of nuisance
 
General information
Supervisors
Irina Illina, LORIA, Campus Scientifique - BP 239, 54506 Vandoeuvre-lès-Nancy, illina@loria.fr
Patrick Chevret, INRS, 1 rue du Morvan, 54519 Vandoeuvre-lès-Nancy, patrick.chevret@inrs.fr
 
Motivations
Speech intelligibility refers to the ability of a conversation to be understood by a listener located nearby. The level of speech intelligibility depends on several criteria: the level of ambient noise, the possible absorption of part of the sound spectrum, acoustic distortion, echoes, etc. Speech intelligibility is used to assess the performance of telecommunication systems or the acoustic absorption of rooms.
 
Speech intelligibility can be evaluated:
- subjectively: listeners hear several words or sentences and answer different questions (transcription of sounds, percentage of perceived consonants, etc.). The scores give the intelligibility value;
- objectively, without involving listeners, using acoustic measures: the speech transmission index (STI) and the speech interference level.
 
Subjective measures depend on the listeners and require a large number of them. This is difficult to achieve, especially when there are different types of environments; moreover, the measure has to be evaluated for each listener. Objective measures have the advantage of being automatically quantifiable and precise. However, which objective measures can capture the nuisance of the environment on speech intelligibility and on people's health remains an open problem. For example, the STI index is based on measuring energy modulation, but energy modulation can also be produced by machines, in which case it does not correspond to speech.
 
Subject
In this internship, we focus on the study of various objective measures of speech intelligibility. The goal is to find reliable measures of the level of nuisance of the environment with respect to speech understanding, the long-term mental health of people, and productivity. One possible approach is to correlate word confidence measures, noise estimates and subjective measures of speech intelligibility. To develop these measures, an automatic speech recognition system will be used.
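As a minimal illustration of that correlation step (the numbers below are invented; the actual study would use the INRS corpus and the recognizer's real confidence scores):

from scipy.stats import pearsonr, spearmanr

# Hypothetical per-condition averages: ASR word confidence on one side,
# subjective intelligibility scores collected from listeners on the other.
asr_confidence = [0.91, 0.84, 0.76, 0.62, 0.55, 0.41]
subjective_score = [95, 90, 78, 65, 58, 40]

r, p = pearsonr(asr_confidence, subjective_score)
rho, p_rho = spearmanr(asr_confidence, subjective_score)
print(f"Pearson r = {r:.2f} (p = {p:.3f}), Spearman rho = {rho:.2f}")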
 
This internship will be carried out in collaboration between the Multispeech team of LORIA and INRS (the French National Institute of Research and Safety). INRS works on occupational risk identification, analysis of their impact on health, and prevention. INRS has a rich corpus of recordings and subjective measures of speech intelligibility, which will be used in the context of this internship. The Multispeech team has strong expertise in signal processing, has developed several methodologies for noise estimation, and has built a complete automatic speech recognition system.
 
Required skills
Background in statistics and object-oriented programming.

6-9(2015-11-15) Two internship topics for 2016 at LIA, Avignon, on vocal interaction


Here are two internship topics for 2016 at LIA, Avignon, on human-machine vocal
interaction. Please circulate these offers to potentially interested students
(Masters in Computer Science, Linguistics, Cognitive Science, AI, Data Processing,
Mathematics...).

========================================================================
Connectionist models for automatic text generation in vocal interaction

Supervisors: Dr Stéphane Huet, Dr Bassam Jabaian, Prof. Fabrice Lefèvre

Internship description:
Vocal interaction systems used in applications such as flight or hotel booking, or for dialogue with a robot, involve several components. Among them is the text generation module, which produces the system's natural-language response from an internal semantic representation created by the dialogue manager.

Current dialogue systems include generation modules based on manually defined rules or lexical templates, e.g.:

confirm(type=$U, food=$W, drinks=dontcare)
→ Let me confirm, you are looking for a $U serving $W food and any kind of drinks, right?
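For illustration, the sketch below shows the kind of hand-written template module this rule-based approach implies (the dialogue acts and slot names are hypothetical); the learning-based alternative discussed next aims to replace exactly this sort of hand-crafted mapping.

# Hypothetical template-based generation: map a dialogue act to a sentence by
# filling slots, as a hand-written baseline.
TEMPLATES = {
    "confirm": "Let me confirm, you are looking for a {type} serving "
               "{food} food and {drinks} drinks, right?",
    "inform":  "{name} is a {type} serving {food} food.",
}

def generate(act, slots):
    filled = {k: ("any kind of" if v == "dontcare" else v) for k, v in slots.items()}
    return TEMPLATES[act].format(**filled)

print(generate("confirm", {"type": "restaurant", "food": "Thai", "drinks": "dontcare"}))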

These modules would benefit from machine-learning methods, which would ease porting dialogue systems to new tasks and improve the diversity of the generated responses. Among these methods are neural networks, which have attracted renewed interest since the introduction of "deep learning". Such neural networks have already been used by Google's research lab for an image description task (http://googleresearch.blogspot.fr/2014/11/a-picture-is-worth-thousand-coherent.html) that is close to the one of interest here. The objective of this internship is thus to study the use of these models specifically in the context of vocal interaction.

While an interest in machine learning and natural language processing is desirable, the intern is above all expected to have good software development skills. The intern will work within a complete vocal interaction platform and may broaden the investigation to its other components. Several paths towards a PhD are open.


Internship duration: 6 months
Stipend: about €529/month
Related topics: human-machine dialogue systems, natural language generation, machine learning...

========================================================================
Humour and vocal interaction systems

Supervisors: Dr Bassam Jabaian, Dr Stéphane Huet, Prof. Fabrice Lefèvre

Internship description: automating the production of humour.

Previous work in linguistics has laid the foundations of a taxonomy of interactional humour mechanisms. Starting from this basis, the question we wish to address in this work is: can the production of humorous remarks be automated in a task-oriented human-machine dialogue, and if so, what is the impact on dialogue performance?

Of course, the aim is not to reproduce exactly the general abilities of a human, which are very complex to describe and certainly impossible to automate, but rather to extract mechanisms that are regular enough to be formalized and executed in a dialogue situation. This should produce an offbeat effect, giving the interaction system a likeable dimension in the user's perception.

From a pragmatic point of view, several types of production (more or less independent of the humour mechanism used) are already envisaged, either reactive or generative:
1. In the first case, an opportunity is detected (presence of connectors) and the system reacts (generation of disjunctors). This is the case of humour based on polysemous words, i.e. the system spots a word that it pretends to understand in its "awkward" or inappropriate sense.
2. In the second case, the system offers a witticism ex nihilo or after detecting a need for facilitation, for instance when a misalignment appears (the normal course of the dialogue is hindered by one or more misunderstandings). This involves puns, witticisms or jokes. A predefined base of jokes can be used, with jokes selected according to the dialogue context (by means of classical information retrieval techniques).

The objective is to implement the selected solutions on the Aldebaran NAO robots available at the laboratory, in the context of a simple task (games). Beyond an interest in the underlying artificial intelligence topic, the intern is mainly expected to have very good software development skills. This internship opens onto several possibilities for a PhD in the field of human-machine communication for artificial intelligence.

Internship duration: 6 months
Stipend: about €529/month
Related topics: human-machine dialogue systems, speech understanding, dialogue management, machine learning.
========================================================================

Interested students should send an email to fabrice.lefevreAtuniv-avignon.fr,
bassam.jabaianAtuniv-avignon.fr and stephane.huetAtuniv-avignon.fr, indicating the
chosen topic (or both) and attaching an application file (with at least a CV,
transcripts of the last two years, and a cover letter).

A first selection will take place on 24/11/2015.

Best regards,
- Fabrice Lefevre


6-10(2015-11-18) PhD and Postdoctoral Opportunities in Multilingual Speech Recognition at Idiap, Martigny, Switzerland

PhD and Postdoctoral Opportunities in Multilingual Speech Recognition

In the context of a new EU funded collaborative project, Idiap Research Institute has PhD
and postdoctoral opportunities in multilingual speech recognition.

For more details and to apply, see the respective entries on our recruitment page:
 http://www.idiap.ch/education-and-jobs


6-11(2015-11-20) Research engineer at LPL, Aix-en-Provence, France

The Laboratoire Parole et Langage in Aix-en-Provence is offering a research engineer position, to be filled by internal transfer during the CNRS NOEMi winter campaign.

The detailed job description is attached.

Profile summary:

The project manager will be in charge of signal processing (acoustic, kinematic, physiological, electro-encephalographic and video signals) and statistical analysis of multimodal data, including the development of programs for signal pre-processing and processing (filtering, synthesis/resynthesis, time/frequency transforms, editing, parameter extraction, segmentation/annotation) and for statistical analysis (e.g. linear mixed-effects models, graphical representations, etc.).

For more information, please contact the LPL management (noel.nguyen@lpl-aix.fr) or the coordinator of the Centre d'Expérimentation sur la Parole (alain.ghio@lpl-aix.fr).

 


6-12(2015-11-20) Postdoctoral position in speech intelligibility at IRIT Toulouse, France

Title: Postdoctoral position in speech intelligibility

Application deadline: 1/31/2016

Description: The decreasing mortality of head and neck cancers highlights the importance of reducing the impact on Quality of Life (QoL). However, the usual tools for assessing QoL are not suited to measuring the impact of the treatment on the main functions affected by the sequelae. Validated tools for measuring the functional outcomes of carcinologic treatment are missing, in particular for speech disorders. Some assessments are available for voice disorders in laryngeal cancer, but they rely on very limited tools for oral and pharyngeal cancers, which affect the articulation of speech more than the voice.

In this context, the C2SI (Carcinologic Speech Severity Index) project proposes to develop a severity index of speech disorders describing the outcomes of therapeutic protocols, complementing survival rates. The project is a close collaboration between linguists, phoneticians, speech therapists and computer science researchers, in particular those from the Toulouse Institute of Computer Science Research (IRIT), within the SAMoVA team (http://www.irit.fr/recherches/SAMOVA/).

Intelligibility is the usual way to quantify the severity of neurologic speech disorders. However, this measure is not valid in clinical practice because of several difficulties, such as the familiarity effect with this kind of speech and the poor inter-judge reproducibility. Moreover, transcription-based intelligibility scores do not accurately reflect listener comprehension. Our hypothesis is therefore that an automatic assessment technique can measure the impact of speech disorders on communication abilities, providing a speech severity index for patients treated for head and neck cancer, and particularly for oral and pharyngeal cancer.

The main objective is then to demonstrate that the C2SI, obtained with an automatic speech processing tool, produces outcomes equivalent or superior to a speech intelligibility score obtained from human listeners, in terms of QoL, foreseeing the speech handicap after the treatment of oral and/or pharyngeal cancer.

The database is currently being recorded at the Institut Universitaire du Cancer in Toulouse, with CVC pseudo-words, readings, short sentences focusing on prosody, and spontaneous descriptions of pictures.

Roadmap to develop an automatic system that will evaluate the intelligibility of impaired speech:

- Study existing SAMoVA technologies and evaluate them with the C2SI protocol,

- Find relevant features in the audio signal that support intelligibility,

- Merge those features to obtain the C2SI,

- Correlate it with the speech intelligibility scores obtained by human listeners,

- Study in which way the features support understandability as well.
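A minimal sketch of the "merge features and correlate with listener scores" steps in the roadmap above, with invented feature values and scores shown purely for illustration (the real index would be learned and validated on the C2SI corpus):

import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import LinearRegression

# Hypothetical per-speaker acoustic features (rows) and mean listener
# intelligibility scores (targets); real values would come from the corpus.
features = np.array([
    [0.82, 4.1, 0.10],
    [0.75, 3.6, 0.18],
    [0.60, 2.9, 0.30],
    [0.55, 2.5, 0.35],
    [0.40, 1.8, 0.50],
])
listener_scores = np.array([9.0, 8.2, 6.5, 5.9, 3.8])

model = LinearRegression().fit(features, listener_scores)   # merge features into one index
severity_index = model.predict(features)
r, p = pearsonr(severity_index, listener_scores)            # compare with human scores
print(f"Correlation with listener scores: r = {r:.2f}")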

Skills:

For this project, we are looking for one candidate with a PhD degree in the areas of machine learning, signal processing, and also with: programming skills, scientific rigour, creativity, good publication record, excellent communication skills, enjoying teamwork...

Salary and other conditions of employments will follow CNRS (French National Center for Scientific Research) standard rules for non-permanent researchers, according to the experience of the candidate.

Location: the work will be conducted in the SAMoVA team of the IRIT, Toulouse (France).

Contact: Jérôme Farinas jerome.farinas@irit.fr , Julie Mauclair julie.mauclair@irit.fr

Duration: 12 to 24 months

Candidates should email a letter of application, a detailed CV including a complete list of publications, and source code showcasing programming skills if available.



6-14(2015-12-03) ,PostDoc position in the field of automatic speaker recognition (ASR) at Idiap, Martigny, Switzerland

The Idiap Research Institute (http://www.idiap.ch) seeks qualified candidates for one
PostDoc position in the field of automatic speaker recognition (ASR).

The research will be conducted in the framework of the SNSF funded project ODESSA (Online
Diarization Enhanced by recent Speaker identification and Structured prediction
Approaches) in partnership with LIMSI and EURECOM in France.

Broadly, the goal of an automatic speaker recognition system is to authenticate or to
identify a person from their speech signal.
Speaker diarization is an unsupervised process that aims at identifying each speaker
within an audio stream and determining the intervals during which each speaker is active.

The overall goal of the position is to advance the state of the art in speaker
recognition and diarization. Specifically, the research will:
* investigate i-vectors and deep neural networks for ASR and their application to the
problem of speaker diarization (a toy illustration of i-vector scoring is sketched below),
* explore the use of domain adaptation.
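As a toy illustration of the i-vector baseline mentioned above (random vectors stand in for i-vectors that would in practice be extracted with a toolkit such as Spear; this is the standard cosine-scoring idea, not the project's contribution):

import numpy as np

def cosine_score(enrol_ivec, test_ivec):
    """Cosine similarity between two i-vectors, a standard simple scoring rule."""
    enrol = enrol_ivec / np.linalg.norm(enrol_ivec)
    test = test_ivec / np.linalg.norm(test_ivec)
    return float(enrol @ test)

rng = np.random.default_rng(1)
speaker_a = rng.normal(size=400)                        # enrolment i-vector (toy)
same_speaker = speaker_a + 0.3 * rng.normal(size=400)   # test segment, same speaker
other_speaker = rng.normal(size=400)                    # test segment, other speaker

print("same speaker:", round(cosine_score(speaker_a, same_speaker), 2))
print("other speaker:", round(cosine_score(speaker_a, other_speaker), 2))
# Accepting a trial when the score exceeds a threshold gives a basic verifier;
# clustering segments by such scores is one route to diarization.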

The proposed research is a collaboration between LIMSI (Hervé Bredin, Claude Barras),
EURECOM (Nicholas Evans) and the Biometrics group (Dr. Sebastien Marcel,
http://www.idiap.ch/~marcel) at Idiap.
The research will rely on previous knowledge and software developed at Idiap, more
specifically the Bob toolkit (http://idiap.github.io/bob/) and Spear
(https://pypi.python.org/pypi/bob.spear). In addition, the use
of libraries for deep learning (Torch, Caffe or Theano) is considered.

Reproducible research is a cornerstone of the project. Hence a strong involvement in data
collection and in open-source libraries such as Bob and Spear is expected.

The ideal candidate should hold a PhD degree in computer science, electrical engineering
or related fields. She or he should have a background in statistics or applied
mathematics, optimization, linear algebra and signal processing. The applicant should
also have strong programming skills and be familiar with Python, C/C++ (MATLAB is not a
plus), various scripting languages and with the Linux environment. Knowledge in speech
processing and machine learning is an asset. Shortlisted candidates may undergo a series
of tests including technical reading and writing in English and programming (in Python
and/or C/C++).

Appointment for the position is for 3 years, provided successful progress and may be
renewed depending on funding opportunities. Salary on the first year starts at 80'000 CHF
(gross salary). Starting date is early 2016.

Apply online
here:http://www.idiap.ch/webapps/jobs/ors/applicant/position/index.php?PHP_APE_DR_9e581720b5ef40dc7af21c41bac4f4eb=%7B__TO%3D%27detail%27%3B__PK%3D%2710179%27%7D


6-15(2015-12-12) Ph.D. Position in Speech Recognition at Saarland University, Germany

Ph.D. Position in Speech Recognition at Saarland University

 

The Spoken Language Systems group at Saarland University in Germany anticipates the availability of a Ph.D. position in the area of speech recognition. This position is part of the Horizon 2020 project MALORCA, a research project on long-term unsupervised adaptation of the acoustic and language models of a speech recognition system. The research will be carried out together with a European consortium of high-profile research institutes and companies.

Requirements:

  • Degree in computer science, electrical engineering or a discipline with a related background.

  • Excellent programming skills in C/C++, Python and/or Perl.

  • Experience with Linux and bash-scripting.

  • Very good math background.

  • Very good oral and written English communication skills.

  • Interest in speech recognition research.

 

Salary:

  • The position is fully funded with a salary in the range of 45,000 Euros to 55,000 Euros per year depending on the qualification and professional experience of the successful candidate.

  • The position is full time, for two years (with possibility of extension).

  • Starting date is April 1st 2016.

 

Research at Saarland University:

Saarland University is one of the leading European research sites in computational linguistics and offers an active, stimulating research environment. Close working relationships are maintained between the Departments of Computational Linguistics and Computer Science. Both are part of the Cluster of Excellence, which also includes the Max Planck Institutes for Informatics (MPI-INF) and Software Systems (MPI-SWS) and the German Research Center for Artificial Intelligence (DFKI).

 

 

 

 

 

Each application should include:

  • Curriculum Vitae including a list of relevant research experience and a list of publications (if applicable).

  • Transcript of records BSc/MSc.

  • Statement of interest (letter of motivation).

  • Names of two references.

  • Any other supporting information or documents.

Applications (documents in PDF format, in a single file) should be sent no later than Sunday, January 10th, to: sekretariat@LSV.Uni-Saarland.De

 

Further inquiries regarding the project should be directed to:

Youssef.Oualil@LSV.Uni-Saarland.De

or

Dietrich.Klakow@LSV.Uni-Saarland.De

 

 

 

 

 

 

 


6-16(2015-12-12) PostDoc Position in Speech Recognition at Saarland University, Germany

PostDoc Position in Speech Recognition at Saarland University

 

The Spoken Language Systems group at Saarland University in Germany anticipates the availability of a PostDoc position in the area of speech recognition. This position is part of the Horizon 2020 project MALORCA, a research project on long-term unsupervised adaptation of the acoustic and language models of a speech recognition system. The research will be carried out together with a European consortium of high-profile research institutes and companies.

Requirements:

  • Degree in computer science, electrical engineering or a discipline with a related background.

  • Excellent programming skills in C/C++, Python and/or Perl.

  • Experience with Linux and bash-scripting.

  • Very good math background.

  • Very good oral and written English communication skills.

  • Interest in speech recognition research.

 

Salary:

  • The position is fully funded with a salary in the range of 45,000 Euros to 55,000 Euros per year depending on the qualification and professional experience of the successful candidate.

  • The position is full time, for two years.

  • Starting date is April 1st 2016.

 

Research at Saarland University:

Saarland University is one of the leading European research sites in computational linguistics and offers an active, stimulating research environment. Close working relationships are maintained between the Departments of Computational Linguistics and Computer Science. Both are part of the Cluster of Excellence, which also includes the Max Planck Institutes for Informatics (MPI-INF) and Software Systems (MPI-SWS) and the German Research Center for Artificial Intelligence (DFKI).

 

 

 

 

 

Each application should include:

  • Curriculum Vitae including a list of relevant research experience and a list of publications (if applicable).

  • Transcript of records BSc/MSc (and Ph.D., if applicable).

  • Statement of interest (letter of motivation).

  • Names of two references.

  • Any other supporting information or documents.

Applications (documents in PDF format, in a single file) should be sent no later than Sunday, January 10th, to: sekretariat@LSV.Uni-Saarland.De

 

Further inquiries regarding the project should be directed to:

Youssef.Oualil@LSV.Uni-Saarland.De

or

Dietrich.Klakow@LSV.Uni-Saarland.De

 

 

 

 

 

 

 

 


6-17(2015-12-14) Ussher Assistant Professor in Irish Speech and Language Technology (IRL)

School of Linguistic, Speech and Communication Sciences

 

Ussher Assistant Professor in Irish Speech and Language Technology

 

The Ussher Assistant Professor in Irish Speech and Language Technology in the School of Linguistic, Speech and Communication Sciences will lead the development of the Irish Speech and Language Technology Research Centre (ITUT). This research will embed innovative speech and language technology resources in pedagogically sound language learning applications and in assistive technologies, enhancing teaching and learning of Irish nationally and globally. The appointee will have a strong background in speech-language technology, a specialisation in technology-assisted language learning, and a track record in linguistics research. Outreach and dissemination are key features of this post. The appointee will contribute to teaching and research supervision. A high level of competence in Irish language is desirable.

 

Appointment will be made at a maximum of the 8th point of the New Assistant Professor Merged Salary Scale.

 

Candidates wishing to discuss the post informally should contact:

Professor Elaine Uí Dhonnchadha, E-mail: uidhonne@tcd.ie

 

Applications will only be accepted through e-recruitment

 

Further information and application details can be found at: https://jobs.tcd.ie

 

Closing date for receipt of completed applications is: no later than 12 Noon GMT on Thursday 14th January 2016


6-18(2015-12-16) Internship in Avignon, France: automatic subtitle synchronization for live performance

Master internship topic

"Automatic subtitle synchronization for live performance"

Supervisor: Jean-François Bonastre (jean-francois.bonastre@univ-avignon.fr)

The aim of this internship is to carry out a feasibility study for a system that automatically synchronizes subtitles for live performance, and more specifically for theatre.

The full original texts of the play being performed are reduced to shorter subtitles, so that they can be displayed and read in real time on a tablet or on virtual-reality glasses. Several versions of the subtitles are produced in several languages. These subtitles are manually synchronized with the reference text.

The main objective of the project is to allow French theatres to offer their programmes to non-French-speaking audiences, and thus to open up the entire French-language theatre repertoire to the 80 million foreign visitors who come to France every year.

The work consists in building a functional prototype of the automatic synchronization application. Based on the ALIZE speaker recognition platform, this application will attempt to recognize, in real time, the actor who is currently speaking, and will use the succession of speaking turns to synchronize the live performance with the subtitles. The application will be developed in C++ and Python.
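As a rough illustration of the synchronization idea (the speaker names and turn sequences are invented; the real prototype would take ALIZE speaker recognition decisions as input), the sequence of recognized speaker turns can be aligned with the expected sequence from the script to locate the current subtitle:

from difflib import SequenceMatcher

# Hypothetical data: expected speaker turns from the annotated script vs. the
# turns recognized so far during the live performance.
script_turns = ["HAMLET", "OPHELIA", "HAMLET", "POLONIUS", "HAMLET", "OPHELIA"]
live_turns = ["HAMLET", "OPHELIA", "HAMLET", "HAMLET"]   # the POLONIUS turn was missed

def current_script_index(script, live):
    """Script position aligned with the most recent recognized turn."""
    blocks = SequenceMatcher(None, script, live).get_matching_blocks()
    last = max((b for b in blocks if b.size), key=lambda b: b.b + b.size)
    return last.a + last.size - 1

idx = current_script_index(script_turns, live_turns)
print("display subtitles up to script turn", idx, "-", script_turns[idx])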

Solid software development skills are required. Knowledge of automatic speech processing, signal processing and machine learning is also desirable, although this list is neither mandatory nor exclusive.

The project is run in close partnership with a centre specialized in digital applications for the theatre world. This partner will contribute its expertise, the data used to build and evaluate the prototype, and the partner theatres (already identified) that have volunteered to test the application in situ. Continuation as a PhD is hoped for.

The internship is planned for the first half of 2016.


6-19(2015-12-17) Faculty position at the Associate Professor level in machine learning, Telecom ParisTech, France

Faculty position at the Associate Professor level in machine learning
applied to temporal data analysis.
Telecom ParisTech [1], CNRS - LTCI lab [2]

-- Important Dates (tentative)
- February 15th, 2016: closing date
- End of March: hearings of preselected candidates


Applications are invited for a permanent (indefinite tenure) faculty position at the
Associate Professor level (Maitre de Conferences) in machine learning applied to
temporal data analysis.

-- Main missions

The hired associate professor will be expected to:

Research activities
- Develop research in machine learning applied to temporal data analysis that fits the
topics of the Audio, Acoustics and Waves (AAO) group [3] and the Image and Signal
Processing department [4], which include (but are not restricted to) audio,
physiological or video data analysis
- Develop both academic and industrial collaborations on the previous topic, including
collaborative activities with other Telecom ParisTech research departments and teams,
and research contracts with industrial players
- Submit proposals to national and international research project calls

Teaching activities
- Participate in teaching activities at Telecom ParisTech and its partners (as part of
joint Master programs), especially in machine learning and signal processing, including
life-long training programs (e.g. the local Data Scientist certificate)

Impact
- Publish high quality research work in leading journals and conferences
- Be an active member of the research community (serving in scientific committees and
boards, organizing seminars, workshops, special sessions...)


-- Candidate profile

As a minimum requirement, the successful candidate will have:

- A PhD degree
- A track record of research and publication in one or more of the following areas:
machine learning, signal processing
- Experience in temporal data analysis problems (sequence prediction, multivariate time
series, probabilistic graphical models, recurrent neural networks...)
- Experience in teaching
- Good command of English

The ideal candidate will also (optionally) have:
- Knowledge in deep learning methods
- Experience in distributed computing environments

Other skills expected include:
- Capacity to work in a team and develop good relationships with colleagues and peers
- Good writing and pedagogical skills

-- More about the position
- Place of work: Paris until 2019, then Saclay (Paris outskirts)
- For more information about being an Associate Professor at Telecom ParisTech (in French), see [5]

-- How to apply
Applications are to be sent by e-mail to: recrutement@telecom-paristech.fr

The application should include:
- A complete and detailed curriculum vitae
- A letter of motivation
- A document detailing the candidate's past activities in teaching and research: both
types of activities should be described with the same level of detail and rigor
- The texts of the main publications
- The names and addresses of two referees
- A short teaching project and a research project (maximum 3 pages)


-- Contact:
Slim Essid (Head of the AAO group)
Gaël Richard (Head of the TSI department)



[1] http://www.tsi.telecom-paristech.fr
[2] https://www.ltci.telecom-paristech.fr/?lang=en
[3] http://www.tsi.telecom-paristech.fr/aao/en/
[4] http://www.tsi.telecom-paristech.fr/en/
[5]
http://www.telecom-paristech.fr/telecom-paristech/offres-emploi-stages-theses/recrute-enseignants-chercheurs.html


6-20(2015-12-19) GENERAL EDUCATION INSTRUCTORS/Speech Communications , SC-Columbia, USA

 

Job Overview

JOB TITLE: GENERAL EDUCATION INSTRUCTORS/Speech Communications

JOB TYPE: Part-Time

LOCATION: US-SC-COLUMBIA

DEPARTMENT: Academics

SUPERVISORY: No

TRAVEL REQD: No

Job Description

If you’re a dedicated, enthusiastic, experienced speech communications professional, preferably with teaching experience, who believes in the power of sharing your knowledge, motivating others, and putting students first, we want to hear from you!

We’re looking for talented general education instructors to join the academic team at our Columbia Campus for our day and evening class sessions. These individuals will report to the Campus’s Degree Program Department Chairperson.

Essential Duties/Responsibilities:

Educates and trains students in his or her field of expertise using accepted and approved instructional methodology.

 Prepares lesson plans using industry-standard approaches (e.g., multimedia, adult learning methodology).

 Teaches courses as assigned, instructs and evaluates students, develops students’ skills and encourages growth, and tracks their attendance, performance, and grades.

 Participates in various administrative activities (e.g., attends faculty/staff meetings or in-service meetings).

 Participates in graduation ceremonies, as assigned.

 Participates regularly in continuing professional development activities.

 Performs other duties or special projects as assigned.

Education/Experience Needed:

A related master's degree from a regionally accredited institution, with 18 or more graduate hours in speech, speech communications, or mass communications.

 A minimum of four (4) years of experience in a related field.

 Registration, license, or certification as required by the state or accrediting agencies.

 Excellent interpersonal, organizational, and communications skills a must.

 Computer literacy and teaching experience desired.

Learn more about us at Remington College – Columbia Campus.

We offer a competitive salary, along with a comprehensive benefits package that includes health, dental, disability, life, vision, 401K, and flexible spending accounts, for full-time employees.

How to Apply

Help us train tomorrow’s work force! Qualified candidates: Please click the APPLY NOW button. Or, you may email your résumé and cover letter for consideration to audrey.breland@remingtoncollege.edu.

We provide reasonable accommodation where appropriate to applicants with disabilities.


6-21(2015-12-26) POST-DOC OPENING IN SPEECH INTELLIGIBILITY AT IRIT-TOULOUSE, FRANCE

POST-DOC OPENING
IN SPEECH INTELLIGIBILITY AT IRIT-TOULOUSE, FRANCE

****************************************************************

Title: Postdoctoral position in speech intelligibility

Application deadline: 1/31/2016

Description: The decreasing mortality of head and neck cancers highlights the importance of reducing the impact on Quality of Life (QoL). However, the usual tools for assessing QoL are not suited to measuring the impact of the treatment on the main functions affected by the sequelae. Validated tools for measuring the functional outcomes of carcinologic treatment are missing, in particular for speech disorders. Some assessments are available for voice disorders in laryngeal cancer, but they rely on very limited tools for oral and pharyngeal cancers, which affect the articulation of speech more than the voice.

In this context, the C2SI (Carcinologic Speech Severity Index) project proposes to develop a severity index of speech disorders describing the outcomes of therapeutic protocols completing the survival rates. There is a strong collaboration between linguists, phoneticians, speech therapists and computer science researchers, in particular those from the Toulouse Institute of Computer Science Research (IRIT), within the SAMoVA team (http://www.irit.fr/recherches/SAMOVA/).

Intelligibility of speech is the usual way to quantify the severity of neurologic speech disorders. However, this measure is not valid in clinical practice because of several difficulties, such as the listeners' familiarity with this kind of speech and the poor inter-judge reproducibility. Moreover, transcription-based intelligibility scores do not accurately reflect listener comprehension. Our hypothesis is therefore that an automatic assessment technique can measure the impact of speech disorders on communication abilities, yielding a severity index of speech for patients treated for head and neck cancer, and particularly for oral and pharyngeal cancer.

The main objective is then to demonstrate that the C2SI, obtained with an automatic speech processing tool, produces outcomes equivalent or superior to a speech intelligibility score obtained from human listeners, in terms of predicting the impact on QoL (the speech handicap) after treatment of oral and/or pharyngeal cancer.

The database is currently being recorded at the Institut Universitaire du Cancer in Toulouse and includes CVC pseudo-words, read texts, short sentences focusing on prosody, and spontaneous descriptions of pictures.

Roadmap to develop an automatic system that will evaluate the intelligibility of impaired speech:

- Study existing SAMoVA technologies and evaluate them with the C2SI protocol,

- Identify relevant features in the audio signal that support intelligibility,

- Merge those features to obtain the C2SI,

- Correlate it with the speech intelligibility scores obtained from human listeners (a minimal sketch of this step follows the list),

- Study to what extent these features also support comprehensibility.
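
A minimal, hypothetical sketch of the correlation step, assuming per-patient automatic indices and mean listener ratings are already computed (the arrays below are placeholders, not project data):

# Correlate a hypothetical automatic severity index with listener scores.
import numpy as np
from scipy.stats import spearmanr

automatic_index = np.array([0.82, 0.55, 0.91, 0.40, 0.67])  # placeholder C2SI-like values
listener_scores = np.array([7.5, 4.0, 8.8, 3.2, 6.1])       # placeholder perceptual ratings

rho, p_value = spearmanr(automatic_index, listener_scores)
print('Spearman rho = {:.2f} (p = {:.3f})'.format(rho, p_value))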

Skills:

For this project, we are looking for a candidate with a PhD in machine learning or signal processing, with programming skills, scientific rigour, creativity, a good publication record, excellent communication skills, and a taste for teamwork.

Salary and other conditions of employments will follow CNRS (French National Center for Scientific Research) standard rules for non-permanent researchers, according to the experience of the candidate.

Location: the work will be conducted in the SAMoVA team of IRIT, Toulouse (France).

Contact: Jérôme Farinas (jerome.farinas@irit.fr), Julie Mauclair (julie.mauclair@irit.fr)

Duration: 12 to 24 months

Candidates should email a letter of application, a detailed CV including a complete list of publications, and source code showcasing programming skills if available.

 
 
Julie Mauclair
Assistant Professor
IRIT
Toulouse, France
Back  Top

6-22(2016-01-08) Software/research engineer at LIMSI, Orsay, France

LIMSI (www.limsi.fr) is looking for a software/research engineer to work on the
design and development of new features for the CAMOMILE platform.

The CAMOMILE platform provides a REST API backend to support collaborative
annotation of multimedia documents. It was successfully used in 2015 for the
organization of the MediaEval 'Person Discovery' challenge [1].

Features that need to be improved or added to the CAMOMILE platform include:
- user authentication (currently based on cookies)
- real-time collaboration (e.g. using socket.io)
- metadata validation (e.g. using ValidateJS)
- interface between CAMOMILE and Amazon Mechanical Turk crowd-sourcing platform

This list is not exhaustive and the candidate is expected to be proactive
in choosing new features to implement and in selecting the appropriate technology.

Applicants should be experienced in Node.js + MongoDB architecture. Python
proficiency is an asset as both Javascript and Python clients will need to be
kept synchronized with the CAMOMILE REST API.
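
As an illustration only, a tiny Python client talking to a CAMOMILE-style REST backend; the endpoint paths and payloads below are hypothetical and would have to be aligned with the actual CAMOMILE API:

import requests

BASE_URL = 'http://localhost:3000'  # assumed local CAMOMILE server

session = requests.Session()
# Cookie-based authentication (the current mechanism mentioned above).
session.post(BASE_URL + '/login', json={'username': 'annotator', 'password': 'secret'})

# Fetch the annotations of one (hypothetical) layer and post a new one.
annotations = session.get(BASE_URL + '/layer/LAYER_ID/annotation').json()
session.post(BASE_URL + '/layer/LAYER_ID/annotation',
             json={'fragment': {'start': 12.3, 'end': 15.7},
                   'data': {'person': 'speaker_1'}})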

The engineer will also support the organization of the 2016 edition of the
MediaEval 'Person Discovery' challenge, in particular through the design of
a web front-end for registration, a leaderboard, and collaborative annotation
of video segments (see [1] for more details).

Candidates should send CV and motivation letter.
For more details on the position, please contact us.

Employer: Centre National de la Recherche Scientifique
Location: Orsay, France
Contract: 6 to 12 months contract (CDD), starting as soon as possible
Salary: between 24 k€ and 35 k€ gross yearly salary, depending on diplomas and experience
Contact: Hervé Bredin, bredin@limsi.fr

[1] https://github.com/camomile-project/LREC2016/blob/master/abstract.md


Hervé Bredin
Researcher
LIMSI, CNRS
bredin@limsi.fr

Back  Top

6-23(2016-01-09) R&D engineer (permanent position) at the Research Department of Ina, Paris, France


 *Institut national de l'audiovisuel*

Research Department

Audiovisual Research Group (Groupe de recherches audiovisuelles)

Contact: Jean Carrive (jcarrive@ina.fr)

*Recruitment*

*R&D engineer (permanent position) for the Research Department of Ina*

*Audio and speech analysis*


  Missions

Within the framework of the projects of the Research Department, and of
cross-departmental projects involving it, Ina is recruiting a Research and
Development engineer in charge of implementing, integrating or adapting
technologies for the automatic analysis of audiovisual content, and more
specifically speech processing and audio analysis technologies.

He/she will in particular be in charge of continuing the work already
initiated on speaker recognition and indexing (see the 'SpeechTrax'
demonstrator online at the Research site, http://recherche.ina.fr, as well
as the corresponding scientific publications). The applicative goal of this
work is the creation of a 'dictionary of voices' useful for indexing,
documentation, analysis and mining tasks.

He/she will collaborate with all the participants in the internal and
external projects of the Research Department, will be responsible for the
software developments related to these technologies, and will take part in
all the specification, design and writing tasks of these projects. He/she
will also help write responses to national or European calls for research
and development projects, and contribute to scientific publications.

He/she will contribute, together with the operational departments concerned
and the Information Systems Directorate, to the deployment of an automatic
speech transcription system for professional use, and will take part in the
working groups in charge of this question.


  Main activities

In this context, he/she will be in charge of:

1/ Defining the research and development directions related to audio and
speech analysis:

  * Designing, implementing, testing and evaluating innovative technological
    tools within the existing or anticipated uses of the Institute;
  * Collaborating with all internal and external participants of the
    department;
  * Taking part in the research and development strategy of the service.

2/ Carrying out R&D for the Institute:

  * Proposing, preparing, coordinating and/or taking part in internal
    research and development projects in connection with the operational
    departments;
  * Proposing, leading and/or taking part in internal coordination and
    discussion actions within working groups;
  * Taking part in the deployment of automatic speech transcription systems
    in collaboration with the operational departments of Ina and the
    Information Systems Directorate (DSI): needs analysis, technical
    expertise, specifications.

3/ Building partnerships:

  * Proposing, preparing, coordinating and taking part in collaborative
    national or international research and development projects and in
    scientific and technological cooperation bodies (COMUE, competitiveness
    clusters, research groups), together with academic, institutional or
    industrial partners.

4/ Publishing and disseminating scientific articles:

  * Writing scientific articles and presenting them at conferences;
  * Demonstrating the research work at conferences, seminars or trade shows;
  * Taking part in the writing of documents related to the activity
    (activity reports and project deliverables in particular).

5/ Carrying out a technology watch in his/her field.

6/ Taking part in the management of the computing and technical resources of
the service.

7/ Reporting on his/her activity.

8/ Supervising interns and, in time, PhD students.


  Qualifications, degrees, experience

  * A higher degree (PhD) in automatic speech processing or audio analysis,
    or an equivalent professional track record.


  Skills

  * Command of automatic speech processing, audio analysis, signal
    processing, machine learning and software development techniques;
  * Knowledge of academic and/or industrial research;
  * Command of project management techniques;
  * Command of reporting techniques;
  * Knowledge of the French audiovisual landscape and of the academic world;
  * Command of office software;
  * Very good command of written and spoken English.


  Aptitudes

  * Rigour, method and organization;
  * Analytical and synthesis skills;
  * Interpersonal skills;
  * Quality of written and oral expression;
  * Creativity and imagination;
  * Time and priority management;
  * Ability to make proposals;
  * Customer-service and results orientation.


  Reporting line

The position reports to the head of the Audiovisual Research Group.


  Salary

42-47 k€ / year depending on experience



*Jean Carrive*

*Deputy Head of the Research Department*

Direction déléguée à l'Enseignement, à la Recherche et à la Formation

Direct line: +33 1 49 83 34 29 - jcarrive@ina.fr



*institut-national-audiovisuel.fr*
<http://www.institut-national-audiovisuel.fr/>


Back  Top

6-24(2016-01-10) TTS research engineer at Nuance Shanghai, China

TTS Research Engineer – Nuance Shanghai, China

Reporting to the TTS manager, the research engineer will conduct innovative research and development with a focus on TTS front-end or back-end technologies.

Responsibilities:

- As part of the TTS R&D organization, you will contribute to the development of text-to-speech technology for all types of markets and platforms, with a focus on Asian languages.

Representative tasks will include:

- Improve TTS front-end or back-end with algorithm innovations.

- Develop products and tools,

- Maintenance and support (PS / Bug fixes)

- Active contribution to the improvement of all QA processes

Qualification:

- Native Mandarin, good English; additional Asian languages are a plus

- Experience with TTS research and development

- Experience with NLP research and development

- Excellent scripting / programming skills

- Experience with SCM tools

- Self-starter, team player

- Passion for quality

- Innovative and curious - 'free thinker'

- Master's degree in EE/CS/Computational Linguistics (or similar)

Please send your resume and application to eva.li@nuance.com

 

Back  Top

6-25(2016-01-11) Master 2 research internship in Natural Language Processing / Information Extraction, LIMSI, Orsay, France

Master 2 research internship in Natural Language Processing / Information Extraction

Title: Recognition of MEDical Named Entities in Speech (REMEDO)

Duration: 5 months
Location: LIMSI-CNRS, Orsay, France
Stipend: 554€ per month plus a contribution to public transport costs


*Context*
------------------------------
With the ever-growing volume of documents produced in the medical domain, it is becoming
harder and harder to access the information needed for patient treatment and care.
Resorting to automatic methods to access the information contained in texts is therefore
becoming unavoidable. Information extraction methods are now widely used to identify
medical data such as names of patients, drugs or diseases: 'The patient <name>Anne
Onyme</name> was admitted for an <symptom>allergic reaction</symptom> to
<treatment>penicillin</treatment> on <date>21 January 2015</date>'.

This task is, however, particularly difficult when processing texts transcribed by
speech recognition systems. The variable quality of automatic transcriptions and
terminological variation complicate entity recognition.


*Internship description*
------------------------------
We propose to exploit the multimodal dimension as a way to improve extraction systems.
Our hypothesis is that acoustic parameters such as speech rhythm or intensity can provide
cues that help locate named entities. The goal of the internship will be to test this
hypothesis.

The intern's work will mainly rely on the data of task 1a of the CLEF eHealth 2015
challenge, i.e. 200 recordings of nursing care records read by a nurse, together with
their annotated transcription. NB: these data are in English, so a good command of the
language is expected.

The tasks assigned to the intern are the following:
 - write a state of the art on named entity recognition in speech
 - correct the existing annotations
 - develop a multimodal named entity extraction pipeline (relying in particular on the
Wapiti software; see the sketch after this list)
 - use NLP and signal processing tools to extract multimodal features
 - evaluate and analyze the influence of the implemented features
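
A minimal sketch of the kind of multimodal feature file such a pipeline could produce; the column layout, feature names and values below are illustrative placeholders, and the real format would have to match the chosen Wapiti patterns:

# Write one Wapiti-style data line per token, combining a lexical column
# with a bucketed acoustic column (mean intensity) and a BIO label.
tokens = [
    # (word, mean intensity in dB, BIO label) -- placeholder values
    ('allergic', 72.4, 'B-symptom'),
    ('reaction', 70.1, 'I-symptom'),
    ('to', 61.3, 'O'),
    ('penicillin', 74.8, 'B-treatment'),
]

def intensity_bucket(db):
    return 'LOUD' if db >= 70 else 'SOFT'

with open('train.wapiti', 'w', encoding='utf-8') as out:
    for word, intensity, label in tokens:
        out.write('{}\t{}\t{}\n'.format(word, intensity_bucket(intensity), label))
    out.write('\n')  # a blank line ends the sequence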


*Candidate profile*
------------------------------
M2 in computer science, or in linguistics with an NLP track

Expected skills:
 - Programming skills (scripting languages)
 - Experience with common NLP tools (part-of-speech taggers, parsers, ...) and with
signal processing tools (Praat)
 - Experience with machine learning methods
 - Interest in audio and text processing
 - Good command of English
 - Familiarity with the Linux environment
 - Creativity and autonomy

NB: No experience of the medical domain is expected.


*Supervision*
------------------------------
Eva D'hondt
François Morlane-Hondère
Sophie Rosset
Pierre Zweigenbaum


*How to apply*
------------------------------
Please send your application with a CV, a cover letter and your grades for the current
and previous academic years to Eva D'hondt
(eva.dhondt@limsi.fr) and François Morlane-Hondère (francois.morlane-hondere@limsi.fr)

Back  Top

6-26(2016-01-14) Technical Engineer / Scientist at ELDA


    ELDA (Evaluations and Language resources Distribution Agency), a company specialized in Human Language Technologies within an international context, is currently seeking to fill an immediate vacancy for a Technical Engineer/Scientist (Project Manager) position.

Technical Engineer / Scientist

Under the supervision of the technical development manager, the responsibilities of the Technical Engineer/Scientist include specifying, designing and implementing tools and software components for language resources production frameworks and platforms, carrying out language resources quality control and assessment, as well as developing web services and applications.

This offers excellent opportunities for young, creative and motivated candidates wishing to participate actively in the Language Technology field.

The work will mostly consist in participating in web application development projects and language resources production projects, and in coordinating ELDA's participation in R&D projects, while also being hands-on whenever required by the development team.

Profile:

-    PhD in Computer Science
-    At least 2 years of experience in Natural Language Processing (or Information Retrieval) and/or web application development
-    Good knowledge of Linux and open source software
-    Proficiency in Python or other high-level dynamically-typed programming language, such as Ruby
-    Hands-on experience in Django; proficiency in Django-CMS is a plus
-    Good knowledge of Javascript and CSS
-    Knowledge of SQL and of an RDBMS (PostgreSQL preferred)
-    Good knowledge of Natural Language Processing
-    Dynamic and communicative, flexible to combine and work on different tasks
-    Ability to work independently and as part of a multidisciplinary team
-    Proficiency in French and English
-    Citizenship (or residency papers) of a European Union country

Applications will be considered until the position is filled. The position is based in Paris.

Salary: Commensurate with qualifications and experience.

Applicants should email a cover letter addressing the points listed above together with a curriculum vitae to:

Khalid Choukri
ELDA
9, rue des Cordelières
75013 Paris
FRANCE
Mail : job@elda.org

For further information about ELDA, visit:
http://www.elda.org

Back  Top

6-27(2016-01-20) Language Engineer at the TTS team of Google

The TTS team at Google is looking for a Language Engineer to help improve synthesis in English and French.


Based in Google's London offices, you will help with the technical tasks involved in creating a speech synthesizer. These include:


1. Developing rules for a text normalization system (see the illustrative sketch after this list);

2. Large scale data mining;

3. Customizing language building tools for English and French.

4. Text-to-Speech quality evaluation and testing
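
As an illustration of item 1 only, the kind of rule a text normalization front-end applies before synthesis, here expanding on-the-hour times and lone digits into words; this is a toy sketch, not Google's actual system:

import re

UNITS = ['zero', 'one', 'two', 'three', 'four', 'five',
         'six', 'seven', 'eight', 'nine', 'ten', 'eleven', 'twelve']

def normalize(text):
    # '9:00' -> 'nine o'clock' (on-the-hour times up to twelve only).
    text = re.sub(r'\b([1-9]|1[0-2]):00\b',
                  lambda m: UNITS[int(m.group(1))] + " o'clock", text)
    # Lone digits -> words: 'room 5' -> 'room five'.
    text = re.sub(r'\b\d\b', lambda m: UNITS[int(m.group())], text)
    return text

print(normalize('The meeting starts at 9:00 in room 5.'))
# -> The meeting starts at nine o'clock in room five.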


Requirements:

1. Recent graduate in Computer Science or a closely related discipline

2. Native-level speaker of French/English and fluent in English.

3. Proficiency in Unix/Version Control System and a modern programming language (Python/C++ preferred)

4. Ability to build and understand regular expressions

5. Interest in data mining and natural language processing a plus


This is an opportunity to work on cutting edge technology in a dynamic team of world-class experts.


Project duration: 6-11 months (with potential for extension)

**This is not a permanent position but a contract position through an employment agency. Applicants must be currently authorized to work in the UK.**


For immediate consideration, please email your CV and cover letter in English (PDF format preferred) with 'Language Engineer English or French' in the subject line. 


Application Deadline: (Open until filled)


 

Email Address for Applications: tts_jobs@google.com

Back  Top

6-28(2016-01-21) Researcher in machine learning at Reykjavik University, Iceland

Machine learning and language technology

Reykjavik University is looking for ambitious candidates to work on the development and
implementation of speech recognition and other language technologies. The development of
speech recognizers applies machine learning to big datasets of text and speech
recordings. The machine learning used in the projects is mostly implemented and
available in open source software. The work typically includes gathering and preparing
data, setting up and configuring the machine learning procedures, running experiments and
designing the interface for users and other software. The positions are for one year with
the possibility of extension.

Specialist in machine learning
The main focus of this job is to apply machine learning to big text and speech datasets
to develop and train automatic speech recognizers. The software used in the
project is Kaldi, which demands strong skills in the use of Linux and associated
tools. The work is to set up and evaluate computational models (finite state transducers,
hidden Markov models and deep neural networks) on large datasets.

Skills:
- BSc/MSc degree in mathematics, statistics, engineering, computer science or similar.
- Knowledge of computational modeling is preferable (e.g. differential equations, neural
networks, linear models).
- Good knowledge of Linux is necessary.
- Good skills in writing and understanding shell scripts are preferable (bash, awk, sed).
- Good programming skills are necessary (e.g. C++, Java or Python).

Researcher in machine learning
The main focus of this job is to carry out research on speech recognition using deep
neural networks. The theoretical part of the work can either concentrate on parameter and
model optimization with respect to speech recognition performance, or on learning setup
and model configuration with the aim of automating the training of speech recognizers. The
group is already using the open source speech recognition solutions Kaldi, Tensorflow and
Theano. Some systems are already in operation while others are in the process of being
implemented. Design improvements and adaptation will continue in the coming months and
years, so the research will have a very direct practical impact.
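
As a minimal illustration of the 'speech recognition performance' being optimized, a self-contained word error rate (WER) computation in pure Python; Kaldi ships its own scoring scripts, so this sketch is only meant to make the metric concrete:

def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # Edit-distance table: d[i][j] = edits needed to turn ref[:i] into hyp[:j].
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer('the cat sat on the mat', 'the cat sat on mat'))  # 1 deletion / 6 words = 0.167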

Skills:
- MSc/PhD degree in applied mathematics, statistics, computational engineering or computer
science is preferable.
- Knowledge of mathematical modeling is preferable (e.g. differential equations, neural
networks, linear systems).
- Good skills in applying and analyzing algorithms.
- Good knowledge of Linux is preferable.
- Ability to use shell scripts is preferable (bash, awk, sed).
- Good programming skills are necessary (e.g. C++, Java or Python).

For further information contact Jón Guðnason (jg@ru.is), Assistant Professor in the School of
Science and Engineering.
Applications should be submitted before 15 April 2016, but strong
applications may be considered earlier.
Please submit your application through links provided at:
http://radningar.hr.is/storf/viewjobonweb.aspx?jobid=2729



Back  Top

6-29(2016-01-25) Associate professor (MCF) position at LIG (GETALP team) for research and at the I3L department (computer science for humanities, languages and language sciences) of Univ. Grenoble Alpes, France
Associate professor (maître de conférences) position at LIG (GETALP team) for research, and at the I3L department (computer science for humanities, languages and language sciences) of Univ. Grenoble Alpes (UGA, the merged Grenoble university created on 1 January 2016 from the three former institutions U. Joseph Fourier, U. Pierre Mendès-France and U. Stendhal) for teaching.
 
Short description of the position in English:
'The associate professor position concerns informatics and speech processing. The selected candidate will join the GETALP group of the LIG laboratory and reinforce the speech / spoken language processing axis. Teaching will be given in student programs covering human and social sciences.'

Position title for publication: Computer science and speech processing

Teaching department: I3L (computer science for humanities, languages and language sciences)

Position number: 0209

CNU section: 27-07

Research unit: LIG UMR 5217

Location: Grenoble

Keywords for candidates searching the position in Galaxie:

Natural language processing
Speech processing
Multilingualism
Non-verbal interaction
Human-machine communication

Teaching profile
The pedagogical objectives are to prepare all students in Humanities, Languages and Language Sciences for the integration of digital technology into their curricula, so that they can face the diversity, synergy and evolution of digital services, human-centered interaction devices and usage contexts. The teaching needs of the I3L department lie in the following areas: NLP, dynamic web, electronic corpora, language engineering, evaluation of NLP tools, and office software for Humanities, Languages and Language Sciences. More specifically at master's level, the courses should stay as close as possible to research activities and answer the needs of professional integration as well as of R&D innovation.
Degree programs concerned:
- Bachelor's level: in general within the computer science courses (all programs are concerned), and more specifically within the module 'Digital humanities professions';
- Master's level: in general in the computer-science-related courses of the masters of the LLASIC and LE faculties, and more specifically in the master in Language Sciences, Language Industries (IDL) track.

The person recruited should have a knowledge of the industrial landscape of the sector and of its evolution, enabling him/her to work towards a good identification of skills matching the rapid evolution of companies in the field.


Research profile

The person recruited will join the GETALP team of LIG, which addresses all theoretical, methodological and practical aspects of multilingual (written or spoken) communication and information processing. GETALP also has a specific interest in atypical interaction situations and contexts (under-resourced languages, atypical speakers, damaged social relationships, etc.), taking into account the diversity of languages, speakers, cultures and socio-affective relationships. The multidisciplinarity of GETALP (computer scientists, linguists, phoneticians, translators, roboticists, etc.) combines expert and empirical approaches and relies on large language corpora, while also developing corpora annotated according to rich theoretical hypotheses ('beautiful data'). Methodological aspects (evaluation, ecological experimentation on the Domus platform, in the FabMSTIC or in situ, ethics) are central, in particular for transfer to industrial partners.
The person recruited should strengthen the interdisciplinary aspects of this research and will take part in designing evaluation methods within an ethical approach, regarding both experimentation processes and the societal consequences of potential innovations. An essential point will be to maintain and develop collaborations with the other LIG teams, on computational and methodological aspects as well as on observation situations or applications. More broadly, the person recruited will be encouraged to enrich collaborations with the other laboratories involved in the ALLSHS research cluster of UGA.

Administrative activities
The person recruited may be given responsibility for the I3L courses at bachelor's level, and take part in administrative duties within the LLASIC faculty.



Research contact:
BESACIER, Laurent Laurent.Besacier@imag.fr

Teaching contact:
AUBERGE, Véronique Veronique.Auberge@u-grenoble3.fr

Interview date: 23 May 2016
 
Back  Top

6-30(2016-01-30) PhD position in AVSR, Trinity College Dublin, Ireland

PhD position in AVSR at Trinity College Dublin, Ireland

With a link to this advert:

http://adaptcentre.ie/careers/tcd_phd_IG_PhD5_D.pdf

Back  Top


