ISCA - International Speech
Communication Association



ISCApad #199

Sunday, January 18, 2015 by Chris Wellekens

6 Jobs
6-1(2014-08-10) Research and development engineer position, natural language processing, Avignon. Sept 2014.

Research and development engineer position, natural language processing, Avignon. Sept 2014.

The Laboratoire d'Informatique d'Avignon (Université d'Avignon, lia.univ-avignon.fr) and the Laboratoire d'Informatique Fondamentale de Lille (http://www.lifl.fr/) are seeking to fill a research and development engineer position.

Within the ANR project MaRDi (Man-Robot Dialogue), this is a **12-month fixed-term contract** based in Avignon (with occasional stays in Lille) focused on the development of a spoken dialogue platform. The architecture of the existing solution, based on state-of-the-art statistical approaches, will have to be completely redesigned in order to improve its modularity and to allow its use as a web service, so as to develop online data collection approaches (crowdsourcing).

Excellent programming and software architecture skills are expected. More specifically, the desired skills include:
- Programming languages: C++ or Java;
- Scripting languages: Python, Perl;
- Web development: HTML/JavaScript/CSS/XML, web services (REST...), application servers (Tomcat, Glassfish...);
- Natural language processing: general knowledge of spoken interaction systems (speech recognition and understanding, dialogue management, text generation, speech synthesis...).

The expected salary is 2,400 euros gross per month and may vary according to profile and experience. The position is open to recent graduates (engineering school or Master's degree, in computer science or in 'linguistics and computer science' if combined with significant computer science skills) as well as to recent PhDs with good programming abilities.

The profile is strongly development-oriented; however, the expected work is part of the research activities of two very active research groups and lends itself to publications (a postdoc arrangement is possible). Depending on the quality of the recruited engineer/PhD, both laboratories have a steady stream of projects that may allow the contract to be extended beyond the initial 12 months.

Candidates may send an application (CV, motivation letter and any recommendation letters, in PDF) to fabrice.lefevre-_-à-_-univ-avignon.fr. The desired starting date is **September/October 2014**. The offer remains valid until a candidate is recruited.
==========================================================================

-- 
Fabrice Lefèvre, LIA-CERI-Univ. Avignon
BP 91228, 84911 Avignon Cedex 9, FRANCE
tel 33 (0)4 90 84 35 63/ fax - - - - 01

6-2(2014-08-18) Two postdoc positions in speech processing at the Department of Signal Processing and Acoustics, Aalto University, Finland.

The Department of Signal Processing and Acoustics at Aalto University (formerly the Helsinki University of Technology) is looking for outstanding candidates for two postdoc positions:

 

 

 

Postdoc position in Computational Modeling of Language Acquisition

The speech technology group (led by Prof. Unto Laine) at Aalto University works on computational modeling of language acquisition, perception and production. The overall goal is to understand how spoken language skills can be acquired by humans or machines through communicative interaction and without supervision. The research is cross-disciplinary, spanning machine learning, signal processing, speech processing, linguistics, and cognitive science. It is funded by the Academy of Finland.

We are currently looking for a postdoc to join our research team to work on our research themes, including:

 

  • pattern discovery from speech

  • articulatory modeling and inversion

  • modeling and methods for autonomous acquisition of lexical, phonetic and grammatical structure from speech input

  • multimodal statistical learning (associative learning between multiple input domains such as speech, articulation and vision).

 

Postdoc: 2 years. Starting date: as soon as possible.

 

Send your application, CV and references directly by email to

D.Sc. (Tech.) Okko Räsänen, okko.rasanen at aalto.fi

 

Postdoc position in Speech Synthesis and Voice Source Analysis

The speech communication technology research group (led by Prof. Paavo Alku) at Aalto University works on interdisciplinary topics aiming at describing, explaining and reproducing communication by speech. The main topics of our research are: analysis and parameterization of speech production, statistical parametric speech synthesis, enhancement of speech quality and intelligibility in mobile phones, robust feature extraction in speech and speaker recognition, occupational voice care and brain functions in speech perception.

We are currently looking for a postdoc to join our research team to work on the team’s research themes, particularly in the following topics:



  • statistical speech synthesis

  • voice source analysis

  • speech intelligibility improvement

 

Postdoc: 1-3 years. Starting date: January 2015

 

Send your application, CV and references directly by email to

Prof. Paavo Alku, paavo.alku at aalto.fi

 

All positions require a relevant doctoral degree in CS or EE, skills for doing excellent research in a group, and outstanding research experience in any of the research themes mentioned above. The candidate is expected to perform high-quality research and assist in supervising PhD students. Please send your application email with the subject line “Aalto post-doc recruitment, autumn 2014”.

In Helsinki you will join the innovative international computational data analysis and ICT community. Among European cities, Helsinki is special in being clean, safe, liberal, Scandinavian, and close to nature, in short, having a high standard of living. English is spoken everywhere. See, e.g., http://www.visitfinland.com/

 

 


6-3(2014-08-15) 2 (W/M) researchers positions at IRCAM for Large-Scale Audio Indexing
Positions: 2 (W/M) researchers positions at IRCAM for Large-Scale Audio Indexing
Starting:     September 1st, 2014
Duration:     12 months
Deadline for application:   As Soon As Possible

 

 

The BeeMusic project aims at providing the description of music for large-scale collections (several millions of music titles). In this project IRCAM is in charge of the development of music content description technologies (automatic genre or mood recognition, audio fingerprint …) for large-scale music collections.

 

Position description 201406BMRESA:

 

For this project IRCAM is looking for a researcher to develop technologies for automatic genre and mood recognition.

 

The hired researcher will be in charge of research and development of scalable supervised learning technologies (i.e. scaling GMM, PCA or SVM algorithms) applicable to millions of annotated items.
He/she will then apply these technologies to train large-scale music genre and music mood models and to run them on large-scale music catalogues.
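The posting does not prescribe a particular scaling approach; one common way to make supervised training tractable on millions of annotated items is out-of-core learning over mini-batches. A minimal sketch using scikit-learn's `SGDClassifier.partial_fit` (the genre labels and random features below are placeholders for real audio descriptors):

```python
# Sketch: out-of-core training of a linear genre classifier on data that
# does not fit in memory, using mini-batches and SGDClassifier.partial_fit.
# The feature vectors and labels are synthetic placeholders; a real system
# would stream audio features (e.g. MFCC statistics) from disk.
import numpy as np
from sklearn.linear_model import SGDClassifier

GENRES = ["rock", "jazz", "classical"]

def batches(n_batches=20, batch_size=500, n_features=40, seed=0):
    """Yield (X, y) mini-batches, simulating a stream over a huge corpus."""
    rng = np.random.default_rng(seed)
    for _ in range(n_batches):
        y = rng.integers(len(GENRES), size=batch_size)
        # Class-dependent means make the toy problem learnable.
        X = rng.normal(loc=y[:, None], scale=1.0, size=(batch_size, n_features))
        yield X, y

clf = SGDClassifier(random_state=0)
for X, y in batches():
    # classes must be declared so every batch may omit some labels
    clf.partial_fit(X, y, classes=np.arange(len(GENRES)))

X_test, y_test = next(batches(n_batches=1, seed=99))
acc = clf.score(X_test, y_test)
print(f"held-out accuracy: {acc:.2f}")
```

The same mini-batch pattern extends to incremental PCA and to chunked GMM training; only the per-batch update rule changes.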
 
Required profile:
* High skill in audio indexing and data mining (the candidate must hold a PhD in one of these fields)
* Previous experience with scalable machine-learning models
* High skill in Matlab programming, skills in C/C++ programming
* Skill in audio signal processing (spectral analysis, audio-feature extraction, parameter estimation)
* Good knowledge of Linux, Windows and MacOS environments
* High productivity, methodical work, excellent programming style.

 

The hired researcher will also collaborate with the development team and participate in the project activities (evaluation of technologies, meetings, specifications, reports).

 

Position description 201406BMRESB:

 

For this project IRCAM is looking for a researcher to develop audio fingerprint technologies.

 

The hired researcher will be in charge of research and development of audio fingerprint technologies that are robust to audio degradations (sound captured through mobile phones in noisy environments) and of fingerprint search algorithms for large-scale databases (millions of music titles).
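The required profile below names hash tables for fingerprint search. As a purely illustrative sketch (not IRCAM's actual technology), a Shazam-style index maps hashable fingerprint keys to the tracks and time offsets where they occur, and a query is identified by voting on (track, offset-difference) pairs:

```python
# Illustrative sketch of hash-table fingerprint lookup (Shazam-style):
# each fingerprint is a hashable key (e.g. a quantized pair of spectral
# peaks plus their time delta); an inverted index maps keys to the tracks
# and offsets where they occur, and a query is identified by vote counting.
from collections import defaultdict, Counter

def build_index(tracks):
    """tracks: {track_id: [(hash_key, time_offset), ...]} -> inverted index."""
    index = defaultdict(list)
    for track_id, fps in tracks.items():
        for key, t in fps:
            index[key].append((track_id, t))
    return index

def identify(index, query_fps):
    """Vote on (track, time-offset difference) pairs; robust to noise
    because a few corrupted hashes simply lose their votes."""
    votes = Counter()
    for key, t_query in query_fps:
        for track_id, t_track in index.get(key, []):
            votes[(track_id, t_track - t_query)] += 1
    if not votes:
        return None
    (track_id, _), _ = votes.most_common(1)[0]
    return track_id

# Toy fingerprints: hash keys are ints standing in for quantized peak pairs.
tracks = {
    "title_A": [(101, 0), (202, 1), (303, 2), (404, 3)],
    "title_B": [(111, 0), (222, 1), (303, 2), (444, 3)],
}
index = build_index(tracks)
# Query = an excerpt of title_A starting 1 frame in, with one noisy hash.
query = [(202, 0), (303, 1), (999, 2)]
print(identify(index, query))  # expect "title_A"
```

At the scale of millions of titles, the in-memory dict would be replaced by a distributed key-value store or the Hadoop/SOLR stack mentioned in the profile, but the lookup-and-vote logic stays the same.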
 
Required profile:
* High skill in audio signal processing and audio fingerprint design (the candidate must hold a PhD in one of these fields)
* High skill in indexing technologies and distributed computing (hash tables, Hadoop, SOLR)
* High skill in Matlab programming, skills in Python and Java programming
* Good knowledge of Linux, Windows and MacOS environments
* High productivity, methodical work, excellent programming style.

 

The hired researcher will also collaborate with the development team and participate in the project activities (evaluation of technologies, meetings, specifications, reports).

 

Introduction to IRCAM:
 
IRCAM is a leading non-profit organization associated with the Centre Pompidou, dedicated to music production, R&D and education in sound and music technologies. It hosts composers, researchers and students from many countries cooperating on contemporary music production and on scientific and applied research. The main topics addressed in its R&D department include acoustics, audio signal processing, computer music, interaction technologies and musicology. IRCAM is located in the centre of Paris near the Centre Pompidou, at 1, Place Igor Stravinsky, 75004 Paris.

 

 

Salary:
According to background and experience

 

 

Applications:
Please send an application letter with the reference 201406BMRESA or 201406BMRESB together with your resume and any suitable information addressing the above issues, preferably by email, to: peeters_at_ircam dot fr with cc to vinet_at_ircam dot fr, roebel_at_ircam dot fr

 


6-4(2014-08-18) Two natural language dialogue positions at Orange, Lannion

Two openings in the NADIA natural language dialogue team at Orange Labs in Lannion.

 

- Permanent position, research engineer in data science & statistical analysis (f/m): http://orange.jobs/jobs/offer.do?joid=40611&lang=fr&wmode=light

 

- Fixed-term postdoc, study of voice and touch dialogue with connected objects in the home environment, from a 'wearable device': http://orange.jobs/jobs/offer.do?joid=40954&lang=fr&wmode=light

 

The positions are open to non-French speakers, but no English description is available.


6-5(2014-08-25) Postdoctoral positions at the LIMSI-CNRS lab, Orsay, France

Postdoctoral positions are available at the LIMSI-CNRS lab. The
positions are all one year, with possibilities of extension.  We are
seeking researchers in machine learning and natural language processing.

Topics of interest include, but are not limited to:
- Speech translation
- Bayesian models for natural language processing
- Multilingual topic models
- Word Sense Disambiguation
- Statistical Language Modeling

Candidates must possess a Ph.D. in machine learning or natural
language/speech processing. Please send your CV and/or questions to
Alexandre Allauzen (allauzen@limsi.fr) and François Yvon
(yvon@limsi.fr).

Duration: 12 months, starting Fall or Winter 2014, with a possibility
to extend for an additional 12 months.

Application deadline: Open until filled

The successful candidates will join a dynamic research team working on
various aspects of Statistical Machine Translation and Speech
Processing. For information regarding our activities, see
http://www.limsi.fr/Scientifique/tlp/mt/

About the LIMSI-CNRS:
The LIMSI-CNRS lab is situated at Orsay, a green area 25 km south of
Paris. A suburban train connects Orsay to Paris city center. Detailed
information about the LIMSI lab can be found at http://www.limsi.fr


6-6(2014-08-28) Permanent research engineer position at Orange Labs, Rennes, France

Announcement of a permanent research engineer position at Orange Labs in Rennes:

http://orange.jobs/jobs/offer.do?joid=40644&lang=fr

The position is for a Data Scientist working on the analysis of unstructured data: texts or multimedia content of various kinds (books, press, tweets, photos, audio and video podcasts...).


6-7(2014-09-09) Doctoral contract offered by the Laboratoire d'Excellence 'Empirical Foundations of Linguistics'

*** deadline extended to October 5, 2014 ***

The LabEx EFL (Empirical Foundations of Linguistics) offers a 3-year doctoral contract.

PHONETIC REDUCTION IN FRENCH

The proposed topic concerns phonetic reduction in continuous speech, intra- and inter-speaker variability, and the links between phonetic reduction, prosody and speech intelligibility.

The doctoral student will carry out their research at the LPP (Laboratoire de Phonétique et de Phonologie), a joint CNRS/Université Paris 3 Sorbonne Paris Cité research unit. See the laboratory's work on this topic at http://lpp.in2p3.fr

The selected candidate will be supervised by Martine Adda-Decker and Cécile Fougeron, CNRS Research Directors, and will be enrolled in the doctoral school ED268 of the Université Sorbonne Nouvelle.

The doctoral student will benefit from the laboratory's resources, the ED268 doctoral school, and the interdisciplinary research environment of the LabEx EFL. They will be able to attend weekly phonetics and phonology research seminars at the LPP and in other research teams, lectures given by visiting professors of international standing, training courses, conferences and summer schools.

  • Requirements

- a good command of French (spoken and written);

- successful completion of a first personal research project;

- no nationality requirement.

The candidate should have knowledge and skills in the processing of acoustic and/or articulatory data (ultrasound, video, EGG...). Knowledge of computer science and statistical analysis would be a plus.

  • Application materials
  1. a CV
  2. a cover letter
  3. the Master's thesis
  4. a letter of recommendation
  5. the names of two referees (with their email addresses)

Application deadline: September 20, 2014, extended to **OCTOBER 5, 2014**

  • Preselection on file and interviews

Preselected candidates will be interviewed in mid-October, on site or by videoconference.

Contact:

Martine Adda-Decker, CNRS Research Director

madda@univ-paris3.fr

Address for applications:

madda@univ-paris3.fr

ILPGA

19 rue des Bernardins

75005 Paris

 

Université Paris 3

 

Labex EFL: http://www.labex-efl.org/

Reference: http://www.labex-efl.org/?q=fr/node/261

 

 


6-8(2014-09-17) Post-doctoral positions at LIMSI-CNRS, Paris Saclay (Paris Sud)
Post-doctoral positions at LIMSI-CNRS, Paris Saclay (Paris Sud)
University.

LIMSI is a multi-disciplinary research unit that addresses the automatic
processing of human language for a range of tasks.

LIMSI invites applications for a one-year postdoctoral position in
Natural Language Processing.  The topic is as follows:

Dialogue management in a human-machine dialogue system where the
system plays the role of a patient during a medical consultation with
a doctor.

CONTEXT

The postdoctoral fellow will contribute to the following
project:

Patient Genesys (http://www.patient-genesys.com/): in the
framework of continuous medical education, the goal of the project is to
design and develop a framework for virtual patient-doctor consultation.
This is a collaborative project including a hospital and small and
medium enterprises.

JOB REQUIREMENTS

- Ph.D. in Computer Science, Natural Language Processing, Computational
  Linguistics
- Solid programming skills
- Strong publication record
- A good command of French is a plus
- Knowledge of medical terminologies and ontologies is a plus

ADDITIONAL INFORMATION

Net salary: between 2000 and 2400 euros per month, according to experience

Benefits: LIMSI offers a generous benefits package, including health
insurance and 44 days of vacation per year.

Duration: 12 months, renewable depending on performance and funding
availability

Start date: 1st October 2014

Location: Orsay, greater Paris area, France

TO APPLY

Please send:
* a cover letter
* a curriculum vitae, including a list of publications
* the names and contact information of at least two referees
to all of:
     Sophie Rosset (rosset@limsi.fr)
     Anne-Laure (annlor@limsi.fr)
     Pierre Zweigenbaum (pz@limsi.fr)

Application deadline:  October 15th, 2014
Applications will be examined in the following week.

ABOUT LIMSI-CNRS

LIMSI is a laboratory of the French National Center for Research (CNRS),
a leading research institution in Europe.

LIMSI is a multi-disciplinary research unit that covers a number of
fields from thermodynamics to cognition, encompassing fluid mechanics,
energetics, acoustic and voice synthesis, spoken language and text
processing, vision, visualisation and perception, virtual and augmented
reality.

LIMSI hosts about 200 researchers, professors, research support staff
and graduate students. It is located in a green area about 30 minutes
south of Paris.

6-9(2014-09-18) Post-doc position at LORIA (Nancy, France)

 

Post-doc position at LORIA (Nancy, France)

Automatic speech recognition: contextualisation of the language model by dynamic adjustment

Framework of ANR project ContNomina

The technologies involved in information retrieval from large audio/video databases are often based on the analysis of large, but closed, corpora, and on machine learning techniques and statistical modeling of written and spoken language. The effectiveness of these approaches is now widely acknowledged, but they nevertheless have major flaws, particularly where proper names are concerned, which are crucial for interpreting the content.

In the context of diachronic data (data which change over time), new proper names appear constantly, requiring dynamic updates of the lexicons and language models used by the speech recognition system.

As a result, the ANR project ContNomina (2013-2017) focuses on the problem of proper names in automatic audio processing systems, exploiting the context of the processed documents as efficiently as possible. To this end, the postdoc will address the contextualization of the recognition module through dynamic adjustment of the language model, in order to make it more accurate.

Post-doc subject

The language model of the recognition system (an n-gram model learned from a large text corpus) is available. The problem is to estimate the probability of a new proper name given its context. Several directions will be explored: adapting the language model, using a class-based model, or studying the notion of analogy.
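One of the directions mentioned, the class-based model, can be made concrete: if the language model contains a class token such as PERSON, a new name can inherit the class's n-gram probabilities, P(name | context) ≈ P(PERSON | context) · P(name | PERSON). A toy sketch (the bigram probabilities, class inventory and names below are invented for illustration, not taken from the project):

```python
# Toy sketch of class-based probability for an out-of-vocabulary proper
# name: P(word | context) ~= P(class | context) * P(word | class).
# All counts and names below are invented for illustration; a real system
# would estimate them from a large corpus.

# Bigram probabilities over a vocabulary that includes the class token.
p_bigram = {
    ("president",): {"PERSON": 0.30, "of": 0.40, "said": 0.30},
}

# Within-class distribution; a brand-new name receives a share of the
# class mass when it is added, and the existing names are renormalized.
person_members = {"obama": 0.5, "sarkozy": 0.5}

def add_person(name, prior=0.1):
    """Add a new name to the PERSON class with probability `prior`."""
    for w in person_members:
        person_members[w] *= (1.0 - prior)
    person_members[name] = prior

def p_word_given_context(word, context):
    dist = p_bigram[(context,)]
    if word in dist:                      # in-vocabulary word
        return dist[word]
    if word in person_members:            # OOV name via the class token
        return dist["PERSON"] * person_members[word]
    return 0.0

add_person("macron")
p = p_word_given_context("macron", "president")
print(f"P(macron | president) = {p:.3f}")  # 0.30 * 0.10 = 0.030
```

The appeal of this scheme for diachronic data is that adding a name touches only the within-class distribution; the n-gram probabilities themselves need no retraining.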

Our team has developed a fully automatic speech recognition system that transcribes a radio broadcast from the corresponding audio file. The postdoc will develop a new module whose function is to integrate new proper names into the language model.

Required skills

A PhD in NLP (Natural Language Processing), familiarity with tools for automatic speech recognition, a background in statistics, and programming skills (C and Perl).

Post-doc duration

12 months, starting during 2014 (there is some flexibility)

Localization and contacts

Loria laboratory, Speech team, Nancy, France

Irina.illina@loria.fr, dominique.fohr@loria.fr

Candidates should email a letter of application, a detailed CV with a list of publications, and diplomas.




6-10(2014-10-02) PhD position in Multimedia indexing at Eurecom, Sophia Antipolis, France

PhD position in Multimedia indexing at Eurecom, Sophia Antipolis, France

http://www.eurecom.fr/en/content/multimedia-indexing

DESCRIPTION

This thesis is part of a collaborative project to analyze the multimedia information
that is published on the Internet about cultural festivals, either by professionals
or by the general public. These data are diverse (text, images, videos) and published on various
sources (twitter, blogs, forums, catalogs). The project aims at analyzing and structuring
this information in order to better understand the public and cultural practices, and
at recombining it to build synthetic views of these collections. This thesis will focus
on video analysis and multimodal fusion.

This thesis will tackle two problems:
- developing techniques for the automatic analysis of video content, so that the collected
data may be sorted into predefined categories. This component will be used to structure
the collections and to better understand their content and evolution;
- studying mechanisms to recombine the multimedia content and build synthetic views of the
collections. Several strategies will be studied, depending on whether these views are intended
for professional users or for the general public.

The research will focus in particular on mechanisms for automatically constructing semantic classifiers,
and on techniques for fusing the results of these classifiers. The recombination aspects will
involve methods for selecting important segments, followed by an assembly strategy suited
to the intended objective. Specific attention will be paid to evaluation techniques for
measuring the performance of the different approaches.
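As a hypothetical illustration of how classifier results might be fused (the project does not specify a method), score-level (late) fusion combines each modality's per-category scores with a weighted sum; the modalities, categories and weights below are invented:

```python
# Minimal sketch of late (score-level) fusion: each modality's classifier
# emits a score per category, and the fused score is a weighted sum.
# Scores and weights are invented; the weights would normally be tuned
# on a validation set.

def fuse(scores_per_modality, weights):
    """scores_per_modality: {modality: {category: score}} -> fused scores."""
    fused = {}
    for modality, scores in scores_per_modality.items():
        w = weights[modality]
        for category, s in scores.items():
            fused[category] = fused.get(category, 0.0) + w * s
    return fused

scores = {
    "video": {"concert": 0.7, "street_art": 0.2, "theatre": 0.1},
    "text":  {"concert": 0.4, "street_art": 0.5, "theatre": 0.1},
}
weights = {"video": 0.6, "text": 0.4}
fused = fuse(scores, weights)
best = max(fused, key=fused.get)
print(best, round(fused[best], 2))  # concert 0.58
```

Evaluating such a fusion against each single-modality classifier on a held-out set is exactly the kind of performance measurement the paragraph above calls for.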

APPLICATION

PhD applicants are expected to have a Master's degree, with honors. This research
requires strong knowledge of signal processing and machine learning, as well as
good experience programming in Matlab and C/C++.
English is mandatory; French is a plus, but not required.

Candidates are invited to submit a resume, transcripts of grades for at least the last two years,
3 references, and a statement of motivation for the position.
Applications should be sent to the Eurecom secretariat, secretariat@eurecom.fr, with the subject
MM_BM_PhD_REEDIT_Sept2014

EURECOM

EURECOM is a leading teaching and research institution in the fields of information and communication
technologies (ICT). EURECOM is organized as a consortium combining 7 European universities
and 9 international industrial partners, with the Institut Mines-Telecom as a founding partner.
Our 3 fundamental missions are:
- Research: focused on networking and security, multimedia communications and mobile communications.
- High-level education: with graduate and postgraduate courses in communication systems, plus
three Master of Science programs entirely dedicated to foreign students.
- Doctoral program: in cooperation with several doctoral schools, supported by our research collaborations
with various partners, both industrial and academic, and funded by various sources, including national
and European programs.


6-11(2014-10-05) Experienced researcher in the field of Expressive and/or Multimodal Text-to-Speech Synthesis, Greece

Experienced researcher in the field of Expressive and/or Multimodal Text-to-Speech Synthesis

As part of its commitment to continuously reinforce its excellence and strengthen its capacities in language technologies, the Institute of Language and Speech Processing (ILSP – http://www.ilsp.gr/en) of the 'Athena' Research and Innovation Center (http://www.athena-innovation.gr/en.html) announces a position for experienced researchers in the area of:

Expressive and/or Multimodal Text-to-Speech Synthesis

The position is opened in the context of the LangTERRA project (www.langterra.eu) which is co-funded by the Seventh Framework Programme of the European Union (Grant Agreement No. 285924 FP7-REGPOT-2011-1). LangTERRA represents ILSP’s strong commitment to continuously reinforcing its excellence and strengthening its capacities and potential to excel at a European level in language technologies and related fields.

The candidates are expected to have a strong background and experience in one or more of the indicative topics in the list below:

  • Speech synthesis, analysis and feature extraction;

  • Capturing, analysing, modelling and synthesizing natural affective communication patterns, with emphasis on reproducing them in the context of speech synthesis;

  • Applications that involve handling real affective speech, such as dialog systems.

The selected experienced researchers will strengthen ILSP's research capacity by working as part of the respective ILSP team and bringing additional experience and know-how in this field. They are also expected to engage into networking with prominent research organizations throughout Europe, developing new collaborations and initiating proposals for funded research.

The required qualifications are:

  • a PhD in a field closely related to the position;

  • at least four years of full-time-equivalent recent research experience, with in-depth involvement in R&D projects;

  • a strong academic profile with high-quality publications in international conferences and journals; and

  • full professional proficiency in English.

The recruited experienced researchers will be offered a contract with ILSP. The duration of the contract will be 6 months, and it may be extended depending on the availability of the necessary resources. The indicative monthly rate for the position is 4,000 euros, depending on the status, qualifications and experience of the selected candidate.

The submitted applications must include:

  • A cover letter describing the special qualities of the applicant and her/his reasons for applying for the position;

  • A full curriculum vitae (CV), including a list of publications;

  • An electronic copy of the PhD degree; and

  • At least two recommendation letters.

The applicants should be available for a Skype interview. ILSP may come in contact with the persons providing the recommendation letters. Upon request, the applicant should be able to provide also copies of relevant diplomas and transcripts of academic records. The diplomas should be in English or Greek, else a formal translation in one of these languages should be provided.

The closing date for applications is October 24th, 2014, but the position will remain open until filled.

 

Submission:

The applications must be submitted electronically through email to both addresses below.

To: spy@ilsp.gr; vpana@ilsp.gr

The subject of the email should indicate: 'Application for LangTERRA – Expressive and/or Multimodal Text-to-Speech Synthesis'

 

Enquiries:

All enquiries related to the positions should be addressed to:

Dr. Spyros RAPTIS,
Email: spy@ilsp.gr,
Tel. +30 210 6875 -407, -300

 

More information is available at: http://www.ilsp.gr/el/news/75-jobs2/261-text2speech

 


6-12(2014-10-06) Two positions at M*Modal, Pittsburg, PA, USA

Speech Recognition Researcher

Location:  Pittsburgh, PA

Available: Immediately

 

About M*Modal:

M*Modal is a fast-moving speech technology and natural language understanding company, focused on making health care technology work better for the physicians and hospitals who depend on it every day. From speech-enabled interfaces and imaging solutions to computer-assisted physician documentation and natural language analytics, M*Modal is changing what technology can do in healthcare.

 

Position Summary:

The prospective candidate should have a working knowledge of both speech recognition and computer algorithms, and be able to learn about new technologies from original publications.
As a member of our Speech R&D team, composed of software engineers and speech recognition researchers, you will help define the algorithms and architecture used in our next generation of products.

 

Essential Functions:

The incumbent will be expected to use his/her programming skills to work with our team on research, technology development, and applications in speech recognition.

 

We offer work with:

  • Systems that make a real difference in the lives of patients and doctors

  • Language models, acoustic models, front-ends, decoders, etc., based on our own state-of-the-art modular recognition toolkit

  • Training data per speaker varying from seconds to hundreds of hours – both transcribed and unsupervised.

 

Qualifications:

  • MS or PhD in Electrical Engineering, Computer Science or related field

  • Top-level awareness of all aspects of large vocabulary speech recognition

  • In-depth understanding of several components of state-of-the-art speech recognition

  • C++, Java, Perl, or Python programming

  • Experience with both Linux and Windows

  • Experience participating in large software development projects.

  • Experience with large-scale LVCSR projects or evaluations is a definite plus

  • Enthusiasm to work directly on a real production system

 

 

If you want to be part of a thriving, innovative organization that fosters great talent, please submit your resume and salary requirements by email to anthony.bucci@mmodal.com.

 

 

 

Language Developer

Location:  Pittsburgh, PA

Available:  Immediately

 

 

 

About Us:

M*Modal is a fast-moving speech technology and natural language understanding company, focused on making health care technology work better for the physicians and hospitals who depend on it every day. From speech-enabled interfaces and imaging solutions to computer-assisted physician documentation and natural language analytics, M*Modal is changing what technology can do in healthcare.

 

Position Summary:

The prospective candidate would help to improve our speech recognition systems based on our own state-of-the-art recognizer toolkit. This involves processing text for building language models for different applications, experimenting with different types of language models, training locale-optimized acoustic models, improving pronunciation dictionaries, and testing, tuning and troubleshooting speech recognition systems. As we work with very large amounts of data, data processing is run across a Linux cluster.

 

Qualifications:

  • BS or MS degree in Computer Science or related field

  • Background in related specialty, such as linguistics, machine learning or statistics

  • Programming skills, such as Java or Python

  • Working knowledge of Linux and Windows environments

  • Working knowledge of regular expressions

  • Prior experience with automatic speech recognition systems is of course a plus

 

 

If you want to be part of a thriving, innovative organization that fosters great talent, please submit your resume and salary requirements by email to anthony.bucci@mmodal.com.

 

Back  Top

6-13(2014-10-08) Informaticien TAL à l'INSERM (CépiDc) val de Marne France

 

The CépiDc is located at the Kremlin-Bicêtre hospital (Val-de-Marne). Its main missions are to produce the national data on causes of death, to disseminate them, to assist users, and to conduct research on these data.

The CépiDc is a WHO Collaborating Centre for the Family of International Classifications (FIC) in the French language.

The Inserm Centre for Epidemiology on Medical Causes of Death (CépiDc) is recruiting:

A computer scientist in natural language processing (NLP)

Job description

Context:

The production of statistics on medical causes of death is based on the receipt of nearly 550,000 death certificates per year, of which about 6% are transmitted electronically (via www.certdc.inserm.fr). This proportion is expected to increase substantially in the near future.

Paper and electronic certificates share the same structured format, in line with the model recommended by the WHO. Although the structure of the certificate encourages physicians to separate nosological entities (diseases, morbid conditions or injuries), the text is written relatively freely and in most cases requires automatic standardization. This standardization aims to cleanly separate the nosological entities, to reconstruct their causal order, and to correct spelling mistakes. After standardization, a code from the International Classification of Diseases (ICD) is assigned to each nosological entity using an index (currently containing about 160,000 entries).
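The index-based coding step described here can be pictured as a normalization pass followed by a dictionary lookup. A minimal sketch, assuming a toy index: all entity names and ICD codes below are invented for illustration (the real index has roughly 160,000 entries).

```python
import re
import unicodedata

# Toy excerpt of an ICD index; entries and codes are illustrative only.
ICD_INDEX = {
    "infarctus du myocarde": "I21",
    "diabete": "E14",
    "pneumopathie": "J18",
}

def standardize(entity: str) -> str:
    """Lowercase, strip accents and collapse whitespace before lookup."""
    text = unicodedata.normalize("NFD", entity.lower())
    text = "".join(c for c in text if unicodedata.category(c) != "Mn")
    return re.sub(r"\s+", " ", text).strip()

def code_entities(certificate_line: str) -> list:
    """Split a certificate line into entities and assign an ICD code to each."""
    entities = [e for e in re.split(r"[;,]", certificate_line) if e.strip()]
    return [(e.strip(), ICD_INDEX.get(standardize(e), "UNCODED"))
            for e in entities]

print(code_entities("Infarctus du myocarde, diabète"))
```

In practice the hard part is upstream of the lookup: separating the entities and fixing spelling so that the standardized form actually matches an index entry.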

While the text of paper certificates is manually entered and standardized by a company external to the department, the text of electronic certificates only undergoes simple syntactic rules, which makes substantial manual processing of the text necessary before it can be exploited by Iris.

Missions

Within the production of the database of medical causes of death, the main missions of the recruit will be:

- monitoring the quality of death-certificate data entry,

- automating the processing of the medical text to speed it up and improve its quality,

- contributing to health alert surveillance.

Activities

- Manage the outsourced contract for death-certificate data entry,

- Develop rules for the automatic processing of medical text using the tools existing in the department,

- List the necessary modifications not handled by the language-processing rules offered by the existing tools,

- Participate in a review of existing natural language processing methods that could handle these modifications,

- Implement and test different natural language processing methods, maximizing the proportion of standardized text while minimizing the proportion of errors introduced by the processing,

- Update the list of expressions in the index in order to minimize its size, facilitate its maintenance, and thus make it possible to share it with other French-speaking countries.

Specific constraints

- The data processed by the CépiDc are medical in nature and strictly confidential.

Desired profile

Knowledge:

- Natural language processing (NLP) methods: formal grammars, formal syntax, automatic parsing,

- Programming languages (C, Perl, Python, etc.) and database management (SQL),

- Reading scientific English.

Know-how:

- Developing and adapting NLP methods to a new problem,

- Evaluating the performance of these methods,

- Writing methodological documentation (reports, articles),

- Managing relations with an external service provider.

Aptitudes:

- Ability to formalize text-processing problems,

- Ability to work in a team with a variety of actors (physicians, nosologists, statisticians, epidemiologists),

- Rigor,

- Initiative.

Contract offered

Fixed-term contract: full-time, 12 months, renewable

Salary: between €2,031 and €2,465 gross per month, depending on experience and level of education, with reference to the Inserm salary scales

Starting date: 01/12/2014

Education

Three to five years of higher education (BAC+3/5) in computational linguistics, specializing in natural language processing (Bachelor's, Master's, engineering school, etc.).

Desired professional experience:

Beginners accepted

To apply, please send a CV and cover letter to: Grégoire Rey

Director of the Inserm CépiDc

gregoire.rey@inserm.fr

Tel: 01 49 59 18 63

Back  Top

6-14(2014-10-11) Disney Research - Open positions for postdoc candidates and internships for PhD students, Pittsburgh, PA, USA

Disney Research - Open positions for postdoc candidates and internships for PhD students

 

Disney Research, Pittsburgh, is announcing several positions for outstanding postdoctoral candidates and internships for PhD students in areas related to speech technology, multimodal conversational systems, interactive robotics, child-robot interaction, human motion modelling, tele-presence, and wireless computing. Candidates should have experience in building interactive systems and the ability to build robust demonstrations.

 

Positions are available immediately, with flexible starting dates before the beginning of 2015. A detailed description of the positions and about Disney Research more generally is given below. Interested candidates should send an email with an up-to-date CV and any questions to drpjobs-sp@disneyresearch.com. Please make sure to use subject line: DRP-SP-2014.

 

Postdoctoral positions:

Postdoctoral positions are for 2 years. Candidates should have an outstanding research record, have published in top-tier journals and international conferences, and have shown impact on the research in their field. Candidates must have excellent command of English and a strong collaborative and team-oriented attitude. Postdoctoral positions are for one or more of the following areas:

 

  • Multimodal spoken dialogue systems
  • Adult- and child-robot interaction
  • Sensor fusion and multimodal signal processing
  • Embodied conversational agents and language-based character interaction

 

All candidates should have excellent programming skills in scripting languages and in one or more object-oriented programming language. Preferred candidates will also have strong applied machine learning skills, and experience in data collection and experiment design with human subjects.

 

Internships for PhD students:

A number of internships are available for international PhD students in one of the following areas. The positions are full-time, for 4-6 months, and available immediately. Candidates should be enrolled in a PhD program in Computer Science, Electrical Engineering, or a related discipline. Applicants must have at least one publication in a top-tier conference, have excellent written and oral communication skills, be enthusiastic and self-motivated, and enjoy collaborative teamwork.

We have opportunities for internships in a variety of fields including:

 

  • Nonverbal signal analysis and synthesis for human-like animated characters
  • Telepresence and tele-communication in human-humanoid interaction
  • Speech recognition applications for children
  • Multimodal and incremental dialogue systems
  • Kinematics, biomechanics, human motion modelling, and animatronics

 

Disney Research Labs (Pittsburgh, Boston, LA, and Zurich) provide a research foundation for the many units within the Walt Disney Company, including Walt Disney Feature Animation, Walt Disney Imagineering, Parks & Resorts, Walt Disney Studios Motion Pictures, Disney Interactive Media Group, ESPN, and Pixar Animation Studios.

 

Disney Research combines the best of academia and industry: we work on a broad range of commercially important challenges, we view publication as a principal mechanism for quality control, we encourage and help with the engagement with the global research community, and our research has applications that are experienced by millions of people.

 

Disney Research Pittsburgh is made up of a group of world-leading researchers working on a very wide range of interactive technologies. The lab is co-located with Carnegie Mellon University under the direction of Prof. Jessica Hodgins. Members of DRP are encouraged to interact with the established research community at CMU and with the business units in Los Angeles and Florida. As an active member of the research community, we support and assist with publications at top venues.

 

Disney Research provides very competitive compensations, benefits, and relocation help.

 

The Walt Disney Company is an Affirmative Action / Equal Opportunity Employer and encourages applications from members of under-represented groups.

 

http://www.disneyresearch.com

Back  Top

6-15(2014-10-11) 1 PhD/Postdoctoral position, Saarland University, Saarbruecken, Germany

1 PhD/Postdoctoral position in DFG-funded CRC “Information Density and Linguistic Encoding” (SFB 1102), Saarland University, Saarbruecken, Germany

Deadline for Applications:*Oct 31, 2014*

The DFG-funded CRC “Information Density and Linguistic Encoding” is pleased to invite applications for a PhD/post-doctoral position within the project 'B1: Information Density and Scientific Literacy in English', starting as soon as possible. If the position is not filled by October 31, later applications will be considered until it is filled.

SFB1102, B1.A: Postdoctoral researcher or PhD student, computational linguistics or computer science

A central methodological aspect of the project is to train and apply language models, as well as data mining and other machine learning techniques, to investigate the diachronic linguistic development of English scientific writing (from the 17th century to the present). The successful candidate will work on adapting, modifying and extending standard techniques from language modeling and machine learning to incorporate linguistically motivated and interpretable features.
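As a hypothetical, minimal illustration of the kind of language-model comparison such work involves (all corpora and tokens below are invented), an add-one-smoothed unigram model trained on two toy 'period' corpora can score how typical a phrase is of each period:

```python
import math
from collections import Counter

def train_unigram(corpus):
    """Add-one smoothed unigram model from a list of tokenized sentences."""
    counts = Counter(w for sent in corpus for w in sent)
    total = sum(counts.values())
    vocab = len(counts) + 1  # +1 pseudo-type for unseen words
    return lambda w: (counts[w] + 1) / (total + vocab)

def perplexity(model, sentence):
    """Per-word perplexity: lower means the sentence is more typical."""
    logp = sum(math.log2(model(w)) for w in sentence)
    return 2 ** (-logp / len(sentence))

# Toy "diachronic" corpora, invented for illustration.
early = [["the", "experiment", "was", "made"], ["it", "was", "observed"]]
modern = [["we", "observed", "the", "data"], ["the", "data", "were", "analysed"]]

m_early, m_modern = train_unigram(early), train_unigram(modern)
sent = ["the", "data"]
print(perplexity(m_early, sent), perplexity(m_modern, sent))
```

Real studies of this kind would of course use far richer models and features; the point here is only the shape of the comparison: one model per period, evaluated on held-out text.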

Requirements: The successful candidate should have a PhD/Master's in Computer Science, Computational Linguistics, or related discipline with a strong background in language modeling and machine learning (especially data mining). Good programming skills and knowledge of linguistics are strong assets. Previous shared work with linguists is desirable. A good command of English is mandatory. Working knowledge of German is desirable.

The project is headed by Prof. Dr. Elke Teich, Dr. Noam Ordan and Dr. Hannah Kermes (http://fr46.uni-saarland.de/index.php?id=teich) and carried out in close collaboration with Prof. Dr. Dietrich Klakow:
http://www.lsv.uni-saarland.de/klakow.htm

For further information on the project, see
http://www.sfb1102.uni-saarland.de/b1.php

The appointments will be made on the German TV-L E13 scale (65% for the PhD student, 100% for the postdoctoral researcher; see also http://www.sfb1102.uni-saarland.de/jobs.php). Support for travel to conferences is also available. *Priority will be given to applications received by October 31, 2014*. Any inquiries concerning the post should be directed to the e-mail address below.
Complete applications quoting “SFB1102, B1.A” in the subject line should include (1) a statement of research interests motivating why you are applying for this position, (2) a full CV with publications, (3) scans of transcripts and academic degree certificates, and (4) the names and e-mail addresses of three referees, and should be e-mailed as a single PDF to:

Prof. Dr. Elke Teich
e-mail:
e.teich@mx.uni-saarland.de

Back  Top

6-16(2014-10-13) Lecturer* in English phonetics, phonology and morpho-phonology (Maître de conferences), Paris, FR

Job announcement: lecturer* in English phonetics, phonology and morpho-phonology (Maître de conferences)

* équivalent Lecturer (UK) / Assistant Professor (USA)

Paris Diderot University will open a lecturer position in English phonetics, phonology and morpho-phonology for September 2015, pending budgetary approval.

Candidates are expected to have a Ph.D. in English Linguistics with a specialization in phonetics or phonology. Candidates should either already hold a tenured lecturer position or apply for accreditation by the French Conseil National des Universités. Note that the deadline for the first step of application for CNU accreditation is October 23rd, 2014.

Candidates should have expertise in the areas of English phonetics and phonology, with research interests in the morphology-phonology or morphology-phonetics interface. Those whose record of research relates to one or more of the following areas are particularly encouraged to apply: second language acquisition, quantitative, statistical or computational methods in linguistic research, psycholinguistic experimentation, laboratory phonology, and/or sociolinguistics. A working knowledge of French is expected.

The successful candidate is expected to join the Centre de Linguistique Interlangue, de Lexicologie, de Linguistique Anglaise et de Corpus - Atelier de recherche sur la parole (CLILLAC-ARP) and the department of English:

http://www.univ-paris-diderot.fr/EtudesAnglophones/pg.php?bc=CHVR&page=LesactivitesdeCLILLACARP

http://www.univ-paris-diderot.fr/EtudesAnglophones/pg.php?bc=CHVU&page=ACCUEIL&g=m

 

Teaching responsibilities will include undergraduate courses in phonetics, phonetic variation and intonation and graduate courses in phonetics and in the areas of the appointee's specialization. Other duties include supervision of graduate students, involvement in curricular development and in the administration of the department.

 

This position is a permanent one with a civil servant status. Salary will be in accordance with the French state regulated public service salary scale.

 

Any potential candidate should bear in mind that the deadline for registration for the national CNU “qualification” is October 23rd, 2014 on the Galaxie website of the French 'Ministère de l'Enseignement Supérieur':

https://www.galaxie.enseignementsup-recherche.gouv.fr/ensup/cand_qualification.htm

The position, if opened, will start on September 1st 2015 and the websites for applications (French Ministry of Higher Education and Université Paris Diderot) will be open in spring 2015.

 

Web Address for Applications: http://www.univ-paris-diderot.fr

Contact Information:

Prof. Agnès Celle

agnes.celle@univ-paris-diderot.fr

+33 1 57 27 58 67

Back  Top

6-17(2014-10-15) Post-doctoral position (12 months) GIPSA-lab, Grenoble, France

Post-doctoral position (12 months), GIPSA-lab, Grenoble, France

Incremental text-to-speech synthesis for people with communication disorders

 

Duration, location and staff

The position is open from January 2015 (until filled) for a duration of 12 months. The work will take place at GIPSA-lab, Grenoble, France, in the context of the SpeakRightNow project.

Researchers involved: Thomas Hueber, Gérard Bailly, Laurent Girin and Mael Pouget (PhD student).

Context

The SpeakRightNow project aims at developing an incremental text-to-speech system (iTTS) in order to improve the user experience of people with communication disorders who use a TTS system in their daily life. Contrary to a conventional TTS system, an iTTS system aims at delivering the synthetic voice while the user is typing (possibly with a delay of one word), and thus before the full sentence is available. By reducing the latency between text input and speech output, iTTS should enhance the interactivity of communication. Besides, iTTS could be chained with incremental speech recognition, in order to design highly responsive speech-to-speech conversion systems (for applications in automatic translation, silent speech interfaces, real-time enhancement of pathological voice, etc.).

The development of iTTS systems is an emerging research field. Previous work mainly focused on the online estimation of the target prosody from partial (and uncertain) syntactic structure [1], and on the reactive generation of the synthetic waveform (as in [2] for HMM-based speech synthesis). The goal of this post-doctoral position is to propose original solutions to these questions. Depending on his/her background, the recruited researcher is expected to contribute to one or more of the following tasks:

1) Developing original approaches to the problem of incremental prosody estimation, using machine learning techniques to predict missing syntactic information and drive prosodic models.

2) Implementing a prototype iTTS system on a mobile platform. The system will be adapted from the HMM-based TTS system for French currently developed at GIPSA-lab.

3) Evaluating the prototype in a clinical context, in collaboration with the medical partners of the SpeakRightNow project.
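The one-word delay mentioned in the context above can be sketched as a simple buffering loop. This is only an illustration of the scheduling idea; `synthesize` is a hypothetical placeholder callback, not the GIPSA-lab system:

```python
def incremental_tts(typed_words, synthesize):
    """Emit each word only once the next word is available (one-word
    lookahead), so the synthesizer can use right context; flush the
    remainder when the input ends."""
    buffer = []
    for word in typed_words:
        buffer.append(word)
        if len(buffer) >= 2:
            # Synthesize the older word with its right-hand neighbour as context.
            synthesize(buffer.pop(0), next_word=buffer[0])
    for word in buffer:
        synthesize(word, next_word=None)  # end of input: no lookahead left

spoken = []
incremental_tts(["bonjour", "tout", "le", "monde"],
                lambda w, next_word: spoken.append(w))
print(spoken)
```

The research questions above live precisely in what this sketch hides: with only one word of right context, the prosody of the word being synthesized must be estimated from an incomplete, uncertain syntactic structure.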

Keywords: assistive speech technology, incremental speech synthesis, prosody, machine learning, handicap.

Prerequisite: PhD degree in computer science, signal processing or machine learning. A background in HMM-based speech synthesis and/or development on iOS/Android platforms is a plus.

To apply: Applicants should email a CV along with a brief letter outlining their research background, a list of two references and a copy of their two most important publications, to Thomas Hueber (thomas.hueber@gipsa-lab.fr).

References:

[1] Baumann, T., Schlangen, D., “Evaluating prosodic processing for incremental speech synthesis,” in Proceedings of Interspeech, Portland, USA, Sept. 2012.

[2] Astrinaki, M., d'Alessandro, N., Picart, B., Drugman, T., Dutoit, T., “Reactive and continuous control of HMM-based speech synthesis,” in Proceedings of the IEEE Workshop on Spoken Language Technology, Miami, USA, Dec. 2012.

Back  Top

6-18(2014-10-20) ​Lecturer* in English phonetics, phonology and morpho-phonology (Maître de conferences), Paris, France

Lecturer* in English phonetics, phonology and morpho-phonology (Maître de conferences)

* équivalent Lecturer (UK) / Assistant Professor (USA)

Paris Diderot University will open a lecturer position in English phonetics, phonology and morpho-phonology for September 2015, pending budgetary approval.

Candidates are expected to have a Ph.D. in English Linguistics with a specialization in phonetics or phonology. Candidates should either already hold a tenured lecturer position or apply for accreditation by the French Conseil National des Universités. Note that the deadline for the first step of application for CNU accreditation is October 23rd, 2014.

Candidates should have expertise in the areas of English phonetics and phonology, with research interests in the morphology-phonology or morphology-phonetics interface. Those whose record of research relates to one or more of the following areas are particularly encouraged to apply: second language acquisition, quantitative, statistical or computational methods in linguistic research, psycholinguistic experimentation, laboratory phonology, and/or sociolinguistics. A working knowledge of French is expected.

The successful candidate is expected to join the Centre de Linguistique Interlangue, de Lexicologie, de Linguistique Anglaise et de Corpus - Atelier de recherche sur la parole (CLILLAC-ARP) and the department of English:

http://www.univ-paris-diderot.fr/EtudesAnglophones/pg.php?bc=CHVR&page=LesactivitesdeCLILLACARP

http://www.univ-paris-diderot.fr/EtudesAnglophones/pg.php?bc=CHVU&page=ACCUEIL&g=m

 

Teaching responsibilities will include undergraduate courses in phonetics, phonetic variation and intonation and graduate courses in phonetics and in the areas of the appointee's specialization. Other duties include supervision of graduate students, involvement in curricular development and in the administration of the department.

 

This position is a permanent one with a civil servant status. Salary will be in accordance with the French state regulated public service salary scale.

 

Any potential candidate should bear in mind that the deadline for registration for the national CNU “qualification” is October 23rd, 2014 on the Galaxie website of the French 'Ministère de l'Enseignement Supérieur':

https://www.galaxie.enseignementsup-recherche.gouv.fr/ensup/cand_qualification.htm

The position, if opened, will start on September 1st 2015 and the websites for applications (French Ministry of Higher Education and Université Paris Diderot) will be open in spring 2015.

 

Web Address for Applications: http://www.univ-paris-diderot.fr

Contact Information:

Prof. Agnès Celle

agnes.celle@univ-paris-diderot.fr

+33 1 57 27 58 67

Back  Top

6-19(2014-10-25) Postdoc positions in multimodal video recommendation, Aix-Marseille University, France
Postdoc positions at Aix-Marseille University in multimodal video recommendation
 
Application deadline: 11/31/2014
 
Description:
 
The ADNVIDEO project, funded in the framework of A*MIDEX (http://amidex.univ-amu.fr/en/home), aims at extending multimodal analysis models. It focuses on jointly processing the audio stream, the speech transcript, the image flow, scenes, the characterization of text overlays and user feedback.
 
Using the corpus, annotations and approaches developed during the REPERE challenge (http://defi-repere.fr) as a starting point, this project aims at going beyond single-modality indexing by incorporating information retrieval methods, not only for broadcast television but more generally for video documents requiring multimodal scene analysis. The novelty here is to combine and correlate information from different sources to enhance the qualification of content. The target application is recommendation for videos: given a video, the system finds documents (text, image and video) related to it, at either the surface level or the meaning level. In particular, the use case considered may have significant economic benefits in terms of technology transfer, regarding automatic ad targeting: automatically finding the most relevant advertising with respect to the content of a video.
 
Objectives:
 
The candidate will participate in the development of a prototype for video recommendation, leading to technology transfer towards business:
 
* Extraction of multimodal low-level descriptors. These descriptors correspond to speech, image and sound / music.
* Extraction of multimodal high-level descriptors. These semantic-oriented descriptors are extracted from low-level descriptors.
* Aggregation of multimodal descriptors to form the multimodal footprint of the video.
* Matching videos and promotional material.
* Validation of the video recommendation prototype.
* Participation to the scientific life of the lab, including paper publication.
 
The allocation of tasks can be performed depending on the skills of the candidate.
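A minimal sketch of the aggregation and matching steps listed above, under invented assumptions: toy per-modality descriptor vectors (names and numbers are illustrative, not project data) are each L2-normalized, concatenated into a multimodal footprint, and candidate ads are ranked by cosine similarity.

```python
import math

def normalize(vec):
    """L2-normalize one modality's descriptor vector."""
    n = math.sqrt(sum(x * x for x in vec))
    return [x / n for x in vec] if n else vec

def footprint(audio, text, image):
    """Concatenate per-modality descriptors, each normalized so that
    no single modality dominates the multimodal footprint."""
    return normalize(audio) + normalize(text) + normalize(image)

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b)) / (
        math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Hypothetical descriptors for one video and two candidate ads.
video = footprint([0.9, 0.1], [0.2, 0.8, 0.1], [0.5, 0.5])
ads = {"ad_sport": footprint([0.8, 0.2], [0.1, 0.9, 0.2], [0.4, 0.6]),
       "ad_news":  footprint([0.1, 0.9], [0.9, 0.1, 0.0], [0.9, 0.1])}
best = max(ads, key=lambda name: cosine(video, ads[name]))
print(best)
```

Per-modality normalization before concatenation is one simple design choice for fusing heterogeneous descriptors; the project's actual models (high-level semantic descriptors, learned weightings) would be considerably richer.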
 
Skills:
 
For this project, we are looking for two candidates with a PhD degree in the areas of information retrieval, natural language processing, machine learning or video analysis:
 
* Strong programming skills (C++, Java, Python, etc.)
* Desire to produce functioning end-to-end systems and full-scale live demos
* Scientific rigor
* Imagination
* Top notch publications
* Excellent communication skills
* Enjoy teamwork
 
Candidates must presently work outside of France.
 
Location:
 
University of Aix-Marseille, LIF (http://www.lif.univ-mrs.fr) and LSIS laboratories (http://www.lsis.org) and the company Kalyzee (http://www.kalyzee.com).
 
Contact:
 
 
Candidates should email a letter of application, a detailed CV including a complete list of publications, and source code showcasing programming skills.
Back  Top

6-20(2014-12-05) R/D voice INTER GmbH in Dresden , Germany

voice INTER connect GmbH, with head office in Dresden, Germany, is a medium-sized enterprise with over 10 years' experience in the field of embedded signal processing and speech control in electronic devices. For well-known customers we offer innovative high-tech products and customised development and integration services.

We are looking for a Research & Development Engineer with specialisation in the field of speech recognition to join our team at the next possible date.

Your Tasks

 

*Development, customisation and maintenance of our own products within the area of speech recognition

*Integration and customisation of third party speech technology

*Collaboration in projects concerning automatic speech recognition and speech to text

*Development and functional testing of algorithms in HMM- or FSM-based speech recognizers

*Evaluation of Large Vocabulary Continuous Speech Recognition Systems (LVCSR) and Client-Server-based speech recognition solutions

Your Profile

 

 

*Master's or PhD degree in electrical engineering or computer science, with a specialisation in speech communication

*Detailed knowledge of speech recognition architecture and technologies proved by a diploma or doctoral thesis in the area of speech technology

*Knowledge of the current literature and state of science and technology within the area of speech processing and speech recognition

*Excellent knowledge and experience in the programming languages C and C++, focused on audio and speech processing in PC and embedded environments (Windows, Linux, embedded operating systems)

*Knowledge of architecture, evaluation strategies, tools and quality criteria of LVCSR and client-server-based speech recognition solutions

*Experiences in dialogue design and NLU processing

*Preferably prior knowledge and deep interest in semantic processing and machine learning

*Preferably prior knowledge in customer-specific project work and team-oriented software development

*Very good skills in mathematics and physics

*Pleasure in transferring algorithms into practical solutions and in realisation of ambitious algorithmic tasks

*Pleasure in overcoming practical hurdles for speech technology like noise, reverberation and echoes

*Independence and initiative

*Creative, solution-oriented way of working and quick learner

 

We offer

At voice INTER connect you will find interesting opportunities for personal fulfilment in a growth-orientated, innovative company and a creative, highly motivated and open-minded team. Besides ambitious projects for international customers we offer the opportunity for further education by attending conferences, lectures and courses.

Your qualification and performance are rewarded by an appropriate remuneration.

If you are interested in joining our team, please apply via post or (preferred) e-mail:

voice INTER connect GmbH, Ammonstr. 35, 01067 Dresden or jobs@voiceinterconnect.de

Back  Top

6-21(2014-12-05) Master thesis with available Ph.D scholarship ar GIPSA-Lab, Grenoble, France

Master thesis with available Ph.D scholarship:

Efforts and coordination of speech gestures

Disciplines: Acoustic phonetics, Biomechanics, Physiology, Cognition

Laboratory: GIPSA-lab, Grenoble

Supervision: Maëva Garnier, Pascal Perrier, Franck Quaine

Contact: maeva.garnier@gipsa-lab.grenoble-inp.fr / +33 4 76 57 50 61

Context: This Master thesis, and its possible continuation as a Ph.D. thesis, is part of the ANR project StopNCo, dealing with the characterization and understanding of the physiological efforts and the gesture coordination involved in speech production (see footnote 1). Stop consonants (/p/, /t/, /k/, /b/, /d/ or /g/) are of particular interest for the study of speech motor control, as they require a precise coordination of breathing, laryngeal and articulatory gestures, in both force and timing.

General questions: Stop consonants are created by an occlusion of the vocal tract that can occur at 3 different “places of articulation” in French: at the lips (for /p/ and /b/), just behind the teeth (for /t/ and /d/) or at the back of the palate (for /k/ and /g/) (see Figure). The release of this occlusion creates a short explosion noise (or “burst”) and a quick variation in frequency of the vocal tract resonances (“formant transients”). These acoustic features differ significantly between the 3 places of articulation.

The objectives of the project are to characterize and to model:

1. by which coordination of breathing and articulatory gestures we control the fine variation of these acoustic cues (burst spectrum and formant transients);

2. how these cues are modified when speakers speak more clearly and try to enhance the perceptual contrast between these 3 places of articulation;

3. how this control develops in children and can dysfunction in some of them;

4. how this control can vary in efficiency, i.e. in the ratio between the acoustic outcomes and the physiological efforts.

Master project: Development and test of methodologies to measure lip and tongue articulation efforts

The first step of the project will consist in implementing new methodologies to measure lip and tongue articulation efforts, using surface electromyography (EMG), force sensors and electromagnetic articulography (EMA) (see next figure).

[1] See http://www.agence-nationale-recherche.fr/en/anr-funded-project/?tx_lwmsuivibilan_pi2%5BCODE%5D=ANR-14-CE30-0017

Multiple EMG electrodes will be placed around the lips to characterize the muscle activity in different speech movements and to find global descriptors of the degree of articulation effort and fatigue. Force sensors will be used, searching for their optimal number and position on the lips and palate. We will also try to characterize tongue and lip stiffness in order to take it into account in the calibration of the force measurements. Finally, the articulation force estimated by these two methodologies will be compared with the velocity peaks of tongue and lip movements measured with EMA, as well as with the subject's perceptual self-evaluation of the effort spent.

Possible continuation as a Ph.D.

The Ph.D. thesis will build on these methodologies to characterize the coordination of breathing, laryngeal and articulatory gestures in the production of stop consonants by healthy adult speakers. A large database of synchronous physiological and acoustic signals will be recorded from several speakers, in controlled laboratory conditions, and for a variety of voice qualities and efforts (whisper to shout, slow to fast speech rate, etc.). Using statistical data processing and mapping techniques, you will establish a functional model able to predict the variation of acoustic outcomes from the covariation of physiological parameters.
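As a deliberately simplified illustration of such a mapping (the data below are synthetic and the variable names hypothetical; the real model would be multivariate and far richer), an ordinary least-squares fit from one physiological parameter to one acoustic outcome can be sketched as:

```python
# Sketch: least-squares linear map from a physiological parameter to an
# acoustic outcome. Synthetic data; illustrative only.

def fit_linear(x, y):
    """Ordinary least squares for y ~ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return a, my - a * mx

effort = [0.2, 0.5, 0.8, 1.1]       # hypothetical EMG effort index
burst = [54.1, 60.2, 65.9, 72.0]    # hypothetical burst level (dB)
a, b = fit_linear(effort, burst)

def predict(x):
    """Predicted acoustic outcome for a new physiological measurement."""
    return a * x + b
```

The statistical mapping techniques mentioned in the project would generalize this idea to many physiological inputs and acoustic outputs at once.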

In a second step, an experiment will be conducted in a more realistic and interactive situation of face-to-face communication. You will explore how speakers modify their production of stop consonants when they communicate in noisy or reverberant environments, and how they consequently modify the coordination and the effort of their speech gestures, in comparison with casual speech.

Collaborations

The project will take place at GIPSA-lab in Grenoble, under the co-supervision of Maëva Garnier (expert in speech and cognition), Pascal Perrier (expert in speech motor control) and Franck Quaine (expert in biomechanics and EMG signals), in close relationship with the medical field (a dentist and a maxillofacial surgeon).

The Master's and Ph.D. theses belong to a larger project involving a second team that works on laryngeal efforts (including an ENT specialist) and a third team working on the development of this coordination in children (including speech therapists from Grenoble's hospital). These two teams will use the methodologies developed during the Master's thesis and the beginning of the Ph.D. thesis, and will bring complementary information to the functional model of stop consonant production.

During the Ph.D. thesis, we envisage sending the candidate to Italy for about 3 months for a collaboration on high-density EMG matrices.

Skills: We are looking for an open-minded student with a main expertise in engineering (biomechanics, signal processing and/or acoustics) but with a strong interest in human-related questions (physiology, cognition, speech sciences). Programming skills (Matlab) will be appreciated, as well as an experimental approach.

Indemnities: €400 per month during the 6 months of the Master's thesis; ~€1400 per month during the 3 years of the Ph.D. fellowship.


6-22(2014-12-06) M2 research internship: Connectionist models for natural language generation in vocal interaction, LIA, Avignon, France

M2 research internship: Connectionist models for natural language generation in the context of vocal interaction

Duration: 6 months
Start: February-March 2015
Location: Laboratoire Informatique d'Avignon
Supervisors: Bassam Jabaian, Stéphane Huet and Fabrice Lefèvre

Internship description:

Spoken interaction systems, used in applications such as booking flights or hotels, or dialoguing with a robot, involve several components. Among them is the text generation module, which produces the system's response in natural language from an internal semantic representation created by the dialogue manager.

Current dialogue systems integrate generation modules based on hand-written template rules, e.g.:
confirm(type=$U, food=$W, drinks=dontcare) → Let me confirm, you are looking for a $U serving $W food and any kind of drinks, right?
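As a toy sketch of this kind of template-based generation (the template key and slot values below are illustrative, not taken from an actual system), the rule amounts to a lookup plus slot substitution:

```python
# Toy sketch of template-based generation: a dialogue act is mapped to a
# hand-written template, then slot values are substituted. The template key
# and slot names are illustrative, not taken from an actual system.

TEMPLATES = {
    "confirm(type,food,drinks=dontcare)":
        "Let me confirm, you are looking for a {type} serving {food} food "
        "and any kind of drinks, right?",
}

def generate(act, slots):
    """Render a dialogue act and its slot values into a natural language reply."""
    return TEMPLATES[act].format(**slots)

print(generate("confirm(type,food,drinks=dontcare)",
               {"type": "restaurant", "food": "Indian"}))
```

A learned generator, as targeted by the internship, would replace the hand-written TEMPLATES table with a model trained on pairs of semantic representations and sentences.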

These modules would benefit from machine learning methods, both to ease the porting of dialogue systems to new tasks and to improve the diversity of the generated exchanges. Among machine learning methods, neural networks have seen renewed interest since the rise of deep learning. Such networks have already been used by Google for a similar task, the generation of image descriptions (http://googleresearch.blogspot.fr/2014/11/a-picture-is-worth-thousand-coherent.html). The objective of this internship is to study the use of these models in the context of vocal interaction.

While an interest in machine learning and natural language processing is desirable, the intern is above all expected to have good software development skills.

To apply: send an email with a CV and a cover letter to bassam.jabaian@univ-avignon.fr

 

6-23(2014-12-06) 2 Post-doc positions at the Italian Institute of Technology.

New techniques for vision-assisted speech processing


BC: 69721 (draft)

BC: 69724 (draft)

Istituto Italiano di Tecnologia (http://www.iit.it) is a private Foundation with the objective of promoting Italy's technological development and higher education in science and technology. Research at IIT is carried out in highly innovative scientific fields with state-of-the-art technology.

iCub Facility (http://www.iit.it/icub) and Robotics, Brain and Cognitive Sciences (http://www.iit.it/rbcs) Departments are looking for 2 post-docs to be involved in the H2020 project EcoMode funded by the European Commission under the H2020-ICT-2014-1 call (topic ICT-22-2014 – Multimodal and natural computer interaction).

Job Description: Robust automatic speech recognition in realistic environments for human-robot interaction, where speech is noisy and distant, is still a challenging task. Vision can be used to increase speech recognition robustness by adding complementary speech-production related information. In this project visual information will be provided by an event-driven (ED) camera. ED vision sensors transmit information as soon as a change occurs in their visual field, achieving incredibly high temporal resolution, coupled with extremely low data rate and automatic segmentation of significant events. In an audio-visual speech recognition setting ED vision can not only provide new additional visual information to the speech recognizer, but also drive the temporal processing of speech by locating (in the temporal dimension) visual events related to speech production landmarks.

The goal of the proposed research is to exploit highly dynamic information from ED vision sensors for robust speech processing. The temporal information provided by ED sensors will make it possible to experiment with new models of speech temporal dynamics based on events, as opposed to the typical fixed-length segments (i.e. frames).

In this context, we are looking for 2 highly motivated post-docs, respectively tackling vision (Research Challenge 1) and speech processing (Research Challenge 2), as outlined below:

Research Challenge 1 (vision @ iCub Facility – BC 69721): the post-doc will mostly work on the detection of features from event-driven cameras instrumental for improving speech recognition (e.g. lip closure, protrusion, shape, etc.). The temporal features extracted from the visual signal will be used for crossmodal event-driven speech segmentation that will drive the processing of speech. To increase robustness to acoustic noise and atypical speech, acoustic and visual features will be combined to recover phonetic gestures of the inner vocal tract (articulatory features).

Research Challenge 2 (speech processing @ RBCS – BC 69724): the post-doc will mainly develop a novel speech recognition system based on visual, acoustic and (recovered) articulatory features, targeted at users with mild speech impairments. The temporal information provided by ED sensors will make it possible to experiment with new strategies to model the temporal dynamics of normal and atypical speech. The main outcome of the project will be an audio-visual speech recognition system that robustly recognizes the most relevant commands (key phrases) delivered by users to devices in real-world usage scenarios.

The resulting methods for improving speech recognition will be exploited for the implementation of a tablet with robust speech processing. Given the automatic adaptation of the speech processing to the speech production rhythm, the speech recognition system will target speakers with mild speech impairments, specifically subjects with atypical speech flow and rhythm, typical of some disabilities and of the ageing population. The same approach will then be applied to the humanoid robot iCub to improve its interaction with humans in cooperative tasks.

Skills: We are looking for highly motivated people and inquisitive minds with the curiosity to use a new and challenging technology that requires a rethinking of visual and speech processing to achieve a high payoff in terms of speed, efficiency and robustness. The candidates we are looking for should also have the following additional skills:
  • a PhD in Computer Science, Robotics, Engineering (or equivalent) with a background in machine learning, signal processing or related areas;
  • the ability to analyze, improve and propose new algorithms;
  • good knowledge of the C and C++ programming languages, with proven experience.
Team-work, PhD tutoring and general lab-related activities are expected.

An internationally competitive salary depending on experience will be offered.

Please note that these positions are pending the signature of the grant agreement with the European Commission (expected start date in early 2015).


How to apply:
Challenge 1: Send applications and informal enquiries to jobpost.69721@icub.iit.it
Challenge 2: Send applications and informal enquiries to jobpost.69724@rbcs.iit.it

The application should include a curriculum vitae listing all publications and PDF files of the most representative publications (maximum 2). If possible, please also indicate three independent reference persons.

Presumed Starting Date:
Challenge 1: January 2015 (but later starts are also possible).
Challenge 2: June 2015 (but later starts are also possible).
Evaluation of the candidates starts immediately and officially closes on November 10th, 2014, but will continue until the position is filled.

References:
Lichtsteiner, P., Posch, C., & Delbruck, T. (2008). A 128×128 120 dB 15 μs latency asynchronous temporal contrast vision sensor. IEEE Journal of Solid-State Circuits, 43(2), 566-576.
Rea, F., Metta, G., & Bartolozzi, C. (2013). Event-driven visual attention for the humanoid robot iCub. Frontiers in Neuroscience, 7.
Benosman, R., Clercq, C., Lagorce, X., Ieng, S.-H., & Bartolozzi, C. (2014). Event-based visual flow. IEEE Transactions on Neural Networks and Learning Systems, 25(2), 407-417. doi: 10.1109/TNNLS.2013.2273537
Potamianos, G., Neti, C., Gravier, G., Garg, A., & Senior, A. W. (2003). Recent advances in the automatic recognition of audiovisual speech. Proceedings of the IEEE, 91, 1306-1326.
Glass, J. (2003). A probabilistic framework for segment-based speech recognition. Computer Speech and Language, 17, 137-152.
Badino, L., Canevari, C., Fadiga, L., & Metta, G. (2012). Deep-level acoustic-to-articulatory mapping for DBN-HMM based phone recognition. In IEEE SLT 2012, Miami, Florida.

6-24(2014-12-06) Post-Doc/Research position in speech intelligibility enhancement, Telecom Paris Tech,France

 

Post-Doc/Research position in speech intelligibility enhancement

Place: TELECOM ParisTech, Paris, France
Duration: 1 year
Start: any date from January 1st, 2015

Salary: according to background and experience

*Position description*

The position is supported by the national project ANR-AIDA [1], a 3-year project started in October 2013.

[1] ANR-AIDA: « Automobile Intelligible pour Déficients Auditifs » - http://aida.wp.mines-telecom.fr/

The aim of the project is to provide new means of improving the speech intelligibility of audio messages in the car environment, while giving users the possibility to tune the system to their own hearing deficiency. In this context, the role of the post-doc/researcher will consist in conducting forefront research in speech processing and in directly participating in the project by building flexible software modules for speech intelligibility enhancement. The field of speech enhancement mainly aims at suppressing the noise in noisy speech signals, but does not, in general, significantly enhance speech intelligibility. In this project, we are mainly interested in enhancing intelligibility in noisy conditions while taking into account specific hearing deficiencies. The developed approaches will therefore lie at the crossroads of denoising and voice transformation.

*TELECOM ParisTech*

TELECOM ParisTech is the leading graduate school of Institut TELECOM, with more than 160 research professors, 20 associated CNRS researchers, and over 250 engineering degrees, 50 PhDs and 150 specialized masters (postgraduate) awarded per year. The Audio, Acoustics and Waves research group (AAO), headed by Prof. Gaël Richard, gathers 6 permanent staff members, 3 post-docs and over 12 PhD students. The group has developed strong expertise in audio signal processing (source separation, indexing, Audio3D compression, dereverberation/denoising, ...). More information at http://www.tsi.telecom-paristech.fr/aao/en/

Candidate Profile

As minimum requirements, the candidate will have:

- A PhD in speech or audio signal processing, acoustics, machine learning, computer science, electrical engineering, or a related discipline.

- Some knowledge in speech signal processing.

- Good programming skills in Python or Matlab, C/C++.

The ideal candidate would also have:

- Solid knowledge of speech processing and voice transformation techniques.

- Ability to work in a multi-partner and national collaborative environment.

- Some knowledge of Max/MSP.

Contacts: Interested applicants can contact Gaël Richard or Bertrand David for more information, or directly email a candidacy letter including a curriculum vitae, a list of publications and a statement of research interests.

- Gael Richard (Gael.Richard@telecom-Paristech.fr) ; +33 1 45 81 73 65

- Bertrand David (firstname.lastname@Telecom-ParisTech.fr).


6-25(2014-12-06) PhD Position in Second Language Speech Segmentation at the University of Kansas

PhD Position in Second Language Speech Segmentation at the University of Kansas

 

The Second-Language Processing and Eye-Tracking (L2PET) lab, directed by Dr. Annie Tremblay, in the Department of Linguistics at the University of Kansas, invites applications for a Ph.D. position in second-language speech segmentation. This Ph.D. position includes 3 years of graduate research assistantship funded by an NSF research grant awarded to Dr. Tremblay and 2 years of graduate teaching assistantship provided by the Department of Linguistics.

 

Research in the L2PET lab focuses on second-language speech processing and spoken-word recognition using behavioral methods such as eye tracking, word monitoring, priming, and artificial language speech segmentation (for details, see https://sites.google.com/site/l2petlab/home). The NSF-funded research investigates the influence of the native language and of recent linguistic exposure on adult second language learners' use of prosodic cues, specifically pitch, in speech segmentation. Its aims are to determine how the similarities and differences between the native language and second language affect adult second language learners' ability to use prosodic cues in speech segmentation, and whether the speech processing system is sufficiently adaptive to develop sensitivity to new segmentation cues (for details, see http://www.nsf.gov/awardsearch/showAward?AWD_ID=1423905&HistoricalAwards=false). This project is carried out in collaboration with scholars from the Netherlands, France, and South Korea.

 

Applicants should have a strong background in Linguistics, Psychology, Cognitive Science, or Speech and Hearing Sciences. Knowledge of Korean and experience with experimental research are strongly preferred. Questions about this position should be directed to Dr. Tremblay (atrembla@ku.edu). Applications should be submitted through the University of Kansas application system (https://linguistics.ku.edu/admission#tab2name). More information about admission requirements can be found at https://linguistics.ku.edu/admission. The deadline for applications is January 1st, 2015. Start date for this Ph.D. position is August 24th, 2015.

 

Linguistics at the University of Kansas

 

The Department of Linguistics at the University of Kansas has undergone significant changes in the past decade to position itself as a unique program that unites linguistic theory and experimental research. The department has particular strengths in experimental phonetics and phonology, first and second language acquisition, developmental psycholinguistics, second language psycholinguistics and neurolinguistics, the cognitive neuroscience of language, linguistic fieldwork, and theoretical syntax and semantics. Faculty members and graduate students study a broad range of languages including understudied language varieties in Asia, Africa, and the Americas. The department has six active research labs, which have all successfully competed for external funding and provide support for graduate studies. The department has both head-mounted and remote eye trackers, an EEG laboratory, and on the KU medical center campus, cortical MEG, fetal MEG, and MRI systems.





6-26(2014-12-05) Master 2 internship, Nancy, France

Speech intelligibility: which measure for determining the degree of nuisance
----------------------------------------------------------------------------

Supervisors: Irina Illina (LORIA), Patrick Chevret (INRS)

Address: LORIA, Campus Scientifique - BP 239, 54506 Vandoeuvre-lès-Nancy
Tel: 03 54 95 84 90
Office: C147, LORIA
Email: illina@loria.fr

INRS, Avenue de Bourgogne, 54500 Vandoeuvre-lès-Nancy
Tel: 03 83 50 20 00
Email: patrick.chevret@inrs.fr
 
Motivations
-----------
Speech intelligibility refers to the ability of a conversation to be understood by a nearby listener. The level of speech intelligibility depends on several criteria: the ambient noise level, the possible absorption of part of the sound spectrum, acoustic distortions, echoes, etc. Speech intelligibility is used to evaluate the performance of telecommunication systems, of public address systems in rooms, or even of people.

Speech intelligibility can be evaluated:
- subjectively: listeners hear several words or sentences and answer various questions (transcription of the sounds, percentage of consonants perceived, etc.); the scores obtained constitute the intelligibility value;
- objectively, without involving listeners, using acoustic measures: the Speech Transmission Index (STI) and the speech interference level.

Subjective measures depend on the listeners and require a large number of them. This is difficult to achieve, especially when there are several types of environments and the measure must be evaluated for each of them. Objective measures have the advantage of being quantifiable automatically and precisely. However, to what extent objective measures can assess the nuisance of the environment on speech intelligibility and on people's health remains an open problem. For example, the STI is based on measuring the modulation of energy; but energy modulation can also be produced by machines, even though that does not correspond to speech.
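The STI family of measures quantifies energy modulation. Here is a deliberately simplified sketch (not the full IEC 60268-16 STI computation) of the quantity at its heart: the modulation index of an intensity envelope at one modulation frequency, probed with a single Fourier bin. A fully modulated envelope scores near 1; a flat, steady envelope scores 0:

```python
import math

# Simplified sketch of the quantity behind STI: the modulation index of an
# intensity envelope at one modulation frequency, probed with a single
# Fourier bin. This is NOT the full IEC 60268-16 STI computation.

def modulation_index(envelope, fs, fm):
    """Modulation depth of an intensity envelope (sampled at fs Hz)
    at modulation frequency fm (Hz)."""
    n = len(envelope)
    mean = sum(envelope) / n
    re = sum(e * math.cos(2 * math.pi * fm * k / fs)
             for k, e in enumerate(envelope))
    im = sum(e * math.sin(2 * math.pi * fm * k / fs)
             for k, e in enumerate(envelope))
    return 2 * math.sqrt(re * re + im * im) / (n * mean)

fs = 100                                    # envelope sampling rate (Hz)
# 1 s of envelope fully modulated at 4 Hz, a typical speech syllable rate:
modulated = [1 + math.cos(2 * math.pi * 4 * k / fs) for k in range(fs)]
steady = [1.0] * fs                         # flat, machine-like envelope
```

The point of the example is the one made above: a fully modulated 4 Hz envelope scores near 1 whether it comes from speech or from a machine; the index alone does not distinguish the source.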

Subject
-------
In this internship, we are interested in studying different objective measures of speech intelligibility. The goal is to find reliable measures for evaluating the level of nuisance of the ambient environment on speech comprehension, on people's long-term mental health, and on their productivity. Possible directions include correlating word confidence measures and noise confidence measures with subjective measures of speech intelligibility. An automatic speech recognition system will be used to compute these measures.

This internship is part of a collaboration between our Multispeech team and INRS (Institut National de Recherche et de Sécurité). INRS works on the identification of occupational risks, the analysis of their consequences on health, and their prevention. INRS has a rich corpus of recordings and of subjective speech intelligibility measures, which will be used in this internship. The Multispeech team has extensive expertise in signal processing and has developed several methodologies for noise and sound level estimation, as well as a complete automatic speech recognition system.
                  
Required skills
---------------
A good background in statistics and signal processing; proficiency in the C programming language, object-oriented programming, and Perl.


 


6-27(2014-12-10) MIT Postdoctoral Associate Opportunity,Cambridge, MA, USA
MIT Postdoctoral Associate Opportunity

The Spoken Language Systems Group in the MIT Computer Science and Artificial Intelligence Laboratory is seeking a Postdoctoral Associate to participate in research and development of deep learning methods applied to the problems of multilingual speech recognition, dialect, and speaker recognition. The position is expected to start in early 2015 for a one-year period, with the possibility of extension.

A Ph.D. in Computer Science or Electrical Engineering is required. Candidates must have at least four years of hands-on, computer-based experience in algorithm and system development in speech processing related to speech recognition, speaker or language recognition, and must have strong programming skills (especially, but not limited to, C++ and Python) and familiarity with the Linux environment. They must be able to work both independently and cooperatively with others, and have good communication and writing skills.

Interested candidates should send their CVs to Jim Glass, glass@mit.edu

6-28(2014-12-08) INGENIEUR EN TECHNIQUES EXPERIMENTALES, Aix en Provence, France

Position open for transfer:

ENGINEER IN EXPERIMENTAL TECHNIQUES

Maison de la Recherche - Aix-en-Provence - UFR ALLSH (Arts, Lettres, Langues et Sciences Humaines)

http://www.paca.biep.fonction-publique.gouv.fr/common/jobSearch/showList

MAIN MISSIONS AND ACTIVITIES:

- Collection and processing of audio-visual, behavioral and physiological data.
- Programming, coordination and carrying out of studies (experimental, qualitative, quantitative, online, observational, ...).
- Processing of experimental, quantitative and qualitative data.
- Training in the techniques and use of the experimental devices; advice to users on their implementation in compliance with usage standards.
- Technology watch.
- Coordination of relations at the interfaces; organization of information exchange with the specialists of the technical domains involved in the experiments.
- Writing of technical specification, design and implementation documents, and of the user manuals associated with the devices.
- Organization and monitoring of preventive maintenance and repair interventions.

 

 

 

 

 


6-29(2014-12-12) 2 POSTDOC positions in Machine Learning for Machine translation, LIMSI, Orsay, France

2 POSTDOC positions in Machine Learning for Machine translation
at LIMSI-CNRS, Orsay (Paris area), FRANCE

Two postdoctoral positions are available at LIMSI-CNRS. Both positions are for one year, with the possibility of extension. We are seeking researchers with interests in machine learning applied to statistical machine translation, automatic speech recognition or computational linguistics.

Topics of interest include, but are not limited to:
- Bayesian models for natural language processing
- Multilingual topic models
- Word Sense Disambiguation
- Statistical Language Modeling
- Speech translation

Candidates must have (almost) completed a Ph.D. in machine learning or
natural language/speech processing and have a strong track record of
successful implementation and publication in natural language processing or
machine learning.

Please send your CV and/or further inquiries to François Yvon
(yvon@limsi.fr).

Duration: 24 or 36 months, starting early 2015.

Application deadline: Open until filled

The successful candidates will join a dynamic research team working on
various aspects of Statistical Machine Translation and Speech
Processing. For information regarding our activities, see
http://www.limsi.fr/Scientifique/tlp/mt/

About the LIMSI-CNRS:
The LIMSI-CNRS lab is situated at Orsay, a green area 25 km south of
Paris. A suburban train connects Orsay to Paris city center. Detailed
information about the LIMSI lab can be found at http://www.limsi.fr


6-30(2014-12-12) Internship at Orange Labs, Lannion, France

Internship: Keyword-based exploration of multimedia document collections

 

General context:

Automatic analysis of the audio component of audiovisual content, in order to propose new ways of exploring this content.

 

Internship missions:

Unsupervised keyword extraction from a collection of contents brings out the words (or expressions) that are relevant in a given content, relative to the other contents of the collection. Statistical association metrics additionally make it possible to measure a degree of association between two words in the collection.

Building on an existing module that performs these two processes of keyword extraction and keyword association, the objectives of the internship are:

  • to study an adaptation process for efficiently recomputing the statistics of a collection when a new content is added to it;
  • to study to what extent exploiting chaptering can enable a finer analysis and the extraction of relevant keywords within a chapter of a collection;
  • to propose and compare methods for the objective evaluation of the results obtained.
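To fix ideas, here is a minimal sketch (toy corpus and whitespace tokenization; the existing module is assumed to be far more elaborate) of the two processes described above: TF-IDF as an unsupervised keyword-relevance score, and pointwise mutual information (PMI) as one possible word-association metric:

```python
import math
from collections import Counter

# Toy sketch: TF-IDF as an unsupervised keyword relevance score, and PMI as
# one possible word-association metric. Corpus and tokenization are
# illustrative only.

docs = [
    "speech recognition of broadcast news".split(),
    "keyword extraction from broadcast archives".split(),
    "neural speech synthesis".split(),
]

def tf_idf(doc, collection):
    """Score each word of `doc` by frequency times collection specificity."""
    n = len(collection)
    return {w: (c / len(doc)) * math.log(n / sum(w in d for d in collection))
            for w, c in Counter(doc).items()}

def pmi(w1, w2, collection):
    """Pointwise mutual information of two words co-occurring in a document."""
    n = len(collection)
    p1 = sum(w1 in d for d in collection) / n
    p2 = sum(w2 in d for d in collection) / n
    p12 = sum(w1 in d and w2 in d for d in collection) / n
    return math.log(p12 / (p1 * p2)) if p12 > 0 else float("-inf")
```

For the adaptation objective, note that the document-frequency and co-occurrence counts underlying both scores can be updated in place when a new content is added, so the statistics need not be recomputed from scratch.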

 

Host team:

The CONTENT/FAST team at Orange Labs is in charge of research and development in Natural Language Processing (NLP) applied to written documents and to text obtained from speech (semantic analysis, information extraction, natural language queries, etc.), and in the field of knowledge bases.

 

Profile:

- Master 2 in Natural Language Processing or Computer Science

 

Contact : Géraldine Damnati (geraldine.damnati@orange.com)

 

Paid internship, in Lannion, lasting 5 months starting in February or March 2015.

 

Remuneration: between €1300 and €1700 gross per month, depending on the candidate's profile.

 

To view the offer on Orange Jobs: http://orange.jobs/jobs/offer.do?do=fiche&id=43874

 

-------------------------

Géraldine DAMNATI

Orange/IMT/OLPS/OPENSERV/CONTENT/FAST

2 av. Pierre Marzin, 22307 Lannion

+33 2 96 05 13 88


6-31(2014-12-16) Intersship at Orange Lab, France

 

Internship opportunity at Orange Labs

Incomplete requests management in human/machine dialogue.

Entity: Orange Labs.

Department/Team: CRM&DA/NADIA.

Duration: 6 months.

Contact: Hatim KHOUZAIMI (hatim.khouzaimi@orange.com)

About our team:

Orange Labs is the Research and Development division of Orange, the leading telecommunication company in France. The mission of the CRM&DA department (Customer Relationship Management & Data Analytics) is to invent new solutions to improve the company’s interactions with its customers by using data analysis techniques. You will be part of NADIA (Natural DIAlogue interaction), which is one of the teams composing CRM&DA and whose mission is to develop and maintain a human/machine dialogue solution, which is already widely used by customers.

Your mission:

Thanks to recent improvements in Automatic Speech Recognition (ASR) technology, research in the field of Spoken Dialogue Systems (SDSs) has been very active in recent years. The main challenge is to design user-initiative dialogue strategies, where the user can use natural language to utter complex requests carrying a lot of information, as opposed to system-initiative ones, where the request is entered chunk by chunk. However, due to the user's unfamiliarity with the system and the noise induced by the ASR module, the request captured by the system is often incomplete, and hence rejected. The objective of this internship is to find solutions for detecting whether a request is incomplete rather than incorrect and, if it is, for extracting the partial information. This information will later be used by the Dialogue Manager module to ask the user to supply what is missing.
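One simple way to frame the problem (a hedged sketch; the intent name and slot inventory below are illustrative, not taken from Orange's actual dialogue solution) is to treat a captured request as a partial semantic frame and call it incomplete when required slots are missing rather than contradictory:

```python
# Sketch: a request as a partial semantic frame; "incomplete" means required
# slots are missing, not contradictory. Intent and slot names are
# illustrative, not taken from an actual dialogue system.

REQUIRED_SLOTS = {"book_flight": ["origin", "destination", "date"]}

def missing_slots(intent, slots):
    """Return the required slots absent from the partially captured request."""
    return [s for s in REQUIRED_SLOTS[intent] if s not in slots]

def follow_up(intent, slots):
    """None if the request is complete; otherwise a clarifying question."""
    missing = missing_slots(intent, slots)
    if not missing:
        return None
    return "Could you give me the " + " and the ".join(missing) + "?"
```

In a real system, the partial frame would come from the noisy ASR and understanding modules, and the clarifying question would be delegated to the Dialogue Manager rather than generated from a fixed string.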

In addition, researchers in the field of SDSs are more and more interested in improving systems' floor-management capacities. Instead of adopting a walkie-talkie approach, where each dialogue participant has to wait for the other to release the floor before processing their utterance and coming up with a response, incremental dialogue suggests that the listener processes the speaker's utterance on the fly, and is hence able to interrupt them. In this framework, the system processes growing partial requests, which is another application of the solutions that will be studied. Incremental dialogue capacities are crucial to the development of a new generation of dialogue systems that are more human-like, more reactive and less error-prone.

Essential functions:

You will improve the current dialogue solution developed and maintained by our team. To that end, you will interact with researchers in the field as well as with developers. Depending on the quality of the proposed solutions, your results may be published at scientific conferences or lead to a patent.

Qualifications and skills:

- MSc in Computer Science or a related field.
- A specialisation in Natural Language Processing is very welcome.
- Object-oriented programming.
- Good background in applied mathematics: probability and statistics.
- Good level of English.
- Interest in Human-Machine Interaction and Artificial Intelligence.
- Team work.

If you want to be part of an innovative experience in a team of talented people with state-of-the-art skills in the field, please submit your resume by email to hatim.khouzaimi@orange.com.


6-32(2015-01-05) Internship at LORIA, Nancy, France
Speech intelligibility: which measure determines the degree of nuisance?
------------------------------------------------------------------------

 

Supervisors: Irina Illina (LORIA)
             Patrick Chevret (INRS)

Address: LORIA, Campus Scientifique - BP 239, 54506 Vandoeuvre-lès-Nancy
Tel: 03 54 95 84 90
Office: C147 at LORIA
Email: illina@loria.fr

INRS, Avenue de Bourgogne, 54500 Vandoeuvre-lès-Nancy
Tel: 03 83 50 20 00
Email: patrick.chevret@inrs.fr

 
Motivations
-----------------
Speech intelligibility refers to the ability of a conversation to be understood by a listener located nearby. The level of speech intelligibility depends on several factors: the ambient noise level, the possible absorption of part of the sound spectrum, acoustic distortions, echoes, etc. Speech intelligibility is used to evaluate the performance of telecommunication systems, room sound reinforcement, and of speakers themselves.

Speech intelligibility can be evaluated:
- subjectively: listeners hear several words or sentences and answer various questions (transcribing the sounds, the percentage of consonants perceived, etc.). The resulting scores constitute the intelligibility value;
- objectively, without involving listeners, using acoustic measures: the Speech Transmission Index (STI) and the speech interference level.

Subjective measures depend on the listeners and require a large number of them, which is difficult to achieve, especially when there are several types of environment and the measure must be evaluated for each one. Objective measures have the advantage of being quantifiable automatically and precisely. However, the extent to which objective measures can assess the nuisance of the environment on speech intelligibility, and on people's health, remains an open problem. For example, the STI measures the modulation of energy; but energy modulation can also be produced by machines, even though it does not correspond to speech.
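
To make the energy-modulation idea concrete, here is a minimal, self-contained sketch (our own simplification, not the full STI procedure; function names are ours) of the modulation index of an intensity envelope and its mapping to an apparent signal-to-noise ratio:

```python
import math

def modulation_index(envelope, fs, fm):
    """Modulation depth m(fm) of an intensity envelope sampled at fs Hz,
    estimated from the Fourier component at modulation frequency fm."""
    re = sum(v * math.cos(2 * math.pi * fm * k / fs) for k, v in enumerate(envelope))
    im = sum(v * math.sin(2 * math.pi * fm * k / fs) for k, v in enumerate(envelope))
    return 2 * math.hypot(re, im) / sum(envelope)

def apparent_snr_db(m):
    """Map a modulation index to an apparent SNR, clipped to +/-15 dB
    as in the classical STI formulation."""
    m = min(max(m, 1e-9), 1 - 1e-9)
    return max(-15.0, min(15.0, 10 * math.log10(m / (1 - m))))

# A 4 Hz modulated envelope (depth 0.5) sampled at 100 Hz for 1 second:
env = [1 + 0.5 * math.cos(2 * math.pi * 4 * k / 100) for k in range(100)]
```

Note that a machine emitting periodically modulated noise would score high on such an index too, which is precisely the limitation mentioned above.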

 

Subject
---------
This internship studies different objective measures of speech intelligibility. The goal is to find reliable measures for evaluating the level of nuisance of the ambient environment on speech comprehension, on people's long-term mental health, and on their productivity. One possible direction is to correlate word confidence measures and noise confidence measures with subjective measures of speech intelligibility. An automatic speech recognition system will be used to compute these measures.
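
Concretely, correlating an objective measure with the subjective scores means computing, for example, a Pearson correlation across listening conditions. A minimal sketch (the data here are made up for illustration):

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-condition values: mean ASR word confidence vs. subjective score
asr_confidence = [0.91, 0.82, 0.67, 0.55, 0.40]
subjective_score = [0.95, 0.88, 0.70, 0.52, 0.45]
r = pearson(asr_confidence, subjective_score)
```

A high correlation over the INRS corpus would support using the ASR-based measure as a proxy for subjective intelligibility.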

This internship is part of a collaboration between our Multispeech team and INRS (Institut National de Recherche et de Sécurité). INRS works on the identification of occupational risks, the analysis of their consequences on health, and their prevention. INRS has a rich corpus of recordings and of subjective speech intelligibility measures, which will be used during this internship. The Multispeech team has extensive expertise in signal processing, has developed several methodologies for estimating noise and sound levels, and has built a complete automatic speech recognition system.
                  
Required skills
---------------------
A good background in statistics and signal processing; proficiency in the C programming language, object-oriented programming, and Perl.


6-33(2015-01-05) Post-doc position at LORIA (Nancy, France)

 
Post-doc position at LORIA (Nancy, France)
 
Automatic speech recognition: contextualisation of the language model by dynamic adjustment
 
Framework of ANR project ContNomina
 
The technologies involved in information retrieval in large audio/video databases are often based on the analysis of large, but closed, corpora, and on machine learning techniques and statistical modeling of the written and spoken language. The effectiveness of these approaches is now widely acknowledged, but they nevertheless have major flaws, particularly concerning proper names, which are crucial for the interpretation of the content.
 
In the context of diachronic data (data which change over time), new proper names appear constantly, requiring dynamic updates of the lexicons and language models used by the speech recognition system.
 
As a result, the ANR project ContNomina (2013-2017) focuses on the problem of proper names in automatic audio processing systems by exploiting in the most efficient way the context of the processed documents. To do this, the postdoc student will address the contextualization of the recognition module through the dynamic adjustment of the language model in order to make it more accurate.
 
Post-doc subject
 
The language model of the recognition system (an n-gram model learned from a large corpus of text) is available. The problem is to estimate the probability of a new proper name depending on its context. Several directions may be explored: adapting the language model, using a class model, or studying the notion of analogy.
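
One of these directions, the class model, can be sketched as follows: an unseen name inherits its n-gram statistics from a class tag, i.e. P(w | h) = P(CLASS | h) · P(w | CLASS). The class names and probabilities below are hypothetical, and the uniform within-class distribution is the simplest possible assumption:

```python
class ClassBasedLM:
    """Toy class-based wrapper around an existing n-gram model."""

    def __init__(self, ngram_prob):
        self.ngram_prob = ngram_prob   # P(token | history) for known tokens and class tags
        self.class_members = {}        # class tag -> set of member words

    def add_name(self, name, cls):
        """Dynamically register a new proper name under a class tag."""
        self.class_members.setdefault(cls, set()).add(name)

    def prob(self, word, history):
        for cls, members in self.class_members.items():
            if word in members:
                # Uniform within-class distribution; in practice P(w | CLASS)
                # could instead be estimated from the document context.
                return self.ngram_prob(cls, history) / len(members)
        return self.ngram_prob(word, history)
```

With P(&lt;PERSON&gt; | h) = 0.1 and two registered names, each name receives probability 0.05 in that context, without retraining the underlying n-gram model.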
 
Our team has developed a fully automatic system for speech recognition to transcribe a radio broadcast from the corresponding audio file. The postdoc will develop a new module whose function is to integrate new proper names in the language model.
 
Required skills
 
A PhD in NLP (Natural Language Processing); familiarity with automatic speech recognition tools; a background in statistics; and programming skills in C and Perl.
 
Post-doc duration
 
About 12 months, starting in 2014 (the start date is flexible).
 
Localization and contacts
 
Loria laboratory, Speech team, Nancy, France
 
 
Candidates should email a letter of application, a detailed CV with a list of publications, and copies of diplomas.


 

6-34(2015-01-05) Internship at GIPSA, Grenoble, France

GIPSA-lab in Grenoble offers an internship entitled:
'Acquisition et modélisation des mouvements orofaciaux en parole et en déglutition basé sur l'articulographie électromagnétique 3D'
('Acquisition and modeling of orofacial movements in speech and swallowing based on 3D ElectroMagnetic Articulography')
http://www.gipsa-lab.grenoble-inp.fr/transfert/propositions/3_2014-10-20_SujetM2_EMA-Deglutition-2014-20151.3.pdf
Contact: Pierre Badin (Pierre.Badin@gipsa-lab.grenoble-inp.fr)


6-35(2015-01-08) Senior Research Engineer at Dolby, Beijing, China

Senior Research Engineer

(Multimedia System and Algorithm Research)

 

Overview

 

Join the leader in entertainment innovation and help us design the future. At Dolby, science meets art, and high tech means more than computer code. As a member of the Dolby team, you’ll see and hear the results of your work everywhere, from movie theaters to smartphones. We continue to revolutionize how people create, deliver, and enjoy entertainment worldwide. To do that, we need the absolute best talent, including insatiably curious engineers and scientists for our advanced technology group. We’re big enough to give you all the resources you need, and small enough so you can make a real difference and earn recognition for your work. We offer a collegial culture, challenging projects, and excellent compensation and benefits.

 

This position is in the Research Organization of Dolby Laboratories (www.dolby.com) and is located in Beijing, China. The position focuses on the research and development of cutting edge multimedia systems and algorithms with a focus on audio. These systems are researched and designed to enable the next generation of media playback on smart phones, tablets, PCs, televisions; they will provide the technologies powering both next generation streaming technologies and cinema systems.

The Beijing Research team draws on a broad range of disciplines, including audio and speech technologies, machine learning, information theory, computer science, and applied mathematics. The research work is performed in small teams of researchers with different backgrounds. Most projects involve global cooperation with Dolby’s other research teams in North America, Europe, and Australia.

 

The research engineer position focuses on the research and development of core technologies laying the foundation for Dolby’s future commercial success. It requires a strong analytic mindset, the ability to quickly learn new, highly innovative multimedia technologies, and outstanding programming skills for rapidly turning theoretical concepts into research prototypes. As part of an international team, the research engineer will work on ideas exploring new horizons in multimedia technologies and delivery systems.

 

Your possible background

 

We are looking for a self-motivated, highly talented and accomplished individual who has demonstrated the ability to perform innovative research and is interested in applying his/her computer science skills to novel, cutting-edge multimedia technologies. This position does not require an in-depth understanding of audio technologies and signal processing. The ideal candidate has done some multimedia-related work in an academic or industrial research environment and has a strong interest in learning more about multimedia. The position requires advanced knowledge of the theory and applications of techniques from theoretical computer science and search algorithms, as well as world-class programming skills. Very solid mathematical skills are a must.

Education, Skills, Abilities, and Experience:

  • MSc/PhD in Computer Science; industrial experience desirable
  • Outstanding programming skills: C/C++, Python, Matlab, Perl
  • Good understanding of theoretical computer science
  • Solid background in applied mathematics and stochastics; experience in numerical modeling of complex systems
  • Internet and streaming applications expertise desirable
  • System design experience
  • Understanding of signal processing
  • Strong team-oriented work ethic
  • Strong personal interest in learning, researching, and creating new multimedia technologies with high commercial impact
  • Independent, self-motivated worker requiring minimal supervision
  • Fluent in Chinese and English; excellent communication skills

 

 

Strongly Desired

 

  • Experience working in a software development team, including software version control
  • Good background in network programming
  • Proficient in Web programming and GUI design
  • Real-time Windows programming
  • Personal interest in audio and video

 


6-36(2015-01-12) Cambridge University (UK) : Research Associate in Open Domain Statistical Spoken Dialogue Systems
Cambridge University : Research Associate in Open Domain Statistical Spoken Dialogue Systems
 

Applications are invited for a Research Assistant/Associate position in statistical spoken dialogue systems in the Dialogue Systems Group at the Cambridge University Engineering Department. The position is funded by a three-year EPSRC research grant.

The main focus of the work will be on the development of techniques and algorithms for implementing robust statistical dialogue systems which can support conversations ranging over very wide, potentially open, domains. The work will extend existing techniques for belief tracking and decision making by distributing classifiers and policies over the corresponding ontology.

The successful candidate will have good mathematical skills and be familiar with modern machine learning techniques. Preference will be given to candidates with specific understanding of Bayesian methods, deep learning and reinforcement learning and experience in spoken dialogue systems or a related technology. Good communication and writing skills are essential and knowledge of semantic web technologies would be an advantage. Candidates will either have (or shortly obtain) a PhD in a relevant subject, or will have gained comparable research experience.

This is an exciting opportunity to join one of the leading groups in statistical speech and language processing. Cambridge provides excellent research facilities and there are extensive opportunities for collaboration, visits and attending conferences.

Salary Ranges: Research Assistant: £24,775 - £27,864   Research Associate: £28,695 - £37,394

 

For details and application information see: http://www.jobs.cam.ac.uk/job/5925/   or contact Steve Young at sjy@eng.cam.ac.uk



