ISCA - International Speech
Communication Association



ISCApad #181

Wednesday, July 10, 2013 by Chris Wellekens

6 Jobs
6-1(2013-01-12) Master's internship at IRIT, Toulouse, France

We are looking for a candidate for a Master 2 research internship, to be followed by 3 years of CIFRE funding with AIRBUS. Please contact me with a CV and a cover letter as soon as possible.

Contact
~~~~~~
Jérôme Farinas
SAMOVA team
Institut de Recherche en Informatique de Toulouse
Tel.: 05 61 55 74 34
Email: jfarinas@irit.fr

Keywords
~~~~~~~
Spontaneous speech, sound, noise, audio, recognition, transcription, machine learning.

Context of the study
~~~~~~~~~~~~~~~
During a flight, all parameters are recorded in two separate recorders: the DFDR (Digital Flight Data Recorder) and the CVR (Cockpit Voice Recorder). The DFDR records the technical flight parameters. The CVR records all conversations between the crew, the cabin staff and ground centres (air traffic control, airline, etc.). Also recorded are all alarms that may occur on board, as well as all noises that can be heard in the cockpit.

Interest of the study
~~~~~~~~~~~~~
Within the avionics department, the content of the CVR from a test flight, and in particular from flights linked to certification, is analysed in order to correlate all the events logged by the pilots and flight engineers with the content of the recording. This analysis also aims to identify and characterise any unexpected sound events. Analysis and transcription are currently done simply by listening.
To improve the reliability, relevance, completeness and repeatability of these analyses, the purpose of the study is to propose algorithms capable of extracting, from the ambient noise of the cockpit, speech, synthetic sounds and characteristic noises, in order to transcribe them automatically.

Main objectives of the study
~~~~~~~~~~~~~~~~~~~~~~~~
This internship addresses problems related to the recognition of predefined sounds (for which an audio reference exists), the detection of specific noises, and the transcription of spontaneous speech.
The internship topic is linked to a thesis that will be funded through a CIFRE agreement with AIRBUS. The thesis is divided into three parts, corresponding to the study of each of the three categories of sounds; the approaches will accordingly differ:
1. For the recognition of sounds (alarms, Morse code, etc.), prototypes or references can be defined. The study will therefore focus, on the one hand, on algorithms for detecting characteristic frequencies while taking into account the constraints of a heterogeneous environment (noise, overlap, etc.), and on the other hand, on classical pattern recognition methods applied to audio, such as those used in acoustic pattern recognition [13].
2. The detection of representative noises (engine speed, landing gear, etc.) or unexpected noises (abnormal wear, interference, etc.) will rely on an analysis of their characteristic acoustic signatures in order to derive a model. This detection will require a training phase.
For these first two parts, the candidate will build on the results of a final-year engineering internship for which a prototype has already been developed. The task will be to strengthen the scientific approach, then to complete and validate the technical choices that have been proposed.
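As a flavour of the frequency-detection building block mentioned in part 1, the energy of a signal at a single known frequency (an alarm tone, a Morse pitch) is classically measured with the Goertzel algorithm. A minimal sketch, where the 1 kHz tone and 8 kHz sampling rate are illustrative assumptions, not values from the posting:

```python
import math

def goertzel_power(samples, sample_rate, target_freq):
    """Power of `samples` at `target_freq` via the Goertzel recurrence."""
    n = len(samples)
    k = round(n * target_freq / sample_rate)   # nearest DFT bin
    w = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(w)
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    # Squared magnitude of the DFT bin, without computing a full FFT.
    return s_prev2 ** 2 + s_prev ** 2 - coeff * s_prev * s_prev2

# Toy signal: a 1 kHz 'alarm' tone sampled at 8 kHz, vs. silence.
sr, f = 8000, 1000.0
tone = [math.sin(2 * math.pi * f * t / sr) for t in range(sr // 10)]
silence = [0.0] * (sr // 10)
assert goertzel_power(tone, sr, f) > goertzel_power(silence, sr, f)
```

In a real detector this per-frequency power would be compared against an adaptive threshold to cope with the noisy, overlapping cockpit environment described above.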
3. Speech recognition, by far the largest part, will first focus on a feasibility study. The two main difficulties stem from the production of the speech itself, which is quasi-'spontaneous', and from the environment in which the recording is made.
As a first step, the candidate will have to carry out a very precise analysis of the environment, which is far more challenging than the 'conventional' environments in which most speech recognition systems are developed (telephone speech, broadcast news in English [1] and French [2], European Parliament sessions [3]). Studies exist on the influence of degradation due to noisy environments, in the construction sector [4] and in aeronautics [5]. Since the 1980s, work in the latter sector has mainly aimed at simple voice command in cockpits. Speech analysis from CVR recordings has never been studied; only research on alert sounds has been carried out [6,7].
As a second step, once this analysis is complete, the candidate will define the specifications of the automatic recognition system, subject to the following constraints:
- Constraints related to a highly noisy environment, degraded by sound events linked to the operation of the aircraft (overlap).
- Constraints related to a multicultural population: language (the ability of a single speaker to switch languages), accent, vocabulary.
- Constraints related to the conditions under which the speaker operates: increased speaking rate, stress, fatigue.
The main research directions envisaged fall along two axes:
- Compensation at the parameterisation level: a first analysis of the different noises in CVR recordings will make it possible to target treatments against the existing noise (cepstral subtraction, variance normalisation, ARMA filtering, RASTA filters, etc.). A second direction builds on recent results in noise compensation in the cepstral domain, the best-performing parameterisation domain for speech recognition. The idea is to decompose the representation space into a component useful for recognition and a so-called nuisance component that accounts for the variability of the recording session [14]. In the present case, in the particular setting of cockpits, this variability would represent the noisy environment, but it could also capture the speaker's stress conditions.
- Adaptation of the acoustic and language models: speech models will have to be adapted to the different forms of speech present in the recordings. This will involve looking more closely at the vocabularies used, enriching them if necessary, and accounting for spontaneity through the language models (handling spontaneity remains a major open challenge). Techniques for adapting the distributions of acoustic models by Maximum Likelihood Linear Regression (MLLR) [8] and Maximum A Posteriori (MAP) estimation [9], as well as adaptation at the level of the modelling itself (factor analysis applied to hidden Markov models [10,11]), will be the starting points. Lexicon and language model adaptation will have to be extended to this type of dialogue [12].
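In its simplest form, the cepstral-domain compensation mentioned in the first axis (cepstral subtraction, variance normalisation) amounts to cepstral mean and variance normalisation (CMVN) over each recording. A minimal sketch on toy feature vectors (the feature values are illustrative):

```python
def cmvn(frames):
    """Cepstral mean and variance normalisation over a sequence of
    feature vectors: subtract the per-dimension mean and divide by the
    per-dimension standard deviation, removing constant channel effects."""
    n, dim = len(frames), len(frames[0])
    mean = [sum(f[d] for f in frames) / n for d in range(dim)]
    var = [sum((f[d] - mean[d]) ** 2 for f in frames) / n for d in range(dim)]
    std = [v ** 0.5 or 1.0 for v in var]  # guard against zero variance
    return [[(f[d] - mean[d]) / std[d] for d in range(dim)] for f in frames]

# Toy 2-dimensional cepstral frames with a constant offset of +5 on the
# first dimension; after CMVN that channel offset disappears.
frames = [[6.0, -2.0], [8.0, 0.0], [7.0, 2.0]]
norm = cmvn(frames)
assert abs(sum(f[0] for f in norm) / len(norm)) < 1e-9
```

Real systems apply this per utterance or per sliding window; the sketch only illustrates why a fixed convolutive channel (a microphone, a room) vanishes from the cepstral features.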
During the internship, parts 1 and 2 should result in a usable prototype. Part 3 may be realised as a baseline speech recognition system.
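The MAP estimation cited among the model-adaptation techniques [9] updates each Gaussian mean as a count-weighted interpolation between the prior (speaker-independent) mean and the mean of the adaptation data. A minimal one-dimensional sketch, where tau is the usual relevance factor and its value is an illustrative assumption:

```python
def map_adapt_mean(prior_mean, data, tau=10.0):
    """MAP update of a Gaussian mean: with n observations of average
    x_bar, the new mean is (tau * prior + n * x_bar) / (tau + n).
    Little data keeps the mean near the prior; much data follows the data."""
    n = len(data)
    if n == 0:
        return prior_mean
    x_bar = sum(data) / n
    return (tau * prior_mean + n * x_bar) / (tau + n)

prior = 0.0
little = [1.0] * 2       # 2 frames of adaptation data
lots = [1.0] * 1000      # 1000 frames
assert map_adapt_mean(prior, little) < 0.2   # stays near the prior
assert map_adapt_mean(prior, lots) > 0.95    # follows the data
```

This count-dependent behaviour is what makes MAP attractive for CVR-style data, where some speakers and conditions contribute very few frames.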


6-2(2013-01-14) Ph.D. Researcher in Speech Synthesis, Trinity College, Dublin, Ireland

Post Specification

Post Title: Ph.D. Researcher in Speech Synthesis
Post Status: 3 years
Department/Faculty: Centre for Language and Communication Studies (CLCS)
Location: Phonetics and Speech Laboratory
Salary: €16,000 per annum (plus fees paid)
Closing Date: 31st January 2013

Post Summary

A Ph.D. Researcher is required to work in the area of speech synthesis at the Phonetics and Speech Laboratory, School of Linguistic, Speech and Communication Sciences. The position will involve carrying out research on the topic of Hidden Markov Model (HMM)-based speech synthesis. Specifically, we are looking for a researcher to work on developing source-filter based acoustic modelling for HMM-based speech synthesis which is closely related to the human speech production process and which can facilitate modification of voice source and vocal tract filter components at synthesis time.

Background to the Post

Much of the research carried out to date in the Phonetics and Speech Laboratory has been concerned with the role of the voice source in speech. This research involves the development of accurate voice source processing, both as a window on human speech production and for exploitation in voice-sensitive technology, particularly synthesis. The laboratory team is interdisciplinary and includes engineers, linguists, phoneticians and technologists.

This post will in the main be funded by the ongoing Abair project, which has developed the first speech synthesisers for Irish (www.abair.ie), and the researcher will exploit the current Abair synthesis platform. In this project the aim is to deliver multi-dialect synthesis with multiple personages and voices that can be made appropriate to different contexts of use. The post will also be linked to the FastNet project, which aims at voice-sensitive speech technologies.

A specific goal of our laboratory team is to leverage our expertise on the voice by improving the naturalness of parametric speech synthesis, as well as making more flexible synthesis platforms which can allow modifications of voice characteristics (e.g., for creating different personalities/characters, different forms of expression, etc.).

Standard duties of the Post

* Initially the researcher will be required to attend some lectures as part of the Masters programme on Speech and Language Processing. This and a supervised reading programme will provide a background in the area of voice production, analysis and synthesis.

* In the very early stages the researcher will be required to develop synthetic voices, using the Irish corpora, with the standard HMM-based synthesis platform (i.e. HTS). Note that working with the Irish corpora does not require a background in the Irish language, as there will be collaboration with experts in this field.

* The researcher will be required to familiarise themselves with existing speech synthesis platforms which provide explicit modelling of the voice source (e.g., Cabral et al. 2011, Raitio et al. 2011, Anumanchipalli et al. 2010).

* The researcher will then need to first implement similar versions of these systems and then work towards developing novel vocoding methods which would allow full parametric flexibility of both voice source and vocal tract filter components at synthesis time.
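The source-filter separation this post targets can be illustrated in miniature: an impulse-train voice source passed through an all-pole (vocal-tract) filter, where either component can be changed independently at synthesis time. A toy sketch; the filter coefficient and pitch periods are illustrative, not taken from any HTS voice:

```python
def impulse_train(n_samples, period):
    """Toy glottal source: one impulse every `period` samples."""
    return [1.0 if i % period == 0 else 0.0 for i in range(n_samples)]

def all_pole_filter(source, a):
    """Toy vocal-tract filter: y[n] = x[n] - sum_k a[k] * y[n-k-1]."""
    y = []
    for n, x in enumerate(source):
        acc = x
        for k, ak in enumerate(a):
            if n - k - 1 >= 0:
                acc -= ak * y[n - k - 1]
        y.append(acc)
    return y

# Changing only the source period changes the pitch while the 'vocal
# tract' coefficients stay fixed -- and vice versa.
a = [-0.9]                              # single-pole resonance (illustrative)
low = all_pole_filter(impulse_train(200, 50), a)
high = all_pole_filter(impulse_train(200, 25), a)
assert low != high                      # different source, different waveform
```

Real vocoders such as those cited above use far richer glottal models, but the independent control of source and filter shown here is exactly the flexibility the post aims for.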

Person Specification

Qualifications

* Bachelors degree in Electrical Engineering, Computer Science with specialisation in speech signal processing, or related areas.

Knowledge & Experience (Essential & Desirable)

* Strong digital signal processing skills (Essential)
* Good knowledge of HTS, including previous experience developing synthetic voices (Essential)
* Knowledge of speech production and perception (Desirable)
* Experience in speech recognition (Desirable)

Skills & Competencies

* Good knowledge of written and spoken English.

Benefits

* Opportunity to work with a world-class inter-disciplinary speech research group.

To apply, please email a brief cover letter and CV, including the names and addresses of two academic referees, to: kanejo@tcd.ie and to cegobl@tcd.ie

 


6-3(2013-01-20) Ph.D. Researcher in Speech Synthesis, Trinity College, Dublin, Ireland



6-4(2013-01-23) Tenure-track and Research Assistant Professor positions at the Toyota Technological Institute, Chicago
Tenure-track and Research Assistant Professor positions
Toyota Technological Institute at Chicago (http://www.ttic.edu) is a philanthropically endowed academic computer science institute, dedicated to basic research and graduate education in computer science, located on the University of Chicago campus.
TTIC opened for operation in 2003. It currently has 9 tenure-track/tenured faculty, 10 research faculty, and a number of adjunct/visiting faculty, and is growing. Regular faculty have a teaching load of one course per year and research faculty have no teaching responsibilities. Research faculty positions are endowed positions (not based on grant funding) and are for a term of 3 years.
Applications are welcome in all areas of computer science, including speech and language processing, for both tenure-track and research faculty positions.
Applications can be submitted online at http://www.ttic.edu/faculty-hiring.php . Additional questions can be directed to Karen Livescu at klivescu@ttic.edu.
 

6-5(2013-01-25) Postdoc at Orange Labs Lannion
Presence density of people in television programmes

The topic concerns the automatic detection of the presence of people in television programmes, and introduces a qualitative and quantitative notion of this presence, summarised under the term 'presence density'.
Indeed, the people appearing in a television programme do not all occupy an equivalent place during the programme.
First of all, presence must be distinguished from citation; for example, several levels of presence can be identified:
- physical presence (or presence via live link) in the programme: the person speaks in the programme (interview, etc.)
- presence through excerpts: the programme shows excerpts of audiovisual documents in which the person speaks
- visual citation: the person does not speak but is shown (report footage, excerpts)
- citation: the person is talked about but is not present in the programme
Next, the 'intensity' of this presence must be differentiated according to the role the person plays in the programme: main subject, witness, etc. This notion of intensity is orthogonal to the type of presence: a person may be present through citation only and still be the central subject. It is the combination of presence levels and their intensity (role) that defines what we propose to call presence 'density'.
Finally, these notions are not necessarily constant throughout a programme, and the segments during which a person's presence density is constant must be determined automatically. In practice, this makes it possible, for example, to extract from a programme only the segment in which a given person is the main subject.
 
The work of this post-doc will first consist in refining these notions of presence and intensity in order to formalise the associated automatic classification/segmentation problem. It will then involve annotating the available corpora of television programmes according to the presence classes, and designing, developing and testing algorithms to solve this problem.
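Once per-frame (presence level, role) labels are available, finding the segments of constant presence density amounts to a run-length grouping of those labels. A minimal sketch with hypothetical label names:

```python
def constant_density_segments(labels):
    """Group a sequence of per-frame labels into (start, end, label)
    segments over which the label -- e.g. a (presence, role) pair --
    is constant. `end` is exclusive."""
    segments = []
    start = 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            segments.append((start, i, labels[start]))
            start = i
    return segments

# Hypothetical frame labels for one programme.
frames = ([("physical", "main")] * 3
          + [("citation", "witness")] * 2
          + [("physical", "main")] * 1)
segs = constant_density_segments(frames)
assert segs == [(0, 3, ("physical", "main")),
                (3, 5, ("citation", "witness")),
                (5, 6, ("physical", "main"))]
```

The research problem is of course the classifier producing the labels, not this grouping step; the sketch only makes the target output format concrete.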
 
The postdoc position lasts 12 months, non-renewable, at Orange Labs Lannion, with a gross annual salary of €34k (roughly €2,150 net per month).
It must be the candidate's first employment contract after the PhD defence.

For further information: delphine.charlet@orange.com

6-6(2013-01-30) 15 RESEARCH POSITIONS (PhD, Post-Doc and Research Programmer), Dublin, Ireland

15 RESEARCH POSITIONS (PhD, Post-Doc and Research Programmer) IN
MACHINE TRANSLATION, PARSING, INFORMATION RETRIEVAL, INFORMATION
EXTRACTION AND TEXT ANALYTICS AT CNGL, DCU. CLOSING DATE 15th FEBRUARY
2013
-------------

At the Centre for Next Generation Localisation (CNGL, www.cngl.ie) at
Dublin City University (DCU), Ireland, we are recruiting 11 PhD students, 3
Post-Doctoral Researchers and 1 Research Programmer.

-------------

CNGL is a €50M+ Academia-Industry partnership, funded jointly by
Science Foundation Ireland (SFI) and our industry partners, and is
entering its second cycle of funding. CNGL is looking to fill multiple
posts associated with its second phase which will focus on expansion
of our work into the challenging areas of social text sources and
multimedia content.

CNGL is an active collaboration between researchers at Dublin City
University (DCU), Trinity College Dublin (TCD), University College
Dublin (UCD), University of Limerick (UL), as well as 10 industrial
partners, including SMEs, Microsoft, Symantec, Intel, DNP, and
Welocalize.

CNGL comprises over 100 researchers across the various institutions
developing novel technologies addressing key challenges in the global
digital content and services supply chain. CNGL is involved in a large
number of European FP7 projects, as well as commercial projects in the
areas of language technologies, information retrieval and digital
content management. CNGL provides a world class collaborative research
infrastructure, including excellent computing facilities, and
administrative, management and fully integrated and dedicated on-site
commercialisation support.

The successful candidates will become part of the research team based
at DCU, joining two leading academic MT/NLP/IR and Translation
research groups (www.nclt.dcu.ie/, cttsdcu.wordpress.com/). The team’s
location at DCU, minutes from Dublin city centre, offers a highly
conducive environment for research, collaboration and innovation with
a wealth of amenities on campus.

DCU is ranked in the TOP 50 of young universities worldwide (under 50
years old) (QS Ranking) and in the TOP 100 under the Times Higher
Education (under 50 years) ranking scheme.

The research is supervised by Dr. Jennifer Foster, Dr. Sharon O'Brien,
Dr. Gareth Jones, Prof. Qun Liu and Prof. Josef van Genabith.

-----------------------------------
--* CNGL PHD STUDENTSHIPS AT DCU *--
-----------------------------------

Parsing, Analytics and Information Extraction:

[PhD_CC_1] Tuning Text Analytics to User-Generated Content: Parse
quality estimation and targeted self-training.
[PhD_CC_2] Extracting Events and Opinions from User-Generated Content:
Deep parsing-based methods.

Information Retrieval:

[PhD_SD_1] Self-Managing Information Retrieval Technologies: Query,
search technique and parameter selection in information retrieval
applications
[PhD_SD_2] Indexing and Search for Multimodal (Spoken/Visual) Content:
Locating relevant content in multimodal sources

[PhD_SDCC_1] Application of Text Analytics in Information Retrieval:
Enhancing information retrieval using features from text analysis
[PhD_SDDI_1] Investigating Human-Computer Interaction Issues for
Search and Discovery with Multimodal (spoken/Visual) Content

Machine Translation:

[PhD_TL_1] Syntax- and Semantics-Enhanced Machine Learning Based MT
[PhD_TL_2] Domain Adaptation Based on Multi-Dimensional Quality
Estimation, Similarity Metrics, Clustering and Search
[PhD_TL_3] Human interaction with MT output: Usability, Acceptability,
Post-editing Research

[PhD_TLDI_1] MT and Multimodal Interaction
[PhD_TLSD_1] MT for Multimodal Cross Language Information Retrieval

Ideal candidates for the PhD studentships (except PhD_TL_3 - see
below) should have:

 - excellent computing and mathematical skills
 - strong machine learning and statistical skills
 - strong interest in basic research, applied research and showcasing
research in demonstrator systems
 - willingness to work as part of a team, but also the ability to work on their own initiative
 - strong ability in independent and creative thinking
 - strong problem-solving skills
 - excellent communication abilities (including writing and
presentation skills)
 - proficiency in English
 - a background in NLP, Computational Linguistics, Information
Retrieval, Information Extraction or Machine Translation as
appropriate for the relevant position is an advantage

Ideal candidates for the PhD studentship PhD_TL_3 (human interaction
with MT output: Usability, Acceptability, Post-editing Research)
should have:

 - excellent skills in empirical research in translation, technical
communication and/or HCI
 - strong interest in basic research, applied research and showcasing
research in demonstrator systems
 - willingness to work as part of a team, but also the ability to work on their own initiative
 - strong ability in independent and creative thinking
 - strong problem-solving skills
 - excellent communication abilities (including writing and
presentation skills)
 - high proficiency in English
 - a background in Translation, NLP, Computational Linguistics, or
Machine Translation is an advantage

PhD positions are fully funded for 3 years. Stipend: Fees + €16,000
p.a. living expenses (tax free)

-------------------------------
--* POST-DOCTORAL POSITIONS *--
-------------------------------

Parsing, Analytics and Information Extraction:

[PD_CC_1] Extracting Events and Opinions from User-Generated Content:
Parsing-based deep methods (up to 2 year contract)
[PD_CC_2] Extracting Events and Opinions from UGC: Shallow methods,
including unsupervised methods (up to 2.5 year contract)

Machine Translation:

[PD_TL_1] User/Human Centric MT (up to 2.5 year contract)

Ideal candidates for the post-doctoral positions should have:

 - a strong international publication record
 - a background in NLP, Computational Linguistics, Information
Retrieval, Information Extraction or Machine Translation as
appropriate for the position
 - excellent computing and mathematical skills
 - strong machine learning and statistical skills
 - strong interest in basic research, applied research and showcasing
research in demonstrator systems
 - willingness to work as part of a team, but also the ability to work on their own initiative
 - strong ability for independent and creative thinking
 - strong problem-solving skills
 - excellent communication abilities (including writing and
presentation skills)
 - proficiency in English
 - ability to identify and develop future research and funding initiatives
 - willingness to supervise and assist undergraduate and postgraduate students
 - ability to lead small teams of researchers, in co-operation with
the principal investigator


Indicative starting salary (subject to experience and qualifications):
 €37,750 - €42,394 (taxable)

-------------------------------------
--* RESEARCH PROGRAMMER POSITIONS *--
-------------------------------------
Parsing, Analytics and Information Extraction:

[RProg_CC_1] Research Programmer (up to 2.5 year contract)

Candidates for the research programmer position will have:
 - strong computer engineering and design skills
 - excellent knowledge of one or more of the following: Java, C++, PHP, Python
 - comfort working in both UNIX and Windows environments
 - excellent algorithmic and analytical skills
 - an MSc/PhD in Computer Science/Software Engineering
 - excellent communication abilities
 - willingness to work as part of a team, but also the ability to work on their own initiative
 - experience in Natural Language Processing, Artificial Intelligence, Information Retrieval, Localisation, etc. (highly advantageous)
 - experience with research-based software development and/or cloud-based and grid computing technologies (also highly preferred)


Indicative Salary (subject to experience and qualifications):
€37,750 - €42,394 (taxable)

-------------

**CLOSING DATE FOR APPLICATIONS ALL POSITIONS: 15th FEBRUARY 2013**
For more details please see: http://www.cngl.ie/vancancies.html

Application forms are available from:
http://www.dcu.ie/vacancies/APPLICATION_FORM_8pg.doc. Please also send
a CV with two contact points for references. When completing your
application, please indicate which positions you are applying for (in
order of preference) e.g. [PHD_TL_1], [PHD_TL_2]. Completed
application forms should be sent to Dr. Declan Groves
.


*For informal enquiries please contact the relevant PI below*:

- Dr. Jennifer Foster     [PhD_CC_1]
- Prof. Josef van Genabith
[PhD_CC_2],[PD_CC_1],[PD_CC_2],[RProg_CC_1]
- Dr. Gareth Jones     [PhD_SD_1],
[PhD_SD_2], [PhD_SDCC_1], [PhD_SDDI_1]
- Prof. Qun Liu     [PhD_TL_1], [PhD_TL_2],
[PhD_TLDI_1], [PhD_TLSD_1], [PD_TL_1]
- Dr. Sharon O'Brien    [PhD_TL_3]


6-7(2013-02-01) Ph.D. Research Assistant or Post-Doctoral Researcher, Cooperative State University in Karlsruhe, Germany

This is a pre-announcement for a position in the Computer Science Department at the Cooperative State University in Karlsruhe, Germany, for a

 

Ph.D. Research Assistant or Post-Doctoral Researcher

in the field of Automatic Language Processing for Education

 

to be filled immediately, with a salary according to TV-L E13 at 50% for 18 months. The opening is in Karlsruhe, Germany, as part of a joint research project between Karlsruhe Institute of Technology (KIT), the Cooperative State University (DHBW) and the University of Education (PH), sponsored by DFG and involving speech technology for educational systems. (Working language: English or German)

Project cooperation partners are:

Cooperative State University (Duale Hochschule, Karlsruhe)

University of Education, Karlsruhe (Pädagogische Hochschule, Karlsruhe)

Karlsruhe Institute of Technology (KIT)

Description

Starting as soon as possible, we are seeking an experienced and motivated person to join our team of researchers from the above mentioned institutes. The ideal candidate will have knowledge in computational linguistics and algorithm design. Responsibilities include the use and improvement of research tools to update and optimize algorithms applied to diagnostics in children’s (German) writing using speech recognition and speech synthesis tools. For further details of this work, please refer to publications at SLaTE 2011, Interspeech 2011, and WOCCI 2012 by authors Berkling, Stüker, and Fay. Joint and collaborative research between the partners will be very close, offering exposure to each research lab. 
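As a flavour of the diagnostic side, comparing a child's written form against the target spelling is often started from an edit-distance alignment. A minimal sketch; the German word pair is an illustrative assumption, not an example from the project:

```python
def edit_distance(target, written):
    """Levenshtein distance between the target spelling and what the
    child actually wrote (insertions, deletions, substitutions)."""
    m, n = len(target), len(written)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if target[i - 1] == written[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

# Illustrative example: 'Fahrrad' written as 'Farrad' (one letter missing).
assert edit_distance("Fahrrad", "Farrad") == 1
```

Real spelling diagnostics would align at the grapheme-phoneme level rather than raw characters, which is where the project's speech recognition and synthesis tools come in.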

 

Candidates:

  • Doctoral Research Candidates may apply and are welcome for joint research with their host institution.

  • Experienced (post-doctoral) Research Candidates are already in possession of a doctoral degree or have at least 3 years but less than 5 years of research experience in engineering and/or hearing research.

 

Requirements

  • Higher degree in speech science, linguistics, machine learning, or related field

  • Experience developing ASR applications - training, tuning, and optimization

  • Software development experience (for example: Perl, TCL, Ruby, Java, C)

  • Excellent communication skills in English

  • Willingness and ability to spend 18 months in Germany, working in a team with project partners

  • Knowledge of German linguistics, phonics, graphemes, morphology or willingness to learn

  • Strong interest in computational linguistics, morphology, phonics for German

 

Desirable:

  • Interest in Education and language learning

  • Interest in Human Computer Interaction and game mechanics

  • Ability to create Graphic interfaces in multi-player applications

Application Procedure: Non-EU candidates need to check their ability to reside in Germany. Interested candidates should send their application (CV, certified copies of all relevant diplomas and transcripts, two letters of recommendation, proof of proficiency in English, and a letter of motivation stating research interests and reasons for applying for the position) to: Berkling@dhbw-karlsruhe.de

The Cooperative State University is pursuing a gender equality policy. Women are therefore particularly encouraged to apply. If equally qualified, handicapped applicants will be preferred.

 


6-8(2013-02-20) Research Associate in Robust Speech Recognition at the University of Sheffield, UK

-- Research Associate in Robust Speech Recognition
===============================================

The University of Sheffield, Department of Computer Science invites applications for a position as Research Associate on a project to research and develop robust technology for recognition of speech in meetings. The associated project, DocuMeet, is funded by the European Union and involves collaboration with partners from academia and industry across Europe. The Speech and Hearing Research Group is responsible for speech technology in the project, but also contributes to some natural language understanding tasks.

Speech transcription of meeting data is a well-established task, with international competitions (held by U.S. NIST) and support from several large-scale research projects. While significant progress has been made, the performance of recognition, detection and analysis systems is still very far from usable in many realistic, natural scenarios. Many significant challenges await a solution: the acoustic complexity of meetings goes well beyond standard settings. Noise and reverberation are standard; speech signals show significant amounts of overlap between speakers; varying degrees of emotion are present; and speakers move around. All of these pose significant challenges to speech research and practical applications.

In the DocuMeet project we specifically work on speech recognition robustness to noise and reverberation. We aim to develop new algorithms that factor environment and context in novel ways (e.g. eigen-environments). Recordings from multiple microphones can be used to remove unwanted acoustics, while knowledge about a specific environment type should be used to adjust the acoustic models of the recognition systems. Further, we will investigate how such algorithms can be integrated with personalisation (acoustic/language) and how metadata can be used to inform such processes. Extensive experimentation on existing and new corpora will be required to demonstrate the effectiveness of the new techniques.
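Using multiple microphones to suppress unwanted acoustics, as mentioned above, is classically done by delay-and-sum beamforming: align each channel on the target speaker's direction and average, so coherent speech adds up while channel-specific interference is attenuated. A toy sketch with synthetic channels (the delays and signal are illustrative):

```python
import math

def delay_and_sum(channels, delays):
    """Delay-and-sum beamformer: shift each microphone channel by its
    (integer-sample) steering delay, then average across channels."""
    n = len(channels[0])
    out = []
    for t in range(n):
        acc, cnt = 0.0, 0
        for ch, d in zip(channels, delays):
            if 0 <= t + d < n:
                acc += ch[t + d]
                cnt += 1
        out.append(acc / cnt if cnt else 0.0)
    return out

# Two mics hear the same tone, the second one 3 samples later; mic1 also
# picks up an interfering impulse that the averaging halves.
sig = [math.sin(0.2 * t) for t in range(100)]
mic1 = [s + (1.0 if t == 10 else 0.0) for t, s in enumerate(sig)]
mic2 = [0.0] * 3 + sig[:-3]
out = delay_and_sum([mic1, mic2], [0, 3])
assert abs(out[10] - (sig[10] + 0.5)) < 1e-9  # impulse attenuated by half
```

Project-grade beamformers estimate fractional delays and adapt weights per frequency, but the principle of coherent summation is the same.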

Applicants are required to have a track record of work on speech technologies, including speech recognition, and to have had exposure to modern machine learning techniques. Ideally, this track record is demonstrated by publications in international journals and conferences. The successful candidate will be required to hold a PhD in the field; work on the project will involve publication of results, travel to conferences and extensive visits to itslanguage offices. At this point the project duration is one year, but extensions are likely.
The project will be embedded in the Speech and Hearing (SpandH) research group (http://www.dcs.shef.ac.uk/spandh) in the Department of Computer Science, and in particular in its subgroup on machine intelligence for natural interfaces (MINI). SpandH is amongst the largest speech research groups in the UK, with extensive infrastructure and a vibrant research environment. The group is well known internationally for its research, which reaches across traditional divides to encompass and link computational hearing, speech perception and speech technology. The MINI subgroup is led by Prof. Hain, currently has 13 members, and is, amongst other things, well known for speech recognition and classification. It has built systems with best performance in international competitions, which are available to the public at www.webasr.org. The subgroup is currently involved in many projects, including an EPSRC programme grant (with the Univ. of Cambridge and Univ. of Edinburgh) and collaborations with research organisations (e.g. Idiap, NICT) and industry (e.g. Cisco, Google). It has its own extensive computing infrastructure, access to large quantities of data, as well as dedicated recording facilities.

The Department of Computer Science, a member of the Faculty of Engineering, was established in 1982 and has since attained an international reputation for its research and teaching. There are currently over 100 members of staff in Computer Science, including 35 academics. The Department was awarded grade 5 in the 2001 Research Assessment Exercise, and in the 2008 exercise 65% of our research was rated world-leading or internationally excellent in terms of its originality, significance and rigour.

If you would like to know more about this position, please contact Prof. Thomas Hain - t.hain@dcs.shef.ac.uk.

In order to apply, the best option is to visit jobs.ac.uk and then press the 'Apply' button on the page:

  http://www.jobs.ac.uk/job/AFU678/research-associate/

The University of Sheffield JOB ID is UOS005891.


6-9(2013-02-10) Developer for Large-Scale Audio Indexing Technologies at IRCAM, Paris

 

Position: Developer for Large-Scale Audio Indexing Technologies: 1 W/M position at IRCAM

Starting:  March 2013

Duration: 18 months

Position description

He/she will be in charge of developing a framework for scalable storage, management and access of distributed data (audio and metadata). He/she will also be in charge of developing scalable search algorithms.

Required profile:

·       High skill in database management systems

·       High skill in scalable indexing technologies (hash-table, m-trees …)

·       High skill in C++ development (including template-based meta-programming)

·       Good knowledge of Linux, Mac OSX and Windows development environment (gcc, Intel and MSVC, svn)

·       High productivity, methodical work, excellent programming style.
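As a small illustration of the hash-table-style indexing named above, here is a minimal random-hyperplane LSH index; the vectors stand in for audio descriptors, all dimensions and parameters are invented, and a production system at this scale would use C++ and disk-backed storage as the ad specifies:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random-hyperplane LSH: items whose vectors point in similar directions
# tend to fall into the same bucket, so a query only scans one bucket
# instead of the whole collection.
DIM, N_PLANES = 64, 12
planes = rng.standard_normal((N_PLANES, DIM))

def lsh_key(vec):
    # One bit per hyperplane: which side of the plane the vector lies on.
    return tuple((planes @ vec > 0).astype(int))

# Index a toy collection of 1000 descriptors into hash buckets.
vectors = rng.standard_normal((1000, DIM))
buckets = {}
for i, v in enumerate(vectors):
    buckets.setdefault(lsh_key(v), []).append(i)

# An exact query hashes to its own bucket; a near-duplicate usually does
# too, and the candidate list is a small fraction of the collection.
query = vectors[42]
candidates = buckets.get(lsh_key(query), [])
assert 42 in candidates
```

Multi-probe variants and multiple hash tables trade recall against bucket size; tree structures such as M-trees serve the same coarse-filtering role for metric distances.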

 

The developer will collaborate with the project team and participate in the project activities (evaluation of technologies, meetings, specifications, reports).  

Introduction to IRCAM

IRCAM is a leading non-profit organization associated with the Centre Pompidou, dedicated to music production, R&D and education in sound and music technologies. It hosts composers, researchers and students from many countries cooperating in contemporary music production and in scientific and applied research. The main topics addressed in its R&D department include acoustics, audio signal processing, computer music, interaction technologies and musicology. IRCAM is located in the center of Paris near the Centre Pompidou, at 1, Place Igor Stravinsky, 75004 Paris.

Salary

According to background and experience

Applications

Please send an application letter together with your resume and any suitable information addressing the above issues preferably by email to: peeters_at_ircam_dot_fr with cc to vinet_a_t_ircam_dot_fr, roebel_at_ircam_dot_fr.

 

 

 


6-10(2013-02-14) INRIA PhD fellowship, Bordeaux, F

Proposal for an INRIA PhD fellowship (Cordi-S)

Title of the proposal: Nonlinear speech analysis for differential diagnosis between Parkinson's disease and Multiple-System Atrophy

 

Project Team INRIA: GeoStat (http://geostat.bordeaux.inria.fr/)

Author of the proposal research subject: Khalid Daoudi (khalid.daoudi@inria.fr)

Keywords: speech processing, nonlinear speech analysis, machine learning, voice pathology,

dysphonia, dysarthria, Multiple-System Atrophy, Parkinson's disease.

Scientific context:

Parkinson's disease (PD) is the most common neurodegenerative disorder after Alzheimer's disease. Its prevalence is 1.5% of the population over age 65, affecting about 143,000 people in France. Given the aging of the population, the prevalence is likely to increase over the next decade.

Multiple-System Atrophy (MSA) is a rare, sporadic neurodegenerative disorder of adults, of progressive course and unknown etiology. MSA has a prevalence of 2 to 5 per 100,000 and has no effective treatment. It usually starts in the sixth decade of life, with a slight male predominance. On average, it takes 3 years from the first signs of the disease for a patient to require a walking aid, 4-6 years to be confined to a wheelchair, and about 8 years to be bedridden.

PD and MSA require different treatment and support. However, the differential diagnosis between PD and MSA is very difficult because, at an early stage of the diseases, patients present similarly until signs such as dysautonomia become clearly established in MSA patients. There is currently no valid clinical or biological marker for a clear distinction between the two diseases at an early stage.

Goal:

Voice and speech disorders in Parkinson's disease are a clinical marker that coincides with motor disability and the onset of cognitive impairment. The term commonly used to describe these disorders is dysarthria [1].

Like PD patients, and depending on which areas of the brain are damaged, people with MSA may also have speech disorders: articulation difficulties, staccato rhythm, squeaky or muted voice. Dysarthria in MSA is more severe and appears earlier, in the sense that it requires earlier rehabilitation compared to PD.

Since dysarthria is an early symptom of both diseases, the purpose of this thesis is to use dysarthria, through digital processing of patients' voice recordings, as a means for objective discrimination between PD and MSA. The ultimate goal is to develop a numerical dysarthria measure, based on the analysis of the patients' speech signal, which allows objective discrimination between PD and MSA and would thus complement the tools currently available to neurologists for the differential diagnosis of the two diseases.

Project:

Pathological voices, such as those in PD and MSA, generally present high non-linearity and turbulence. Nonlinear/turbulent phenomena are not naturally suited to linear signal processing, which nevertheless dominates current speech technology. Thus, from the methodological point of view, the goal of this thesis is to investigate the framework of nonlinear and turbulent systems, which is better suited to analyzing the range of nonlinear and turbulent phenomena observed in pathological voices in general [2], and in PD and MSA voices in particular. We will adopt an approach based on novel nonlinear speech analysis algorithms recently developed in the GeoStat team [3], with the goal of extracting relevant speech features to design new dysarthria measures that enable accurate discrimination between PD and MSA voices. This will also require investigation of machine learning theory in order to develop robust classifiers (to discriminate between PD and MSA voices) and to relate (by regression) the speech measures to standard clinical ratings.
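As a rough sketch of the discrimination step, the following trains a nearest-centroid classifier on synthetic feature vectors. The feature values and class separations are invented here; the real work would use dysarthria measures extracted from patients' speech and a properly validated classifier:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical dysarthria features (e.g. jitter, shimmer, a nonlinearity
# index); the feature distributions below are synthetic, for illustration.
n = 100
pd_feats  = rng.normal(loc=[0.8, 0.5, 0.3], scale=0.2, size=(n, 3))
msa_feats = rng.normal(loc=[1.2, 0.9, 0.7], scale=0.2, size=(n, 3))

X = np.vstack([pd_feats, msa_feats])
y = np.array([0] * n + [1] * n)  # 0 = PD, 1 = MSA

# Nearest-centroid classifier: assign each sample to the closer class mean.
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(x):
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

acc = np.mean([predict(x) == label for x, label in zip(X, y)])
assert acc > 0.9  # well-separated synthetic classes are easy to split
```

On real clinical data one would of course use held-out evaluation (cross-validation) and stronger models; the sketch only shows the shape of the feature-then-classify pipeline.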

The PhD candidate will actively participate, in coordination with neurologists from the Parkinson's Center of Haut-Lévêque Hospital, in setting up the experimental protocol and data collection. The latter will consist in recording patients' voices using a DIANA or EVA2 workstation (http://www.sqlab.fr/).

 

References:

[1] Auzou, P.; Rolland, V.; Pinto, S., Ozsancak C. (eds.). Les dysarthries. Editions Solal. 2007.

[2] Baghai-Ravary L. ; Beet S.W. Automatic Speech Signal Analysis for Clinical Diagnosis and Assessment of Speech Disorders. Springer 2013.

[3] PhD thesis of Vahid Khanagha. GeoStat team, INRIA Bordeaux-Sud Ouest. January 2013.

http://geostat.bordeaux.inria.fr/images/vahid%20khanagha%204737.pdf

Advisor: K. Daoudi

Duration: 3 years (starting fall 2013)

Prerequisites: A good level in signal/speech processing is necessary, as well as Matlab and C/C++ programming. Knowledge of machine learning would be a strong advantage.

 


6-11(2013-02-19) Ph.D Student in Speech and Music Communication , KTH Stockholm Sweden

Ph.D Student in Speech and Music Communication  

KTH School of Computer Science and Communication (CSC) announces a PhD position in Speech and Music Communication.

The Workplace

KTH in Stockholm is the largest and oldest technical university in Sweden. No less than one-third of Sweden’s technical research and engineering education capacity at university level is provided by KTH. Education and research spans from natural sciences to all branches of engineering and includes architecture, industrial management and urban planning. There are a total of just over 15,000 first and second level students and more than 1,600 doctoral students. KTH has almost 4,300 employees.

KTH Computer Science and Communication is one of the most outstanding research and teaching environments in information technology in Sweden, with activities at KTH and partly at Stockholm University. We conduct education and research in theoretical computer science, from theory building and the analysis of mathematical models to algorithm construction, implementation and simulation. Our applied computer science research and education deal with computer vision, robotics, machine learning, computational biology, neuroinformatics and neural networks, including high-performance computing, visualization, and speech and music communication. The school also conducts applied research and teaching in media technology, human-computer interaction, interaction design and sustainable development.

For more information about CSC, go to www.kth.se/csc.

Assignment

KTH School of Computer Science and Communication (CSC) announces a PhD position in Speech and Music Communication at the Department of Speech, Music and Hearing. The thesis work will be directed towards basic research on simulating the human voice through advanced computer models. It comprises theoretical as well as experimental studies of speech production.

For more information about the research project: http://www.speech.kth.se/eunison

This is a four-year, time-limited position that can be extended by up to one year with the inclusion of a maximum of 20% departmental duties, usually teaching.  Doctoral students must be registered at KTH. Expected starting date: 2013-09-02.

Employment

Form of employment: Time-limited
Work time: Full time
Salary: Follows the directions provided by KTH
Start date: According to agreement, preferably 2013-09-02
Number of positions: 1

Qualifications

The applicant should, at the time of application or no later than the expected starting date, possess a Master of Science degree in computer science, electrical engineering or engineering physics, or the equivalent.

In addition, thorough knowledge of several of the following areas is required: programming, phonetics or speech technology, statistical methods, computer simulations and multi-physics. The applicant should demonstrate high proficiency in both written and spoken English.

Applicants must be strongly motivated for doctoral studies, possess the ability to work independently and perform critical analysis as well as possess good levels of cooperative and communicative abilities.

Application

Application deadline: March 22, 2013
Employer's reference: D-2013-0107

Applications via email are to be sent to: Camilla Johansson, e-mail jobs@csc.kth.se.

Write reference number in the email subject. (CV, etc. should be sent as an attachment, as pdf-files.)

We also accept hard copy applications sent to:

KTH, CSC Att. Camilla Johansson, Lindstedtsvägen 3, 4th floor, SE-100 44 Stockholm, Sweden

Application

The application must be written in English and contain:

  1. Cover letter: A summary of the application that describes the particular merits that make the applicant suitable for the open position. Maximum one (1) page.
  2. Curriculum Vitae
  3. Official record of transcripts and copy of degree certificate. The documents should be in English or be accompanied by an authorized translation into English.
  4. References: Please provide detailed contact information for at least two (2) references.
  5. Statement of purpose: Please discuss your research interests and motivation for carrying out PhD studies at KTH, and how this can be demonstrated by your earlier experiences (studies, technical knowledge, other assignments etc.). Maximum two (2) pages.
  6. Summary of relevant publications: If applicable, provide a list of publications, sorted according to relevance for this position. For each publication, the list should give a short summary and a web link to the full text.

Applicants are kindly asked to also fill in the form at http://www.speech.kth.se/eunison/phd-applicants.html

We are currently gathering information to help improve our recruitment process. We would, therefore, be very grateful if you could include an answer to the following question within your application: Where did you initially come across this job advertisement?

Contact(s)

For enquiries about Ph.D studies and employment conditions please contact:

Eva-Lena Åkerman, HR Manager, Phone: +46 8 790 91 06, Email: ela@csc.kth.se

For enquiries about the project please contact:

Olov Engwall, Professor in Speech Communication, Phone: +46 8 790 75 35, Email: engwall@kth.se

Union representative

Lars Abrahamsson, SACO, Phone: +46 8 790 70 58, Email: lars.abrahamsson@ee.kth.se


6-12(2013-02-19) Maître de conférences position in Computer Science at Université Paris-Sud (IUT d'Orsay), France

A Maître de conférences (associate professor) position in computer science is open for competitive recruitment at Université Paris-Sud, to be based at the IUT d'Orsay.

Speech processing (from vocal interaction to the indexing of multimedia documents) is one of the priority research themes of the position; this topic is developed at LIMSI in the TLP group (see also http://www.limsi.fr/tlp/postes13.html).

The detailed job description is available at:

http://www.u-psud.fr/_attachments/enseignants_chercheurs/Fiche%2520emploi%252027%2520MCF%25201905x.pdf?download=true

Application procedures (applications must be submitted by 28 March 2013):

http://www.u-psud.fr/fr/recrutement/enseignants/enseignants_chercheurs.html


6-13(2013-02-25) 12-month post-doc position, INRIA-LORIA, Nancy, France

Within the framework of the ANR project ContNomina (2013-2016), we offer a 12-month post-doctoral position funded by the project:

Detection of proper names in automatic speech transcriptions of French

Although named-entity recognition in English achieves excellent performance, this is not the case for other languages, for application domains with little training data, or on automatic transcriptions. The work therefore consists of proposing solutions to: (i) exploit the confidence measures of the recognition system and the lexical context in order to locate proper names, whether or not they are present in the transcription lexicon; and (ii) compensate for the absence or scarcity of training data, giving the system an incremental and self-adaptive character that is essential for a long-term exploitation of the results, beyond the duration of the project itself. Generative Bayesian models form a promising theoretical framework for addressing this challenge.

Contacts:

Irina Illina, coordinator of the ANR project ContNomina, INRIA-LORIA, Nancy, Parole team, tel. 03 54 95 84 90, illina@loria.fr

Christophe Cerisara, INRIA-LORIA, Nancy, Sinalp team, tel. 03 54 95 86 25, cerisara@loria.fr

 


6-14(2013-02-21) 4 positions for doctoral students, University of Gothenburg, Sweden

My department is now announcing four (not many, but still) positions for doctoral students. The positions offer four years of full funding (a full-time salary on the order of €2500 per month), office space, a computer and the other technical resources necessary to do the job. You will find more information here:

http://www.flov.gu.se/english/education/doctoral-studies-third-cycle/admission/?languageId=100001&contentId=-1&disableRedirect=true&returnUrl=http%3A%2F%2Fwww.flov.gu.se%2Futbildning%2Fforskarniva%2Fansokan%2F

I would greatly appreciate if you could make this opportunity known among your students.

Anders Eriksson, MSc, PhD.
Professor of Phonetics
Department of Philosophy, Linguistics and Theory of Science
University of Gothenburg
Box 200, SE-405 30 Gothenburg, SWEDEN
Web: http://www.ling.gu.se/~anders

 


6-15(2013-02-22) Post-doc at François Rabelais University, Tours (France)

Job offer: Post-doc

A post-doctoral position in linguistics is available at François Rabelais University, Tours (France), to work with the linguists of Inserm Research Unit 930 'Imagerie & cerveau' on a research project funded by the French National Research Agency, BiLaD (Développement du langage bilingue : enfants à développement typique et enfants avec troubles du langage — bilingual language development in typically developing children and in children with language impairment).

Starting date: September 1st, 2013, for a duration of 12 months (renewable for a total of 24 months).

To apply, send a CV plus the names and contact information of two recommenders to Laurie Tuller (tuller@univ-tours.fr).

Responsibilities: Organization, processing and analysis of linguistic and psycholinguistic data from experimental and standardized tasks (verbal and non-verbal).

Required training and skills: PhD in linguistics or a related field (psychology, cognitive science) with a specialty in language acquisition. Solid experience in transcribing linguistic data and in using data entry and processing software (Excel, SPSS, PRAAT).

Other desirable skills: Experience with children with SLI; familiarity with the fields of speech-language pathology/therapy and bilingualism. Ability to work in French and in English is required; mastery of Turkish or Portuguese would be an asset.

Deadline: May 15, 2013.


6-16(2013-02-22) Researcher/PostDoc/Intern positions at Telefonica Research Barcelona, Spain
We are seeking candidates for Researcher/PostDoc/Intern positions to
strengthen and complement our efforts in the areas we currently work
on:
- Distributed systems and networking
- Human-computer interaction
- Mobile computing
- Multimedia analysis
- Speech processing
- Recommender systems
- User modeling and machine learning
- Security and privacy
 
The Telefonica Digital Research group was created in 2006 and follows
an open research model in collaboration with Universities and other
research institutions, promoting the dissemination of scientific
results both through publications in top-tier peer-reviewed
international journals and conferences and technology transfer. Our
multi-disciplinary and international team comprises more than 20 full
time researchers, holding PhD degrees in various disciplines of
computer science and electrical engineering.
 
The salaries we offer are competitive and will depend upon the
candidate's experience. We also offer great benefits and a stimulating
and friendly working atmosphere in one of the most vibrant cities in
the world, Barcelona (Spain).
 
You can find more information about the group here:
http://www.tid.es/en/Research/Pages/TIDResearchHome.aspx
 
To apply for a position at Telefonica Research Barcelona, please send
an e-mail with your cv and research statement to:
careers_research@tid.es
 
Applications submitted by March 10, 2013 will receive full
consideration, although we will continue to accept applications after
this date until all positions are filled.
 
 

6-17(2013-03-01) Maître de Conférences in Computer Science: statistical machine translation, Le Mans, France

Maître de Conférences (associate professor) position in Computer Science
Profile: 'Statistical machine translation'
Université du Maine, Le Mans
GALAXIE reference: 4055


Research profile:

The candidate must have in-depth knowledge of at least one of the following areas: statistical machine translation, natural language processing, machine learning. Experience in building large translation systems with the Moses toolkit is particularly welcome.


Candidates are invited to contact:
Holger Schwenk: holger.schwenk@lium.univ-lemans.fr

Location: LIUM, Université du Maine, Le Mans campus
Lab director: Yannick Estève
Lab director's phone: 02 43 83 38 74
Lab director's email: yannick.esteve@lium.univ-lemans.fr
Lab URL: http://www-lium.univ-lemans.fr


Laboratory description:
The Computer Science Laboratory of the Université du Maine (LIUM - EA 4023), founded about twenty-five years ago, brings together most of the university's computer science faculty. It currently numbers about 50 people: 20 faculty members, 8 post-doctoral researchers, 13 PhD students and 4 technical and administrative staff. LIUM comprises two teams: a team of nine permanent faculty members specializing in technology-enhanced learning environments (Ingénierie des EIAH), led by Christophe Choquet, and a team of eleven faculty members specializing in speech recognition and machine translation (Language and Speech Technology, LST), led by Paul Deléglise.


Teaching profile:

The candidate must be able to teach the core subjects of computer science in French (algorithms, object-oriented programming, networks, databases, software engineering, etc.).


Head of the teaching department:
Christophe Després Christophe.Despres@univ-lemans.fr



6-19(2013-03-03) Six positions at Nuance, Vienna, Austria

 

 

Nuance Healthcare, a division of Nuance Communications, is the market leader in providing clinical understanding solutions that accurately capture and transform the patient story into meaningful, actionable information. Thousands of hospitals, providers and payers worldwide trust Nuance speech-enabled clinical documentation and analytics solutions to facilitate smarter, more efficient decisions across the healthcare enterprise. These solutions are proven to increase clinician satisfaction and HIT adoption, supporting organizations to achieve Meaningful Use of EHR systems and transform to the accountable care model. Nuance Healthcare has been recognized as “Best-in-KLAS” 2004-2012 for Speech Recognition.

 

 

Research Scientist (m/f) – Computational Linguist NLP

Preferred Vienna / Austria

As a Research Scientist - Computational Linguist - you will be part of the Healthcare Automatic Speech Recognition Research team. You will work on research and development of algorithms, resources and methods to support data collection and improve accuracy of Nuance healthcare products.    

 

 

Your Task.

  • Development of methods, algorithms, resources and tools in the area of natural language processing (NLP) and computational linguistics
  • Experimental and theoretical analysis of problems related to computational linguistics and NLP
  • Following academic developments in the speech recognition area, attending conferences and writing scientific papers (if applicable)

Your Profile.

  • PhD or Master's degree in (computational) linguistics, engineering sciences, mathematics or physics
  • Knowledge of machine learning methods (statistical, rule-based)
  • Analytical and problem-solving skills
  • Computer programming (C++ and/or scripting languages)
  • Team player, but also able to work on their own initiative
  • Willingness to learn
  • English language

Preferred

  • Experience (industrial or academic) in NLP or computational linguistics
  • Knowledge in automatic speech recognition
  • Linguistic knowledge    

Our offer.

 

We offer a competitive compensation package and a casual yet technically challenging work environment. Join our dynamic, entrepreneurial team and become part of our fast growing track of continuing success.  Nuance is an Equal Opportunity Employer.

 

  • Innovative products
  • Full-time & permanent employment
  • International projects and international teams
  • Development within the Nuance organization

 

Does Nuance speak to you?

 

Please apply via our recruiting tool on our homepage, https://jobs-nuance.icims.com/jobs/9447/job, or via EMEAjobs@nuance.com, quoting reference number 9447-Research Scientist - Comp Linguist NLP. Please provide a CV, supporting documents and a letter of motivation, including your preferred country, start date and salary expectations.

For more information, visit us at www.nuance.com.

°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°

Research Scientist (m/f) – Language Modeling

Preferred Vienna / Austria

As a Research Scientist you will be part of the Healthcare Automatic Speech Recognition Research team. You will work on research and development of ASR algorithms, resources and methods to improve accuracy and performance of Nuance healthcare products.

 

Your Task.

  • Work in the language modeling and grammar design area
  • Development of training and adaptation tools and algorithms
  • Performing speech recognition (computer) experiments and studies
  • Experimental and theoretical analysis of speech recognition problems
  • Following academic developments in the speech recognition area, attending conferences and writing scientific papers (if applicable)

Your Profile.

  • PhD or Master's degree in engineering sciences, mathematics or physics
  • Knowledge of automatic speech recognition methods and algorithms
  • Analytical and problem-solving skills
  • Statistical data analysis
  • Scripting languages in a Unix/Linux environment (e.g. Perl, Python, Bash)
  • Team player, but also able to work on their own initiative
  • Willingness to learn
  • English language

Preferred

  • Experience (industrial or academic) in language modeling area
  • Statistical pattern recognition
  • Linguistic knowledge

Our offer.

 

We offer a competitive compensation package and a casual yet technically challenging work environment. Join our dynamic, entrepreneurial team and become part of our fast growing track of continuing success.  Nuance is an Equal Opportunity Employer.

 

  • Innovative products
  • Full-time & permanent employment
  • International projects and international teams
  • Development within the Nuance organization

 

Does Nuance speak to you?

 

Please apply via our recruiting tool on our homepage, https://jobs-nuance.icims.com/jobs/9446/job, or via EMEAjobs@nuance.com, quoting reference number 9446-Research Scientist - LM. Please provide a CV, supporting documents and a letter of motivation, including your preferred country, start date and salary expectations.

For more information, visit us at www.nuance.com.

 

°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°

Speech Scientist (m/f) - Linguist

Preferred Vienna / Austria

As a Speech Scientist - (Computer) Linguist - you will be part of the Healthcare Automatic Speech Recognition Research team. You will work on research and development of resources, algorithms and methods to improve accuracy and performance of Nuance healthcare products.

 

 

Your Task.

  • Development of rules (grammars), lexicons and statistical models for speech recognition
  • Performing speech recognition (computer) experiments and studies
  • Following academic developments in the speech recognition area, attending conferences and writing scientific papers (if applicable)

Your Profile.

  • PhD or Master's degree in linguistics, machine learning, computational linguistics or a related field
  • (Computational) linguistics and/or machine learning knowledge
  • Scripting languages (e.g. Perl and/or Python)
  • Analytical and problem-solving skills
  • Team player, but also able to work on their own initiative
  • Communication / coordination skills
  • Willingness to learn
  • English language

Preferred

  • Experience in the area of automatic speech recognition
  • Knowledge of context-free grammars
  • Experience working in Unix/Linux and Windows environments

Our offer.

 

We offer a competitive compensation package and a casual yet technically challenging work environment. Join our dynamic, entrepreneurial team and become part of our fast growing track of continuing success.  Nuance is an Equal Opportunity Employer.

 

  • Innovative products
  • Full-time & permanent employment
  • International projects and international teams
  • Development within the Nuance organization

 

Does Nuance speak to you?

 

Please apply via our recruiting tool on our homepage, https://jobs-nuance.icims.com/jobs/9445/job, or via EMEAjobs@nuance.com, quoting reference number 9445-Speech Scientist Linguist. Please provide a CV, supporting documents and a letter of motivation, including your preferred country, start date and salary expectations.

For more information, visit us at www.nuance.com.

 

°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°

 

Research Scientist (m/f) – Acoustic Modeling

Preferred Vienna / Austria

As a Research Scientist you will be part of the Healthcare Automatic Speech Recognition Research team. You will work on research and development of ASR algorithms, resources and methods to improve accuracy and performance of Nuance healthcare products.

  

 

Your Task.

  • Work in acoustic modeling area
  • Development of training and adaptation tools and algorithms
  • Performing speech recognition (computer) experiments and studies
  • Experimental and theoretical analysis of speech recognition problems
  • Following academic developments in the speech recognition field, attending conferences and writing scientific papers (if applicable)

Your Profile.

  • PhD or Master degree in engineering sciences, mathematics or physics
  • Knowledge of automatic speech recognition methods and algorithms
  • Analytical and problem-solving skills
  • Statistical data analysis
  • Scripting languages in a Unix/Linux environment (e.g. Perl, Python, Bash)
  • Team player, but also able to work on own initiative
  • Willing to learn
  • English language

Preferred

  • Experience (industrial or academic) in acoustic modeling area
  • Statistical pattern recognition
  • Signal processing

Our offer.

 

We offer a competitive compensation package and a casual yet technically challenging work environment. Join our dynamic, entrepreneurial team and become part of our fast growing track of continuing success.  Nuance is an Equal Opportunity Employer.

 

  • Innovative products
  • Full-time & permanent employment
  • International projects and international teams
  • Development within the Nuance organization

 

Does Nuance speak to you?

 

Please apply via our recruiting tool on our homepage https://jobs-nuance.icims.com/jobs/9444/job or via EMEAjobs@nuance.com, quoting reference number 9444-Research Scientist - AM. Please provide a CV, supporting documents and a letter of motivation including your preferred country, start date and salary expectations.

For more information, visit us at www.nuance.com.

 °°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°

 

Research Scientist (m/f) - Acoustic Modeling

Preferred Vienna / Austria

As a Research Scientist you will be part of the Healthcare Automatic Speech Recognition Research team. You will work on research and development in the area of acoustic modeling to improve the accuracy and performance of Nuance healthcare products.

 

 

Your Task.

  • Work in acoustic modeling area (e.g. creation of acoustic models for various languages)
  • Performing speech recognition (computer) experiments and studies
  • Experimental and theoretical analysis of speech recognition problems
  • Following academic developments in the speech recognition field, attending conferences and writing scientific papers (if applicable)

Your Profile.

  • PhD or Master degree in engineering sciences, mathematics or physics
  • Knowledge of automatic speech recognition methods and algorithms
  • Analytical and problem-solving skills
  • Statistical data analysis
  • Scripting languages in a Unix/Linux environment (e.g. Perl, Python, Bash)
  • Team player, but also able to work on own initiative
  • Willing to learn
  • English language

Preferred

  • Experience (industrial or academic) in acoustic modeling area
  • Statistical pattern recognition
  • Signal processing

Our offer.

 

We offer a competitive compensation package and a casual yet technically challenging work environment. Join our dynamic, entrepreneurial team and become part of our fast growing track of continuing success.  Nuance is an Equal Opportunity Employer.

 

  • Innovative products
  • Full-time & permanent employment
  • International projects and international teams
  • Development within the Nuance organization

 

Does Nuance speak to you?

 

Please apply via our recruiting tool on our homepage https://jobs-nuance.icims.com/jobs/9443/job or via EMEAjobs@nuance.com, quoting reference number 9443-Research Scientist AM. Please provide a CV, supporting documents and a letter of motivation including your preferred country, start date and salary expectations.

For more information, visit us at www.nuance.com.

 °°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°°

 

Research Scientist (m/f) – Automatic Speech Recognition

Preferred Vienna / Austria

As a Research Scientist you will be part of the Healthcare Automatic Speech Recognition Research team. You will work on research and development of ASR algorithms, resources and methods to improve accuracy and performance of Nuance healthcare products.

 

Your Task.

  • Application specific speech recognition research
  • Tuning and optimization of speech recognition algorithms
  • Software implementation
  • Analysis of application specific speech recognition issues
  • General expertise in the speech recognition area

Your Profile.

  • PhD or Master degree in engineering sciences, mathematics or physics
  • Analytical and problem-solving skills
  • Statistical data analysis
  • Linux OS, including bash
  • Team player, but also able to work on own initiative
  • Willing to learn
  • English language
  • Either
    • a strong background in the area of automatic speech recognition
    • some experience in software development
  or
    • a strong background in software development
    • some experience in the area of automatic speech recognition

Preferred Skills:

  • Experience with an object-oriented programming language
  • Experience with scripting languages like Perl and Python
  • Statistical pattern recognition

Our offer.

 

We offer a competitive compensation package and a casual yet technically challenging work environment. Join our dynamic, entrepreneurial team and become part of our fast growing track of continuing success.  Nuance is an Equal Opportunity Employer.

 

  • Innovative products
  • Full-time & permanent employment
  • International projects and international teams
  • Development within the Nuance organization

 

Does Nuance speak to you?

 

Please apply via our recruiting tool on our homepage https://jobs-nuance.icims.com/jobs/8047/job or via EMEAjobs@nuance.com, quoting reference number 8047-Research Scientist - ASR. Please provide a CV, supporting documents and a letter of motivation including your preferred country, start date and salary expectations.

For more information, visit us at www.nuance.com.

 

ILONA ALEXANDRA HOLTZ

Recruiter - Employment Specialist DACH

Human Resources

Nuance Communications Deutschland GmbH

Site Ulm

Soeflingerstr. 100

D-89077 Ulm, Germany

Phone    +49 731 - 379 50 1166

Fax      +49 731 - 379 50 1106 (switchboard)

Mobile   +49 170 56 15 235

 

WWW.NUANCE.COM The experience speaks for itself ™

Geschäftsführung/Director: Caroline Curtis, Todd Michael DuChene, Thomas L. Beaudoin

Sitz der Gesellschaft/Registered Office: Aachen

Registergericht/Court of Registration: Aachen

Reg. Nr.: HRB 16313

USt-ID/VAT: DE 264500438


 


 

Top

6-20(2013-03-13) Speech Data Evaluator for French at Google Dublin Ireland

Speech Data Evaluator for French


Job title:

Speech Data Evaluator for French (multiple positions)

In Dublin.


Job description:

As a Speech Data Evaluator and a native speaker of French, you will be part of a team based in Dublin, processing large amounts of linguistic data and carrying out a number of tasks to improve the quality of Google’s speech synthesis and speech recognition in your own language.

This includes:

  • annotating and classifying linguistic data

  • labeling text for disambiguation, expansion, and text normalization

  • providing phonetic transcription of lexicon entries according to given standards and using in-house tools

Job requirements:

  • native speaker of French (with good command of the standard dialect) and fluent in English

  • computer-literate (should feel comfortable using in-house tools)

  • attention to detail

  • good knowledge of orthography and grammar in French

  • passion for language and a keen interest in technology

  • good organizational skills

  • a degree in a language-related field such as linguistics, language teaching, translation, editing, writing, proofreading, or similar

Project duration: 6-9 months (with potential for extension)

For immediate consideration, please email your CV and cover letter in English (PDF format preferred) with 'Speech Data Evaluator French' in the subject line.

 

Email Address for applications: DataOpsMan@gmail.com

Contact information: Linne Ha

Closing date: open until filled
 
Top

6-21(2013-03-15) Postdoctoral position in Natural Language Processing at LIUM, Le Mans, France

Postdoctoral position at LIUM
Natural Language Processing

OCR output correction, statistical machine translation, language modeling.
Postdoctoral position within the Computer Science Laboratory of the Université du Maine (LIUM), in the field of spelling correction using statistical machine translation methods.

Summary of the offer
– Topics: natural language processing, applied to OCR correction, and statistical machine translation.
– Location: LIUM (Le Mans), LST team (http://www-lium.univ-lemans.fr/).
– Period: available now, for a duration of one year, renewable.

Context
This postdoc is part of the PACTE project (an 'investissement d'avenir' programme) led by the company Diadeis, whose other partners are the Alpage team (INRIA and Paris 7) and the companies A2ia and Isako. PACTE aims to improve the orthographic quality of texts produced by various text-capture methods. The emphasis is on the output of OCR (optical character recognition) applied to scanned printed texts, but the project also concerns data obtained by handwriting recognition, manual keying, and direct composition. The techniques used will be both statistical and hybrid, making use of computational-linguistics tools and resources.

Objectives
Verification and correction of OCR output using statistical language modeling. The OCR systems in use exploit little or no knowledge about the language; the objective is to exploit language modeling in order to fill this gap.
Use of statistical machine translation for correcting errors in OCR output. Correcting OCR output can be seen as the task of translating an erroneous text into a correct text. In the OCR setting, the translation paradigm must be adapted to take the specificities of the task into account.
The applicative setting of this work is quite exceptional, with the exploitation of a large quantity of data coming notably from the European Patent Office (EPO) and the Official Journal of the European Union.

Desired profile
– Computer science skills: Linux environment, C++, scripting, etc.;
– Knowledge of machine learning and computational linguistics;
– Experience in statistical machine translation is a plus.
The postdoc will take place within the LST team of LIUM. LIUM is internationally known for its research in statistical machine translation, and has numerous collaborations with universities and companies in Europe and the United States.

Contacts
Send a cover letter and a CV demonstrating your suitability for this position to the following addresses:
Loïc Barrault: loic.barrault@lium.univ-lemans.fr
Holger Schwenk: holger.schwenk@lium.univ-lemans.fr

Top

6-22(2013-03-20) Post-doc job proposal, LIMSI, France

Postdoctoral position at LIMSI, France
Expressive Vocal Signal Analysis and Modelling

Content
During the post-production of a movie, as well as for video games, digital double rendering techniques are used to modify the actor's performance without the need to play the scene again. This project aims at characterising and reproducing the actor's vocal personality. By creating a digital double of an actor, it will become possible to create new auditory scenes, as well as to dub a movie while preserving the actor's voice, habits and vocal personality.
We thus aim at characterising the expressive space of a given speaker, mainly in the prosodic domain, but also in terms of voice quality and articulation peculiarities.
This postdoctoral position thus consists in implementing a set of voice analysis methods in order to observe the variations of the vocal source and of articulation in a given set of expressive performances. Databases presenting variations of vocal effort and expressive speech are already available and labelled. They will be analysed in order to model the expressive changes in terms of variations of the vocal signal. These models will be confronted with high-level descriptions of the expressive content of the databases.

Required competences
This research requires sound knowledge of signal processing applied to speech analysis. Skills in phonetics and/or linguistics are valuable.
The ability to work in a team and in a deadline-oriented environment is mandatory.
Candidates with a background in signal processing, computational linguistics or acoustic phonetics will be taken into consideration.

Research team
This position is part of the French FUI ADN T-R project. It takes place at LIMSI-CNRS (www.limsi.fr) in the Audio & Acoustics group, in collaboration with partners from industry.
LIMSI-CNRS is a research lab located on the University Paris-Sud campus in Orsay, south of Paris. LIMSI is internationally renowned for its work on speech processing. The Audio & Acoustics group focuses more specifically on speech analysis and synthesis, real-time audio processing and expressive sound.
This project will start as soon as possible (from May 2013) and is funded by the FUI agency for a maximum of 16 months. The gross monthly salary is approximately 2500€, on the basis of a CNRS post-doctoral contract.

Supervision & contact:
This postdoctoral position will be supervised by Christophe d'Alessandro and Albert Rilliard.
Applications should be sent to:

 

Top

6-23(2013-05-01) Two positions at CSTR at the University of Edinburgh Scotland UK
1.

Marie Curie Research Fellow in Speech Synthesis and Speech Perception

'Using statistical parametric speech synthesis to investigate speech perception'

The Centre for Speech Technology Research (CSTR) 
University of Edinburgh

This is a rare opportunity to hold a prestigious individual fellowship in a world-leading research group at a top-ranked University, mentored by leading researchers in the field of speech technology. Marie Curie Experienced Research Fellowships are aimed at the most talented newly-qualified postdoctoral researchers, who have the potential to become leaders in their fields. This competitively salaried fellowship offers an outstanding young scientist the opportunity to kick-start his or her independent research career in speech technology, speech science or laboratory phonetics. 

This fellowship is part of the INSPIRE Network (http://www.inspire-itn.eu) and the project that the CSTR Fellow will spearhead involves developing statistical parametric speech synthesis into a toolbox that can be used to investigate issues in speech perception and understanding. There are excellent opportunities for collaborative working and joint publication with other members of the network, and generous funding for travel to visit partner sites, and to attend conferences and workshops.

The successful candidate should have a PhD (or be near completion) in computer science, engineering, linguistics, mathematics, or a related discipline. He or she should have strong programming skills and experience with statistical parametric speech synthesis, as well as an appropriate level of ability and experience in machine learning. The fellowship is fixed-term for 12 months (to start as soon as possible). CSTR is a successful and well-funded group, so there are excellent prospects for further employment after the completion of the fellowship.

The Marie Curie programme places no restrictions on nationality: applicants can be of any nationality and currently resident in any country worldwide, provided they meet the eligibility requirements set out in the full job description (available online - URL below). 

Salary:  GBP 42,054 to GBP 46,731 plus mobility allowance 

Informal enquiries about this position should be made to Prof Simon King (Simon.King@ed.ac.uk) or Dr Cassie Mayo (catherin@inf.ed.ac.uk). 


Apply online:

https://www.vacancies.ed.ac.uk/pls/corehrrecruit/erq_jobspec_version_4.jobspec?p_id=013062

Closing date: 10 Jun 2013



2.


An Open Position for Postdoctoral Research Associate in Speech Synthesis 

The Centre for Speech Technology Research (CSTR) 
University of Edinburgh

This post holder will contribute to our ongoing research in statistical parametric ('HMM-based') speech synthesis, working closely with Principal Investigators Dr. Junichi Yamagishi and Prof. Simon King, in addition to other CSTR researchers. The focus of this position will be to conduct research into methods for generating highly intelligible synthetic speech, for a variety of applications, in the context of three ongoing and intersecting projects in CSTR: 

The 'SALB' project concerns the generation of extremely fast, but highly intelligible, synthetic speech for blind children. This is a joint project with the Telecommunications Research Centre Vienna (FTW) in Austria, and is funded by the Austrian Federal Ministry of Science and Research. 

The 'Voice Bank' project concerns the building of synthetic speech using a very large set of recordings of amateur speakers (‘voice donors’) in order to produce personalised voices for people whose speech is disordered, due to Motor Neurone Disease. This is a joint project with the Euan MacDonald Centre for MND research, and is funded by the Medical Research Council. The main tasks will be to conduct research into automatic intelligibility assessment of disordered speech and to devise automatic methods for data selection from the large voice bank. 

The 'Simple4All' project is a large multi-site EU FP7 project led by CSTR which is developing methods for unsupervised and semi-supervised learning for speech synthesis, in order to create complete text-to-speech systems for any language or domain without relying on expensive linguistic resources, such as labelled data. The main tasks here will be to further the overall goals of the project, including contributing original research ideas. There is considerable flexibility in the research directions available within the Simple4All project and the potential for the post holder to form a network of international collaborators. 

The successful candidate should have a PhD (or be near completion) in computer science, engineering, linguistics, mathematics, or a related discipline. He or she should have strong programming skills and experience with statistical parametric speech synthesis. 

Whilst the advertised position is for 24 months (due to the particular projects that the post-holder will contribute to), CSTR is a stable, well-funded and successful group with a tradition of maintaining long-term support for ongoing lines of research and of building the careers of its research fellows. We expect to obtain further grant-funded research projects in the future. 

Informal enquiries about this position to either Dr. Junichi Yamagishi (jyamagis@inf.ed.ac.uk) or Prof. Simon King (Simon.King@ed.ac.uk). 

Apply Online: 
https://www.vacancies.ed.ac.uk/pls/corehrrecruit/erq_jobspec_version_4.jobspec?p_id=013063

Closing date: 10 Jun 2013

 

Top

6-24(2013-05-01) Ph D or post doc at University of Karlsruhe Germany

 

A job opening is to be filled as soon as possible, as part of a project sponsored by the Deutsche Forschungsgemeinschaft (DFG), at the Department of Computer Science of the Cooperative State University in Karlsruhe, Germany, for a duration of up to 18 months at 50% employment, for a

Ph.D. Research Assistant

or Post-Doctoral Researcher

in the field of

Automatic Language Processing for Education

This job opening is in the field of automatic speech recognition as part of a joint research project between Karlsruhe Institute of Technology (KIT), the Cooperative State University (DHBW) and the University of Education (PH) sponsored by DFG involving speech technology for educational system. (Working language: English or German)

Description

Starting as soon as possible, we are seeking an experienced and motivated person to join our team of researchers from the above-mentioned institutes. The ideal candidate will have knowledge of computational linguistics and algorithm design. Responsibilities include the use and improvement of research tools to update and optimize algorithms applied to diagnostics of children's (German) writing, using speech recognition and speech synthesis tools. For further details of this work, please refer to publications at SLaTE 2011, Interspeech 2011, and WOCCI 2012 by authors Berkling, Stüker, and Fay.

Joint and collaborative research between the partners will be very close, offering exposure to each research lab.

Candidates:

- Doctoral research candidates may apply and are welcome for joint research with their host institution.
- Experienced (post-doctoral) research candidates are already in possession of a doctoral degree or have at least 3 but less than 5 years of research experience in engineering and/or hearing research.

Requirements:

- Higher degree in speech science, linguistics, machine learning, or a related field
- Experience developing ASR applications: training, tuning, and optimization
- Software development experience (for example: Perl, TCL, Ruby, Java, C)
- Excellent communication skills in English
- Willingness and ability to spend 18 months in Germany, working in a team with project partners
- Knowledge of German linguistics, phonics, graphemes, and morphology, or willingness to learn
- Strong interest in computational linguistics, morphology, and phonics for German

Desirable:

- Interest in education and language learning
- Interest in human-computer interaction and game mechanics
- Ability to create graphical interfaces in multi-player applications
- Experience working with Ruby on Rails

The job will allow for interesting work within a modern and well-equipped environment in the heart of Karlsruhe. The salary level, depending on your circumstances, will be in line with the TV-L 13 tariff. KIT and the Cooperative State University pursue a gender equality policy; women are therefore particularly encouraged to apply. If equally qualified, handicapped applicants will be preferred (please submit your paperwork accordingly). Non-EU candidates need to check their ability to reside in Germany.

Interested candidates should send their application (CV, certified copies of all relevant diplomas and transcripts, two letters of recommendation, proof of proficiency in English, and a letter of motivation stating research interests and reasons for applying), with notification of the job number, to be received on or before April 26, 2013.

Send electronic applications to: berkling@dhbw-karlsruhe.de.

Questions about details of the job can be directed to berkling@dhbw-karlsruhe.de.

Top

6-25(2013-05-01) PhD thesis position at ParisTech
Note: the complete application must be submitted on the EDITE website by May 22 at the latest.

PhD topic: Verbal content processing and sentiment analysis in human-agent interaction systems

Proposed by: Chloé Clavel
Thesis supervisor: Catherine Pelachaud
Advisor: Chloé Clavel
Research unit: UMR 5141 Laboratoire Traitement et Communication de l'Information
Department: Signal and Image Processing (Traitement du Signal et des Images)
Area: Natural Language Processing, Human-Machine Dialogue
Theme P: Signal, Image, SHS
Funding: EDITE grant (for details, see http://edite-de-paris.fr/spip/spip.php?article172)

Contacts:
chloe.clavel@telecom-paristech.fr
catherine.pelachaud@telecom-paristech.fr

**Project
Sentiment analysis and opinion mining is a rapidly expanding field, driven by the massive influx of textual data on the web containing expressions of opinion by citizens (film reviews, debates in forum comments, tweets) (El-Bèze et al., 2010). Research in natural language processing is mobilising around the development of methods for detecting opinion in texts, building on these new resources. The diversity of the data and of the industrial applications calling on these methods multiplies the scientific challenges to be met, including taking into account the various contexts of utterance (e.g., social and political context, personality of the speaker) and defining the opinion phenomenon to be analysed according to the application context. These methods for analysing sentiment in texts have also recently been extended to speech, through the analysis of automatic transcripts produced by speech recognition systems, for applications such as indexing radio broadcasts or call-centre conversations (Clavel et al., 2013), and can thus be correlated with acoustic/prosodic methods for emotion analysis (Clavel et al., 2010).

Another rapidly expanding scientific field, that of embodied conversational agents (ECAs), involves virtual characters interacting with humans. ECAs can take on the role of assistant, like the conversational agents found on commercial websites (Suignard, 2010), of tutor in serious games (Chollet et al., 2012), or of partner in video games. The major scientific challenge for this field is the integration, within the ECA, of the affective component of the interaction: on the one hand, taking into account the affective behaviours and social attitudes of the human, and on the other, generating them in a relevant way.

For this thesis, we propose to work on the detection of opinions and sentiments in the context of multimodal interaction between a human and an embodied conversational agent, a subject so far little studied by the agent community. Indeed, on the one hand, ECAs react to essentially non-verbal emotional content (Schröder et al., 2011), and on the other, 'assistant' ECAs react to informative verbal content (Suignard, 2010) without taking into account the opinions or sentiments expressed by the user. Initial studies have addressed the recognition of affect in language in the context of interaction with an agent (Osherenko et al., 2009), but these have been considered independently of the dialogue strategy.

The developments of the thesis will be integrated into the GRETA platform, which rests on SAIBA, a unified global architecture developed by the agent community for the generation of multimodal behaviours (Niewiadomski et al., 2011). Greta allows the agent to communicate with humans by generating a wide range of expressive verbal and non-verbal behaviours (Bevacqua et al., 2012); it can simultaneously display facial expressions, gestures, gaze, and head movements. This platform was notably integrated in the SEMAINE project, with the development of a real-time human-agent interaction architecture (Schröder et al., 2011) that includes acoustic and video analysis, a dialogue management system and, on the synthesis side, the OpenMary text-to-speech system and the GRETA virtual agent. Following the example of this project, the opinion and sentiment detection envisaged in the thesis will serve as input to the platform's multimodal interaction models. The multimodal dialogue strategy associated with these verbal-content inputs will have to be defined and integrated into the GRETA platform.

**Challenges
The thesis will focus on the joint development of methods for detecting opinions and sentiments and of human-agent dialogue strategies. The methods envisaged are hybrid, combining statistical learning and expert rules. For the dialogue strategies, the PhD student will be able to build on the work carried out on the DISCO dialogue engine (Rich et al., 2012) and on the engine developed in the SEMAINE project (Schröder et al., 2011). The methods developed may also draw on analyses of human-human or Wizard-of-Oz corpora (McKeown et al., 2012), and an evaluation protocol for these methods will have to be put in place. In particular, to meet this objective, the thesis will address the following issues:
- defining the types of opinion and sentiment that are relevant as input to the dialogue engine: this will mean going beyond the classical distinction between positive and negative opinions, of little relevance in this context, by drawing on models from psycholinguistics (Martin and White, 2007);
- identifying the lexical, syntactic, semantic and dialogic markers of opinions and sentiments;
- taking the context of utterance into account: the implemented rules may integrate different analysis windows: the sentence, the speaking turn, and the preceding speaking turns;
- taking into account the real-time constraints of the interaction: dialogue strategies will be defined according to the different analysis windows, in order to offer interaction strategies at different levels of reactivity. For example, certain keywords may be used as real-time backchannel triggers, and the planning of the agent's behaviours may be adjusted as the interaction progresses.

**International dimension:
This thesis work complements the work on non-verbal interaction carried out in the European FP7 project TARDIS, whose application is serious games for job-interview training (http://tardis.lip6.fr/presentation), and the work on social signal processing carried out in the SSPNET network of excellence (http://sspnet.eu/). A collaboration will also be set up with Candy Sidner, professor in the Computer Science department of the Worcester Polytechnic Institute, an expert in computational models of verbal and non-verbal interaction and the originator of the DISCO dialogue engine (Rich et al., 2012).

**References:
E. Bevacqua, E. de Sevin, S.J. Hyniewska, C. Pelachaud (2012), A listener model: Introducing personality traits, Journal on Multimodal User Interfaces, special issue on Interacting ECAs (E. André, M. Cavazza and C. Pelachaud, guest editors), 6:27-38, 2012.
M. Chollet, M. Ochs and C. Pelachaud (2012), Interpersonal stance recognition using non-verbal signals on several time windows, Workshop Affect, Compagnon Artificiel, Interaction, Grenoble, November 2012, pp. 19-26.
C. Clavel and G. Richard (2010), Reconnaissance acoustique des émotions, in Systèmes d'interactions émotionnelles, C. Pelachaud (ed.), chapter 5, 2010.
C. Clavel, G. Adda, F. Cailliau, M. Garnier-Rizet, A. Cavet, G. Chapuis, S. Courcinous, C. Danesi, A-L. Daquo, M. Deldossi, S. Guillemin-Lanne, M. Seizou, P. Suignard (2013), Spontaneous Speech and Opinion Detection: Mining Call-centre Transcripts, Language Resources and Evaluation, April 2013.
M. El-Bèze, A. Jackiewicz, S. Hunston (2010), Opinions, sentiments et jugements d'évaluation, Revue TAL, volume 51, number 3, 2010.
J.R. Martin, P.R.R. White (2007), The Language of Evaluation: Appraisal in English, Palgrave Macmillan, November 2007.
G. McKeown, M. Valstar, R. Cowie, M. Pantic, M. Schröder (2012), The SEMAINE Database: Annotated Multimodal Records of Emotionally Colored Conversations between a Person and a Limited Agent, IEEE Transactions on Affective Computing, volume 3, issue 1, pp. 5-17, Jan.-March 2012.
R. Niewiadomski, S. Hyniewska, C. Pelachaud (2011), Constraint-Based Model for Synthesis of Multimodal Sequential Expressions of Emotions, IEEE Transactions on Affective Computing, vol. 2, no. 3, pp. 134-146, July 2011.
A. Osherenko, E. André, T. Vogt (2009), Affect sensing in speech: Studying fusion of linguistic and acoustic features, International Conference on Affective Computing and Intelligent Interaction and Workshops, 2009.
C. Rich, C.L. Sidner (2012), Using Collaborative Discourse Theory to Partially Automate Dialogue Tree Authoring, IVA 2012, pp. 327-340.
M. Schröder, E. Bevacqua, R. Cowie, F. Eyben, H. Gunes, D. Heylen, M. ter Maat, G. McKeown, S. Pammi, M. Pantic, C. Pelachaud, B. Schuller, E. de Sevin, M. Valstar and M. Wöllmer (2011), Building Autonomous Sensitive Artificial Listeners, IEEE Transactions on Affective Computing, pp. 134-146, October 2011.
P. Suignard (2010), NaviQuest: un outil pour naviguer dans une base de questions posées à un Agent Conversationnel, WACA, October 2010.

 


6-26(2013-05-01) PhD position, Visual articulatory biofeedback for speech therapy, Grenoble, France
http://www.gipsa-lab.grenoble-inp.fr/transfert/propositions/1_2013-05-01_These_arc_retour_visuel_orthophonie.pdf
       
                 

Funded PhD position: visual articulatory biofeedback for speech therapy.

As part of speech-therapy rehabilitation of speech disorders, the project aims at adapting and assessing a system of visual articulatory biofeedback. From the patient's voice, the system drives in real time the visible and non-visible articulators (such as the tongue) of a 3D avatar, using statistical mapping models trained on acoustic and articulatory data. Data acquisition and system evaluation will be conducted in collaboration with the Lyon University Hospital (CHU de Lyon).

Keywords: speech technology, 3D avatar, machine learning, augmented reality, speech therapy.
           
         
     
       
 
     
   

   

--
                                                           
       

Pierre BADIN, DR2 CNRS
Dept Parole & Cognition (ex ICP), GIPSA-lab, UMR 5216, CNRS – Grenoble University
Address: GIPSA-lab / DPC, ENSE3, Domaine universitaire, 11 rue des Mathématiques, BP 46 - 38402 Saint Martin d'Hères cedex, France
Email: Pierre.Badin@gipsa-lab.grenoble-inp.fr, Web site: http://www.gipsa-lab.inpg.fr/~pierre.badin
Fax: Int + 33 (0)476.57.47.10 - Tel: Int + 33 (0)476.57.48.26


6-27(2013-05-01) Open positions for Research Engineers in Speech and Language, Cambridge, UK

Positions description: Open positions for Research Engineers in Speech and Language Technology

The Speech Technology Group at Toshiba Cambridge Research Lab (STG-CRL), in Cambridge UK, is looking for talented individuals to lead and contribute to our ongoing research in Statistical Speech and Language Processing, in specific areas such as Speech Recognition, Statistical Spoken Dialog and Speech Synthesis.

The lab in Cambridge, in collaboration with other Toshiba groups and speech laboratories in China and in Japan, covers all aspects of speech technology and at many levels: from basic and fundamental research to industrial development. We support our researchers in building their careers by providing them with the freedom to publish their results and by investing in innovation and creativity to address real problems in speech and language technology. STG-CRL also has strong connections with European universities, especially with the Cambridge University Engineering Department.

Outstanding PhD-level candidates at all levels of experience are encouraged to apply. Candidates should be highly motivated, team-oriented, and able to work independently. A strong mathematical background and excellent knowledge of statistics are required; very good programming skills are desired. For the team-leader positions in particular, preference will be given to senior researchers with a solid research track record and international research experience.

The Toshiba Cambridge Research Lab is located in the Science Park of the university city of Cambridge.

To apply send your CV and a covering letter to stg-jobs@crl.toshiba.co.uk

Informal enquiries about the open positions to Prof. Yannis Stylianou (yannis.stylianou@crl.toshiba.co.uk)

Closing date for applications is June 30th, 2013 (or until the posts are filled).

 

6-28(2013-05-01) PhD student, Learning Pronunciation Variants in a Foreign Language (full time), Radboud University Nijmegen, The Netherlands

PhD student: Learning Pronunciation Variants in a Foreign Language (full time)

Faculty of Arts, Radboud University Nijmegen, The Netherlands
Vacancy number: 23.12.13
Closing date: 24 May 2013

Responsibilities
As a PhD student in this project you will investigate the interplay between exemplars and abstract representations, which is expected to vary with processing speed and experimental task, and to evolve during learning. You will investigate these issues through behavioural experiments examining how native speakers of Dutch learn pronunciation variants of French words with schwa deletion.

Learning a foreign language implies learning pronunciation variants of words in that language. This includes the words' reduced pronunciation variants, which contain fewer and weaker sounds than the words' canonical variants (e.g. 'cpute' for English 'computer'), and which are highly frequent in casual conversations. The learner has to build mental representations (exemplars and possibly also abstract lexical representations) for these variants. Importantly, late learners will build representations that differ significantly from native listeners' representations, since reduction patterns in their native language will shape their interpretation of reduction patterns in the foreign language. The goal of this Vici project is to develop the first, fully specified theory of how late learners of a foreign language build mental representations for pronunciation variants in that language.

The dissertation will consist of an introduction, at least three experimental chapters that have been submitted to high-impact international journals, and a general discussion.

What we expect from you
- You have, or shortly expect to obtain, a Master's degree in a field related to speech processing, such as phonetics, linguistics, psychology, or cognitive neuroscience;
- you have an excellent written and spoken command of English;
- you have demonstrable knowledge of data analysis;
- you preferably have knowledge of the phonetics/phonology of French;
- you preferably have knowledge of the phonetics/phonology of Dutch.

What we have to offer
We offer you:
- full-time employment at the Faculty of Arts, Radboud University Nijmegen;
- in addition to the salary: an 8% holiday allowance and an 8.3% end-of-year bonus;
- a starting salary of €2,042 gross per month on a full-time basis, rising to €2,612 gross per month in the fourth year (salary scale P);
- an initial appointment of 18 months, after which your performance will be evaluated;
- if the evaluation is positive, an extension of the contract by 2 years (on the basis of a 38-hour working week);
- classification as a PhD student (promovendus) in the Dutch university job-ranking system (UFO).

Further information
- On the research group Speech Comprehension: http://www.ru.nl/speechcomprehension
- On the project leader: http://mirjamernestus.nl
- Or contact Prof. dr. Mirjam Ernestus, leader of the Vici project, telephone: +31 24 3612970, e-mail: m.ernestus@let.ru.nl

Applications
It is Radboud University Nijmegen's policy to only accept applications by e-mail. Please send your application, including your letter of motivation, curriculum vitae and transcripts of your university grades, stating vacancy number 23.12.13, to vacatures@let.ru.nl, for the attention of Mr drs. M.J.M. van Nijnatten, before 24 May 2013.

       

 

     
     
   

6-29(2013-05-01) PhD position with scholarship - Silent speech interface GIPSA-lab, Grenoble, France

Available PhD position with scholarship: silent speech interface

GIPSA-lab, Grenoble, France

Incremental speech synthesis for a real-time silent speech interface

Context: The design of a silent speech interface, i.e. a device allowing speech communication without the necessity of vocalizing the sound, has recently received considerable attention from the speech research community [1]. In the envisioned system, the speaker articulates normally but does not produce any audible sound. Application areas are in the medical field, as an aid for larynx-cancer patients, and in the telecommunication sector, in the form of a 'silent telephone', which could be used for confidential communication or in very noisy environments. In [2], we have shown that ultrasound and video imaging can be efficiently combined to capture the articulatory movements during silent speech production; the ultrasound transducer and the video camera are placed beneath the chin and in front of the lips, respectively. At present, our work has focused mainly on the estimation of the target spectrum from the visual articulatory data (using artificial neural networks, Gaussian mixture regression and hidden Markov modeling). The other challenging issue concerns the estimation of acceptable prosodic patterns (i.e. the intonation of the synthetic voice) from silent articulatory data only. To address this ill-posed problem, one solution consists of splitting the mapping process into two consecutive steps: (1) a visual speech recognition step, which estimates the most likely sequence of words given the articulatory observations, and (2) a text-to-speech (TTS) synthesis step, which generates the audio signal from the decoded word sequence. In that case, the target prosodic pattern is derived from the linguistic structure of the decoded sentence. The major drawback of this mapping method is that it cannot run in real time. In fact, while the visual speech recognition step can be done online (i.e. words are decoded a short time after they have been pronounced), standard TTS systems need to know the entire sentence to estimate the target prosody. This introduces a large delay between the (silent) articulation and the generation of the synthetic audio signal, which prevents the communication partners from having a fluent conversation. The main goal of this PhD project is to design a real-time silent speech interface, in which the delay between the articulatory gesture and the corresponding acoustic event has to be constant and as short as possible.

Goals: The goal of this PhD project is twofold:
(1) Reducing the delay between the recognition and the synthesis steps by designing a new generation of TTS system, called an 'incremental TTS system' [3]. This system should be able to synthesize the decoded words, with acceptable prosody, as soon as they are provided by the visual speech recognition system.
(2) Designing experimental paradigms to evaluate the system in realistic communication situations (face-to-face, remote/telephone-like interaction, human-machine interaction). The goal is to study how a silent speaker benefits from the acoustic feedback provided by the incremental TTS and how he/she adapts his/her own articulation to maximize the efficiency of the communication.
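The two-step pipeline described above (online recognition feeding an incremental synthesizer) can be sketched as a streaming loop in which each decoded word is passed to synthesis as soon as it is available, rather than after the full sentence. Everything below is an illustrative stand-in, not the project's actual recognizer or TTS engine:

```python
# Toy sketch of an incremental recognize-then-synthesize loop. In a real
# incremental TTS system, prosody would be estimated from partial context;
# here each 'audio chunk' is just a tagged string.
def fake_recognizer(observations):
    """Stand-in for online visual speech recognition: yields words one by one."""
    for word in observations:
        yield word

def incremental_tts(word_stream):
    """Stand-in for incremental TTS: emits one audio chunk per decoded word,
    without waiting for the end of the sentence."""
    return [f"audio({w})" for w in word_stream]

chunks = incremental_tts(fake_recognizer(["hello", "world"]))
```

The design point of the thesis is precisely what this sketch glosses over: producing acceptable prosody for each chunk while keeping the articulation-to-audio delay constant and short.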

Supervision: Dr. Thomas Hueber, Dr. Gérard Bailly (CNRS/GIPSA-lab)

Duration / salary: 36 months (October 2013 - October 2016) / ~€1,400/month minimum (net salary)

Research fields: multimodal signal processing, machine learning, interactive systems, experimental design

Background: Master's or engineer's degree in computer science, signal processing or applied mathematics.

Skills: good skills in mathematics (machine learning) and programming (Matlab, C, Max/MSP). Knowledge of speech processing or computational linguistics would be appreciated.

To apply: send your CV, the transcript of records of your Master's grades and a cover letter to thomas.hueber@gipsa-lab.grenoble-inp.fr

References:
[1] B. Denby, T. Schultz, K. Honda, T. Hueber, et al., 'Silent Speech Interfaces', Speech Communication, vol. 52, no. 4, pp. 270-287, 2010.
[2] T. Hueber, E. L. Benaroya, G. Chollet, et al., 'Development of a Silent Speech Interface Driven by Ultrasound and Optical Images of the Tongue and Lips', Speech Communication, vol. 52, no. 4, pp. 288-300, 2010.
[3] H. Buschmeier, T. Baumann, B. Dosch, D. Schlangen, S. Kopp, 'Combining Incremental Language Generation and Incremental Speech Synthesis for Adaptive Information Presentation', in Proc. of the 13th SIGDIAL meeting, pp. 295-303, 2012.


6-30(2013-05-10) Postdoctoral fellow at Toronto Rehabilitation Institute, University of Toronto

We are seeking a skilled postdoctoral fellow (PDF) whose expertise intersects automatic speech recognition (ASR) and human-robot interaction (HRI). The PDF will work with a team of internationally recognized researchers to create an automated speech-based dialogue system between computers and robotic systems, and individuals with dementia and other cognitive impairments. These systems will automatically adapt the vocabularies, language models, and acoustic models of the component ASR to data collected from individuals with Alzheimer’s disease. Moreover, this system will analyze the linguistic and acoustic features of a user’s voice to infer the user’s cognitive and linguistic abilities, and emotional state. These abilities and mental states will in turn be used to adapt a speech output system to be more tuned to the user.
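The paragraph above mentions inferring a user's cognitive and linguistic abilities from linguistic features of their speech. As a toy illustration only (the features, their names and the example values are invented here, not taken from the project), two simple lexical statistics often used as rough proxies in this literature can be computed as follows:

```python
# Toy lexical feature extraction from a transcript: type-token ratio and
# mean word length. Illustrative only; a real system would use far richer
# linguistic and acoustic features.
def lexical_features(transcript):
    words = transcript.lower().split()
    ttr = len(set(words)) / len(words)                   # type-token ratio
    mean_len = sum(len(w) for w in words) / len(words)   # mean word length
    return {"ttr": ttr, "mean_word_length": mean_len}

feats = lexical_features("the the cat sat on the mat")
```

Features like these would then feed a classifier whose output adapts the dialogue system's vocabulary and speech output to the user.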

 

Work will involve programming, data analysis, dissemination of results (e.g., papers and conferences), and partial supervision of graduate and undergraduate students. Some data collection may also be involved.

 

The successful applicant will have:

1) a doctoral degree in computer science, electrical engineering, biomedical engineering, or a related discipline;
2) evidence of research impact through a strong publication record in relevant venues;
3) evidence of strong collaborative skills, including possibly supervision of junior researchers or students, or equivalent industrial experience;
4) excellent interpersonal, written, and oral communication skills;
5) a strong technical background in machine learning, natural language processing, robotics, and human-computer interaction.

 

This work will be conducted at the Toronto Rehabilitation Institute, which is affiliated with the University of Toronto.

 

--== About the Toronto Rehabilitation Institute ==--

 

One of North America’s leading rehabilitation sciences centres, Toronto Rehabilitation Institute (TRI) is revolutionizing rehabilitation by helping people overcome the challenges of disabling injury, illness, or age-related health conditions to live active, healthier, more independent lives. It integrates innovative patient care, ground-breaking research and diverse education to build healthier communities and advance the role of rehabilitation in the health system. TRI, along with Toronto Western, Toronto General, and Princess Margaret Hospitals, is a member of the University Health Network and is affiliated with the University of Toronto.

 

If interested, please send a brief (1-2 page) statement of purpose, an up-to-date resume, and contact information for 3 references to Alex Mihailidis (alex.milhailidis@utoronto.ca) and Frank Rudzicz (frank@cs.toronto.edu) by 31 July 2013. The position will remain open until filled.

 

 

 

Frank Rudzicz, PhD.

   Scientist, Toronto Rehabilitation Institute;

   Assistant professor, Department of Computer Science,

         University of Toronto;

   Founder and Chief Science Officer, Thotra Incorporated

>> http://www.cs.toronto.edu/~frank (personal)

>> http://spoclab.ca  (lab)

 


6-31(2013-05-15) Post-doc position within the ANR project DIADEMS, LaBRI, Bordeaux, France

Offer of a post-doctoral position within the ANR project DIADEMS (Description, Indexation, Accès aux Documents Ethnomusicologiques et Sonores: description, indexing and access to ethnomusicological and sound documents).

 

 

- Post-doc topic: instrument identification / classification

Duration: 12 months
Salary: approximately €2,000/month
Desired starting date: September 2013

Automatic instrument recognition and classification by instrument family is an active research area in MIR (Music Information Retrieval) [Hei09] [Kit07] [Her06] [Ess06]. The main techniques rely on statistical methods using audio features such as MFCCs. Here we wish to open a new path, linking speech processing and music processing, by considering a musical performance as a sentence, and the instrument or the player as a speaker.

This work will be carried out in parallel with an ongoing PhD thesis on the characterization and identification of the singing voice. In that thesis we have proposed a method for identifying segments containing singing in polyphonic recordings (e.g. 'pop' music). The current object of study is to determine which signal parameters are most relevant for characterizing different singing styles.

One of the avenues we wish to pursue is identifying the instrument by tracking its vibrato, in a way similar to what has been proposed for the singing voice. By emphasizing the temporal rather than the spectral dimension, we will also be able to observe how the musician chains breaths, sound attacks and timbral changes. This exploratory work will first require experiments on simple databases (such as [Fri97] and [Got03]) to validate our approach before applying our algorithms to the DIADEMS project data.
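The MFCC-plus-statistics baseline mentioned above can be sketched, at its very simplest, as a nearest-class-mean classifier over per-recording feature vectors (think: time-averaged MFCCs). The classes and feature values below are invented for illustration; this is not the project's method:

```python
# Toy nearest-class-mean classifier over feature vectors such as averaged
# MFCCs. Real MIR systems use richer statistical models (GMMs, source-filter
# separation, etc.); this only illustrates the general idea.
def _mean(vectors):
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def _dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def train_class_means(features, labels):
    """One mean feature vector per class (e.g. per instrument family)."""
    return {lab: _mean([f for f, l in zip(features, labels) if l == lab])
            for lab in set(labels)}

def classify(means, x):
    """Assign x to the class whose mean is closest in Euclidean distance."""
    return min(means, key=lambda lab: _dist(x, means[lab]))
```

The vibrato-tracking approach proposed in the project would replace these static averages with temporal descriptors of the signal.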

 

 

- References:

 

[Hei09] Heittola, T., Klapuri, A., Virtanen, T., 'Musical Instrument Recognition in Polyphonic Audio Using Source-Filter Model for Sound Separation,' in Proc. 10th Int. Society for Music Information Retrieval Conf. (ISMIR 2009), Kobe, Japan, 2009.

 

[Kit07] Tetsuro Kitahara, Masataka Goto, Kazunori Komatani, Tetsuya Ogata, and Hiroshi G. Okuno: 'Instrument Identification in Polyphonic Music: Feature Weighting to Minimize Influence of Sound Overlaps', EURASIP Journal on Advances in Signal Processing, Special Issue on Music Information Retrieval based on Signal Processing, Vol.2007, No.51979, pp.1--15, 2007.

 

[Her06] P. Herrera-Boyer, A. Klapuri, and M. Davy. Automatic classification of pitched musical instrument sounds. Signal Processing Methods for Music Transcription, pages 163–200. Springer, 2006.

 

[Ess06] S. Essid, G. Richard and B. David. Instrument recognition in polyphonic music based on automatic taxonomies. IEEE Transactions on Audio, Speech & Language Processing, 14(1):68–80, 2006.

 

[Fri97] L. Fritts, “Musical Instrument Samples,” Univ. Iowa Electronic Music Studios, 1997–. [Online]. Available: http://theremin.music.uiowa.edu/MIS.html

 

[Got03] Goto M, Hashiguchi H, Nishimura T, Oka R. RWC music database: Music genre database and musical instrument sound database. ISMIR. 2003:229–230.

 

---------

 

Description of the DIADEMS project (partners: LaBRI, IRIT, LESC, Parisson, LIMSI, MNHN, LAM-IJLRA)

 

The Laboratoire d'Ethnologie et de Sociologie Comparative (LESC), which includes the Centre de Recherche en Ethnomusicologie (CREM) and the Centre d'Enseignement et de Recherche en Ethnologie Amérindienne (EREA), as well as the eco-anthropology laboratory of the Muséum National d'Histoire Naturelle (MNHN), face the need to index the sound archives they manage and to locate their contents, a long, tedious and costly task.

 

At the CNRS interdisciplinary summer school 'Sciences et Voix 2010', a convergence of interests emerged between acousticians, ethnomusicologists and computer scientists: advanced sound-analysis tools developed by indexing specialists now exist that make it easier to locate, access and index content.

 

The context of the project is the indexing of, and improved access to, the LESC sound archives: the CREM collection, the EREA ethnolinguistic collection (Mayan 'chanted-spoken' speech) and the MNHN collection (traditional African music). It continues a reflection begun in 2007 on access to the sound data of research: since no open-source application existed on the market, CREM-LESC, LAM and the Phonothèque of the MMSH in Aix-en-Provence studied the design of an innovative, collaborative tool that meets domain needs tied to the temporality of the documents while being adapted to the requirements of the research sector. With financial support from the CNRS Très Grand Equipement (TGE) ADONIS and the Ministry of Culture, the Telemeta platform, developed by the company PARISSON, went online in May 2011: http://archives.crem-cnrs.fr. Elementary signal-processing analysis tools are already available on this platform.

 

However, a set of advanced, innovative tools is needed to support automatic or semi-automatic indexing of these sound data, which come from sometimes long recordings with very heterogeneous content and variable quality. The objective of the DIADEMS project is to provide some of these tools and to integrate them into Telemeta in response to user needs. The scientific objectives of the partners are therefore complementary. The technology providers (IRIT, LIMSI, LaBRI and LAM) will have to:
- Provide existing technologies such as speech detection, music detection and speaker diarization. These tools aim to extract homogeneous segments of interest to the user. The systems will have to cope with the diversity of the corpora studied in this project, whose heterogeneity stems from recording conditions, from the genre and nature of the documents, and from their geographic origin. These 'state-of-the-art' systems will have to be adapted to user needs.
- Propose innovative tools for exploring the content of homogeneous segments. Work on the spoken/declaimed/sung voice opposition, singing, song turns and musical similarity search is not yet mature. Genuine research remains to be done, and having musicologists and ethnomusicologists on hand is a real asset. The ethnomusicologists, ethnolinguists, voice acousticians and specialized archivists will play an important role in the project as future users of the indexing tools: the archivists must appropriate the tools and contribute their experience so that the tools can be adapted to their indexing needs.

 

A substantial exchange must take place between those who provide a tool, those who integrate it and those who use it. The effort must focus on the visualization of results, with the aim of strongly supporting indexing and thereby making it semi-automatic. For the ethnomusicologist and the musicologist, the objective goes beyond indexing: through iterations with the technology designers, the aim is to target the relevant information-extraction tools.

 

Jean-Luc Rouas, LaBRI, 351 cours de la Libération, 33405 Talence cedex, France, (+33) 5 40 00 35 08

 

 


6-32(2013-05-16) PhD position, Avignon, France

Speaker recognition in noisy environments

In recent years, very good speaker recognition performance has been achieved despite the presence of session variability. Session variability is taken into account during scoring using a covariance matrix that models it; this process is carried out in the i-vector space [1]. The i-vector concept has become a standard in speaker recognition.

In the last NIST 2012 international evaluation, we faced a new difficulty: additive noise [2], i.e. ambient noise. Research on reducing the impact of noise on speaker recognition systems is largely motivated by the need to deploy speaker recognition technologies on portable devices or over the Internet. While the technology promises an additional level of biometric security for the user, the practical implementation of these systems faces many challenges, one of the most important being environmental noise: because these systems are mobile, noise sources can be highly time-varying and potentially unknown.

We propose to work in this framework and to propose strategies for compensating the effect of additive noise. These strategies can intervene at different levels of the recognition process (at the signal level, at the acoustic-model level, at the i-vector level and at the scoring level):

- signal denoising;
- the effect of noise on voice activity detection (VAD);
- adding noise to the models;
- integrating the statistical characteristics of the noise into the scoring stage.

In a second part of the work, we propose to place the system in the best possible conditions for noise robustness. For example, the choice of the utterance pronounced by the speaker can influence system performance [3]. Should the same utterance be used for all speakers, or is each speaker instead best distinguished from the others on a specific set of acoustic units? In the latter case, a strategy must be found to determine the set of acoustic units that best discriminates a given speaker from the others. Other noise-robustness strategies will also be proposed and studied in this thesis. One avenue to explore is missing-feature theory, which has been used in the speech processing field [4][5][6].

State-of-the-art speaker recognition systems are fundamentally based on the UBM (Universal Background Model), a model that is too simple for speech processing and modeling. Recognition in noisy environments makes the task even harder, so it is legitimate to question the suitability of this model. We propose to adapt an approach based on HMMs (or another model) to this task while taking advantage of recent advances (factor analysis, i-vectors, ...).
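For readers unfamiliar with i-vector scoring, a minimal sketch of cosine scoring between two length-normalized i-vectors, one of the standard scoring methods referred to above, is given below. This is a simplified illustration with toy vectors: real i-vectors are typically a few hundred dimensional, and state-of-the-art scoring adds session-variability compensation (e.g. the covariance-based normalization discussed above, or PLDA):

```python
# Toy cosine scoring between an enrollment i-vector and a test i-vector.
# Only plain length normalization plus dot product is shown here.
def _norm(v):
    n = sum(x * x for x in v) ** 0.5
    return [x / n for x in v]

def cosine_score(ivec_enroll, ivec_test):
    """Cosine similarity in [-1, 1]; higher means more likely the same speaker."""
    return sum(a * b for a, b in zip(_norm(ivec_enroll), _norm(ivec_test)))
```

The noise-compensation strategies proposed in the thesis would intervene before or inside this scoring step, e.g. by denoising the signal or by folding noise statistics into the score.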

[1] P.-M. Bousquet, D. Matrouf and J.-F. Bonastre, 'Intersession compensation and scoring methods in the i-vectors space for speaker recognition', Interspeech 2011, Florence.

[2] M. Indar Mandasari, M. McLaren and D. A. van Leeuwen, 'The effect of noise on modern automatic speaker recognition systems', ICASSP 2012.

[3] A. Larcher, P.-M. Bousquet, K.-A. Lee, D. Matrouf, H. Li and J.-F. Bonastre, 'I-vectors in the context of phonetically-constrained short utterances for speaker verification', ICASSP 2012, pp. 4773-4776.

[4] M.P. Cooke, P.G. Green, L. Josifovski and A. Vizinho, 'Robust ASR with unreliable data and minimal assumptions', in Proc. Robust'99, 1999.

[5] M.P. Cooke, P.G. Green, L. Josifovski and A. Vizinho, 'Robust automatic speech recognition with missing and unreliable acoustic data', Speech Communication, 2000.

[6] B. Raj, M.L. Seltzer and R.M. Stern, 'Reconstruction of missing features for robust speech recognition', Speech Communication, 2004.

 

 

 


6-33(2013-06-03) Two PhD positions, Université d'Avignon
At the start of the next academic year, the Brain and Language Research Institute (BLRI) labex will fund two PhD topics, one of which may be the topic proposed below (depending on the applications received).

Timetable:
- application deadline: 10 June
- interviews of shortlisted candidates: 24 June
Grant: €1,684.93 gross per month (€1,368 net)
Application file: detailed CV, grades, motivation and/or scientific project corresponding to the topic
Scientific contacts: Corinne Fredouille and Christine Meunier
Administrative contact: nadera.bureau@blri.fr
Topic description:
 


Title: Detection of deviant zones in pathological speech: contribution of automatic speech processing versus human expertise

Supervisors: Corinne Fredouille, Christine Meunier

Host laboratory: Laboratoire Informatique d'Avignon (in collaboration with the Laboratoire Parole et Langage, Aix-en-Provence)

Field and doctoral school: Computer Science, doctoral school ED536 of the University of Avignon

Schedule:
. application deadline: 10 June
. auditions of shortlisted candidates: 24 June

Grant: €1,684.93 monthly gross (€1,368 net)

Scientific description

While the definition of the range of variability in normal speech is a key issue for current linguistic theories, one way of dealing with its limits is to try to determine its frontiers through pathological variation. As Duffy and Kent (2001) put it, « Science often takes advantage of nature's accidents to learn the principles of a process ». On this principle, knowledge of pathological speech, grounded in an understanding of the alteration phenomena observable in the speech production of patients suffering from speech disorders, becomes a necessity.

Dysarthria is a group of speech disorders resulting from neurological impairment of speech motor control. Substantial variation occurs in dysarthric speech due to a deficit in the spatio-temporal execution of speech movements, which can affect different levels of speech production (respiratory, laryngeal and supralaryngeal). The vast majority of research devoted to dysarthric speech relies on perceptual analyses, mainly because a dysarthric patient is dysarthric because he or she sounds dysarthric. The best-known work internationally is that of Darley et al. (1975), which organized the dysarthrias into 6 classes (completed with 2 additional classes by Duffy, 1995) on the basis of physiopathological clusters defined from the co-occurrence of the most deviant features perceived by a listening jury. The hypothesis underlying these clusters is that a set of simultaneously disturbed features, related to the neurological injury, should reflect a specific physiopathological process.

While this classification is still used today, notably to evaluate dysarthric speech in clinical practice, it remains controversial for two main reasons: the subjectivity of perceptual evaluation in general, and the difficulty for a human listener, however experienced, of perceptually distinguishing and assessing the multiple dimensions involved in evaluating dysarthric speech. Consequently, from the 1980s to the present, various studies have aimed at combining perceptual analyses with more objective and quantitative approaches, such as instrumental analyses based on acoustic or physiological measures (a literature review can be found in Kay, 2012). Whereas instrumental analyses can rely on semi- or fully automatic processing, the in-depth acoustic analysis needed to understand the deviant phenomena related to dysarthria in the speech signal remains very time-consuming for a human expert. As a result, a significant proportion of the studies in the literature are conducted either on a very limited number of patients or on a single, targeted pathology. Yet the large variability of the deviant phenomena observed in dysarthric speech, depending on the patient's pathology, the stage of the disease or the severity of the dysarthria, requires the analysis of a large patient population.

The aim of this thesis is to study how automatic speech processing tools could make it possible to process large populations of dysarthric patients and to focus human experts' attention on well-identified deviant zones of the signal for further in-depth analysis. This work will rely in particular on the automatic speech transcription system developed at the LIA and its research activities on transcription quality measures (Lecouteux, 2008 and Senay, 2011). The granularity of the deviant-zone detection, here potentially the word or word sequence, will then be refined with existing detection tools working at lower levels, down to the phoneme (Fredouille, 2011).
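As an illustration of the word-level granularity, word confidence measures produced by an ASR decoder can be thresholded to propose candidate deviant zones for the expert to inspect. A toy sketch (the data format, threshold and zone length are hypothetical choices for illustration, not the LIA system's actual interface):

```python
def deviant_zones(words, threshold=0.5, min_len=2):
    """Group consecutive low-confidence words into candidate deviant zones.

    `words` is a list of (word, confidence) pairs; returns the runs of words
    whose confidence stays below `threshold` for at least `min_len` words.
    """
    zones, current = [], []
    for word, conf in words:
        if conf < threshold:
            current.append(word)
        else:
            if len(current) >= min_len:
                zones.append(current)
            current = []
    if len(current) >= min_len:
        zones.append(current)
    return zones

# Hypothetical decoder output: (word, confidence) pairs for one utterance.
hyp = [("le", 0.9), ("patient", 0.8), ("articule", 0.3), ("mal", 0.4),
       ("ce", 0.7), ("matin", 0.9)]
print(deviant_zones(hyp))  # → [['articule', 'mal']]
```

The flagged spans would then be handed to lower-level (phoneme-scale) detectors and to the phonetician for fine-grained acoustic analysis.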

This work will attempt to answer the following key questions:

. Given the variability of the deviant phenomena observed in dysarthric speech and reported in the literature, which ones can an automatic detection system capture?
. Can an automatic system highlight the same deviant phenomena that a human expert detects perceptually?
. Are the deviant speech zones detected by an automatic system relevant to phoneticians?
. Can the deviant phenomena detected be related to the patient's physiopathology (e.g. hypokinetic features for Parkinson's disease, paralytic features for ALS, ...)?

The work on the LIA automatic transcription system should also open up new perspectives on the implementation of an objective system for measuring the intelligibility of dysarthric patients.

This thesis will be carried out within a close collaboration between the LIA (Corinne Fredouille) for its expertise in automatic speech processing systems, the LPL (Christine Meunier and Alain Ghio) for its expertise in acoustic-phonetic analyses and perceptual evaluations, and the hospitals of La Timone (Dr Danièle Robert) and Pays d'Aix (Pr. François Viallet) for their clinical expertise. It will be based on the corpus of dysarthric patients built for the ANR DesPhoAPady project (2009-2012; Fougeron, 2010). This corpus includes a large population of patients suffering from various pathologies (Parkinson's disease, ALS, cerebellar syndrome, ...) and different levels of dysarthria severity.

Bibliography:

J. R. Duffy, R. D. Kent, « Darley's contributions to the understanding, differential diagnosis, and scientific study of the dysarthrias », Aphasiology 15(3):275-289, 2001.

F. L. Darley, A. E. Aronson, J. R. Brown, « Motor Speech Disorders », Philadelphia: W.B. Saunders, 1975.

J. R. Duffy, « Motor speech disorders: substrates, differential diagnosis and management », Mosby-Year Book, St Louis, 1st edition, 1995.

T. S. Kay, « Spectral analysis of stop consonants in individuals with dysarthria secondary to stroke », PhD thesis, Department of Communication Sciences and Disorders, Louisiana State University and Agricultural and Mechanical College, USA, 2012.

B. Lecouteux, « Reconnaissance automatique de la parole guidée par des transcriptions a priori », PhD thesis, Université d'Avignon et des Pays de Vaucluse, 2008.

G. Senay, « Approches semi-automatiques pour la recherche d'information dans les documents audio », PhD thesis, Université d'Avignon et des Pays de Vaucluse, 2011.

C. Fredouille, G. Pouchoulin, « Automatic detection of abnormal zones in pathological speech », International Congress of Phonetic Sciences (ICPhS'11), Hong Kong, 17-21 August 2011.

C. Fougeron et al., « Developing an acoustic-phonetic characterization of dysarthric speech in French », LREC'10 (International Conference on Language Resources and Evaluation), Malta, May 2010.

Candidate application file:
. detailed CV
. transcripts
. statement of motivation and/or scientific project related to the topic

Scientific contacts: Corinne Fredouille and Christine Meunier
Administrative contact: nadera.bureau@blri.fr


6-34(2013-05-16) Internships in Natural Language Processing and Machine Translation, Dublin City University, Ireland

At the Centre for Next Generation Localisation (CNGL) in Dublin, Ireland, we have a number of internships available covering a wide range of topics in Natural Language Processing and Machine Translation based at our Dublin City University site.  The internships are available for both basic research and more applied research projects (including development-focused work).
Candidates are required to be registered as MSc or PhD students (by research) in their home universities while carrying out their internship in Dublin and need to provide written confirmation of this from their home institute. Please find the internship advertisement attached.
Details of a number of specific internships can be found at: http://www.cngl.ie/outreach/graduate-programme/postgradinternships/
Closing date: 31st May 2013
For any informal enquiries please contact: CNGL Education and Outreach
Dr. Cara Greene, CNGL, DCU Phone: +353 (0)1 7006704 E-mail: cgreene (AT) computing.dcu.ie
Web: http://www.cngl.ie
Application Procedure:
For formal applications, please download an application form from the link below and send it to cgreene (AT) computing.dcu.ie by Friday 31st May 2013. http://www.cngl.ie/outreach/graduate-programme/postgradinternships/


6-35(2013-05-17) Speech Technology Researcher/Developer (main focus ASR), Linguwerk GmbH, Dresden, Germany

We have a vacant position as a

Speech Technology Researcher/Developer (main focus ASR)

in our research and development group in Dresden. We are a young company located in Dresden dealing with speech technology products and applications, signal processing solutions and pattern recognition techniques. Although we are developing a new group of products, our work is closely related to scientific topics and we collaborate with several German and international universities.

A job description (in German) can be found at:

http://www.linguwerk.de/sites/default/files/1305_Stellenangebot_Linguwerk_0.pdf

Unfortunately our webpage is currently under construction and only available in German at the moment. Nevertheless, English-speaking applicants are very welcome: all our group members speak English fluently.

Our team consists of speech technology scientists and engineers, education science specialists, creatives, and people who love scientific challenges.

If you are interested in joining and enriching our team, don't hesitate to contact us and ask for further information. Information in English is available on request.

Note: the employer for this position will be the university in Dresden, but you will be a member of our joint research group within our lab.

Dr.-Ing. Rico Petrick
CEO / Geschäftsleitung
Linguwerk GmbH          Office: +49 351 6533-3807
Schnorrstraße 70        Fax:    +49 351 6533-6965
01069 Dresden           Mobile: +49 174 943 8817
Germany                 E-Mail: rico.petrick@linguwerk.de
                        Web:    www.linguwerk.de

 


6-36(2013-06-05) Specialist in speech processing, CUI/University of Geneva

The CUI/University of Geneva seeks a qualified candidate for one


           Specialist in speech processing


for a period of 6 months (class 13; 5988.-). Depending on the hiring rate (100% to 50%), the contract may be extended up to 12 months.

In the context of a newly granted Swiss National Science Foundation Agora project, the CUI will collaborate with the Phonetisches Laboratorium of Universität Zurich (UZH) on 'Swiss VoiceApp - Your voice. Your identity.' The project will combine speech recognition and processing with linguistic knowledge to provide a smartphone application that recognises Swiss German dialects and gives the user some information about his or her voice.

The main task of the successful candidate will be the development of a speech recognition engine based on speech corpora of dialectal variants. Knowledge of speech software, namely HTK and Praat, is highly recommended.

Applicants should possess a master's degree or an engineering diploma in computational linguistics, or in computer science with a strong background in speech processing. Given the focus on Swiss German dialects, preference will be given to speakers of German.

The start date is expected to be during autumn/winter 2013; the position will remain open until filled.

Contact

Please send (1) your CV, (2) a copy of your degree(s), and (3) a short letter explaining how your skills and interests fit the project to Jean-Philippe Goldman before July 31st 2013, preferably electronically, or by post at the address below:



Jean-Philippe Goldman

CUI / UNI-Rondeau
CH-1227 Carouge, Switzerland
Jean-Philippe.Goldman@unige.ch



6-38(2013-06-16) Faculty position at CLSP, Johns Hopkins University, Baltimore

CLSP Faculty Position
May 31, 2013

The Center for Language and Speech Processing at Johns Hopkins University seeks applicants for a tenure-track or tenured faculty position in speech and language processing. We especially welcome candidates who could strengthen our institutional drive for interdisciplinary research. Rank will depend on the experience and accomplishments of the candidate.

Applicants must have a Ph.D. in a relevant discipline and will be expected to establish a strong, independent, multidisciplinary, internationally recognized research program. A commitment to quality teaching at the undergraduate and graduate levels is required. We are committed to building a diverse educational environment; women and minorities are especially encouraged to apply.

The primary appointment will be in the academic department of the G.W.C. Whiting School of Engineering most appropriate for the candidate, such as Electrical and Computer Engineering, Computer Science or Biomedical Engineering.

Applicants should apply using the online application here: https://academicjobsonline.org/ajo/jobs/2383

Applications will be accepted until the position is filled.


6-39(2013-06-18) Funded PhD position at GIPSA-lab, Grenoble

The Speech and Cognition Department of GIPSA-lab (www.gipsa-lab.inpg.fr/recherche/departement-parole-cognition.php) offers a doctoral grant funded by the Rhône-Alpes region under the ARC2 programme (http://www.arc2-q2v.rhonealpes.fr/), on speech development disorders in deaf children with cochlear implants: evaluation, diagnosis and directions for remediation.

The funding covers 3 years and will start at the beginning of the 2013 academic year. The successful candidate will be affiliated with the LLSH doctoral school of Université Stendhal, Grenoble (http://w3.u-grenoble3.fr/ecole_doctoraleLLSH/).

Required profile:

The topic is open to students holding a research Master's degree (M2R) in language sciences or speech therapy, with experience in experimental phonetics or experimental psychology. Familiarity with Matlab and standard audio signal analysis tools (such as Praat), as well as some background in statistics, is desirable but not required. Applications from health professionals working on language and/or hearing disorders (speech therapists, hearing-aid specialists, ENT doctors, etc.) are welcome. Candidates must have a good command of French (the work involves interaction with children).

Supervision: the thesis will be co-supervised by Anne Vilain (Speech and Cognition Department, GIPSA-lab) and Michel Hoen (DYCOG team, Lyon Neuroscience Research Center, U1028/UMR5292), in collaboration with the Laboratoire de Psychologie et NeuroCognition in Grenoble.

Applications (detailed CV + cover letter) should be sent to Anne.Vilain@gipsa-lab.grenoble-inp.fr before 25 June 2013. Candidates will be short-listed before interviews.

Topic description:

   

   

In 2010, more than 150,000 people worldwide had been fitted with a cochlear implant, and half of them were children. Yet the speech difficulties of pre-lingually deaf children with cochlear implants have received little attention. The few available studies reveal difficulties in producing and perceiving certain speech sounds, even after several years of implant use. Speech disorders have been shown to induce learning difficulties at school, which can have emotional and social repercussions for the child. Targeted speech therapy for children with cochlear implants is therefore an important issue for short- and long-term quality of life.

The present project aims to (i) assess the speech production difficulties that persist in pre-lingually deaf subjects after several years of cochlear implant use, (ii) relate these difficulties to the subjects' ability to perceive phonetic contrasts, and (iii) propose adapted remediation strategies to improve the phonological abilities of implanted children.

These studies will involve a population of implanted children compared with a matched group of hearing children. The 'Production' part will rely on the recording and acoustic analysis of elicited and spontaneous speech corpora, and the 'Perception' part on identification and discrimination tests with speech sounds.
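Discrimination performance in such tests is commonly summarised with the sensitivity index d' from signal detection theory, computed from the hit rate on 'different' pairs and the false-alarm rate on identical pairs. A minimal sketch with illustrative rates (not project data):

```python
from statistics import NormalDist

def d_prime(hit_rate, fa_rate):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# e.g. a listener correctly detects a phonetic contrast on 90% of
# 'different' pairs but responds 'different' to 20% of identical pairs:
print(round(d_prime(0.90, 0.20), 2))  # → 2.12
```

d' = 0 means chance-level discrimination of the contrast; in practice extreme rates (0 or 1) are adjusted before computing z.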

   

This project has strong theoretical and practical stakes. On the practical side, the results will make it possible to diagnose the specific production problems that persist in implanted children, in order to guide remediation practice, and to better describe the chronological stages of oral language production and perception in deaf children with implants, providing crucial reference points for therapists.

The theoretical contributions concern the link between speech production and perception, and the construction of an internal model for speech production during development. A second theoretical question is the notion of a critical learning period and the role of access to auditory information during the first years of life.

Keywords: hearing impairment - cochlear implant - speech perception/production relations - language development disorders - quality of life of implanted children

--
Anne Vilain
Departement Parole et Cognition
Laboratoire GIPSA-lab
Universite Stendhal
BP 25
38040 Grenoble cedex 9
tel: 00 33 4 76 82 77 85
fax: 00 33 4 76 82 43 35
   

6-40(2013-06-19) PhD positions - Computer Science, Natural Language Processing - Marseille, France


The NLP group of the Laboratoire d'Informatique Fondamentale of Aix-Marseille University has 2 open PhD positions in Computer Science in the context of a European project (3 years).

Location : Campus de Luminy (http://sciences.univ-amu.fr/sites-geographiques/site-luminy), Marseille, France
Starting : October 1st 2013
Deadline for application: July 15th 2013

The two fully funded PhD studentships will focus on 2 different aspects of the project:
1- Discourse parsing of speech.
This PhD will focus on developing discourse analysis methods adapted to a large range of conversational styles and domains, from spoken conversations to social media interaction. Automatic methods will be studied for five types of discourse analysis: discourse parsing, event and temporal structure, argumentation structure and intra-document coreference. The application domain of this research will be automatic summarization of human-human dialogues. The languages targeted will be French and English.

2- Syntactic and semantic parsing with deep learning methods.
Recently, neural network approaches based on the deep learning paradigm have been successfully applied to some NLP tasks such as POS and NE tagging or dependency parsing ( http://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/de//pubs/archive/35671.pdf ). In this PhD we will investigate how this paradigm performs and how it can be adapted in the context of robust syntactic and semantic parsing of human-human conversations collected on social media platforms and in telephone call centres.
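The window-based neural approach referenced above scores each token's tags from the embeddings of the words in a small context window. A minimal illustrative sketch in NumPy (toy vocabulary and random, untrained weights; every name here is hypothetical, not taken from the cited paper or the project):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary and tag set (hypothetical; real systems train on large corpora).
vocab = {"<pad>": 0, "the": 1, "cat": 2, "sleeps": 3}
tags = ["DET", "NOUN", "VERB"]

EMB, WIN, HID = 8, 3, 16  # embedding size, context window width, hidden units

# Randomly initialised parameters; in practice learned by backpropagation.
E = rng.normal(0.0, 0.1, (len(vocab), EMB))    # word embedding table
W1 = rng.normal(0.0, 0.1, (WIN * EMB, HID))    # window features -> hidden layer
W2 = rng.normal(0.0, 0.1, (HID, len(tags)))    # hidden layer -> tag scores

def tag_scores(word_ids):
    """Score every tag for each token from a sliding window of embeddings."""
    padded = [0] + list(word_ids) + [0]        # pad so edge tokens get a full window
    out = []
    for i in range(len(word_ids)):
        window = np.concatenate([E[w] for w in padded[i:i + WIN]])
        hidden = np.tanh(window @ W1)          # non-linear hidden representation
        out.append(hidden @ W2)                # unnormalised scores, one per tag
    return np.stack(out)

scores = tag_scores([vocab[w] for w in ["the", "cat", "sleeps"]])
print(scores.shape)  # one score vector per token: (3, 3)
```

A trained version of such a model would pick, for each token, the tag with the highest score, optionally decoding the whole sequence jointly.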

The successful candidates should :
- hold a relevant degree in the field of Natural Language Processing or Machine Learning.
- have good algorithmic and programming skills

Description of the lab:
The University of Aix-Marseille (AMU) is currently one of the largest universities in France, created in 2012 from the merger of the 3 former Aix-Marseille universities (Université de Provence, Université de la Méditerranée, Université Paul Cézanne).
The LIF (Fundamental Computer Science Lab) is a joint research unit between the Centre National de la Recherche Scientifique (CNRS) and AMU. The Natural Language Processing group of LIF aims at developing symbolic and statistical methods for the automatic processing of textual and speech data. The two main characteristics of this research group are: (1) to host both linguists working on the development of rich linguistic resources, such as syntactic and semantic lexicons, and computer scientists with very strong experience in numerical approaches to NLP; and (2) to work on both spoken and written language, at the descriptive level as well as the application level, thanks to the strong expertise of some members of the group in Automatic Speech Recognition methods.

Contacts:
Alexis Nasr : alexis.nasr@lif.univ-mrs.fr
Frederic Bechet : frederic.bechet@lif.univ-mrs.fr
Benoit Favre : benoit.favre@lif.univ-mrs.fr


Top

6-41(2013-06-19) Post-doc at LIMSI-CNRS, Orsay, France

Post-doc at LIMSI-CNRS for the ANR DIADEMS project

A one-year post-doctoral contract is offered in the Spoken Language Processing group at LIMSI-CNRS (Orsay), for the ANR project DIADEMS (Description, Indexation, Accès aux Documents Ethnomusicologiques et Sonores).

Description

The DIADEMS project aims to develop an innovative platform for the consultation, indexing and automatic analysis of sound archives collected by ethnomusicologists and ethnolinguists. In particular, automatic tools for detecting speech, music and singing, as well as for structuring recordings into speech turns and song turns, are to be developed, suited to the highly varied nature of the archives under analysis. For more information, see the project website: http://www.irit.fr/recherches/SAMOVA/DIADEMS/

Requirements

A PhD in speech, music or signal processing, together with computer science skills (the development work is to be carried out in Python).

Contacts

Schedule

  • Expected start date: September or October 2013
  • Contract duration: 12 months
Top

6-42(2013-06-19) 24-month fixed-term position (CDD) at the LACITO-CNRS laboratory
Job profile:

24-month fixed-term contract (CDD) at the LACITO-CNRS laboratory

Level: research engineer (ingénieur d'études). Contribution to the constitution of corpora of rare languages: online texts and dictionaries.

CONTEXT
The HimalCo project, funded by the Agence Nationale de la Recherche (2013-2015), concerns the constitution and exploitation of corpora for ten languages with an oral tradition. The corpora are composed of sound resources (audio recordings), textual resources (transcriptions, annotations) and lexical data (dictionaries and recordings of words): http://himalco.hypotheses.org/
The corpora and tools produced by the HimalCo project will ultimately feed the platform of the Pangloss Collection, which itself gathers more than 70 corpora of rare languages: http://lacito.vjf.cnrs.fr/archivage/index.htm

MISSIONS
The person recruited will work in close collaboration with the engineer responsible for the Pangloss Collection, who also takes part in the HimalCo project. He or she will quickly have to carry out the assigned tasks autonomously. The tasks to be performed for the project are varied; a non-exhaustive list:
- processing and formatting of corpora: task tracking, managing contacts with depositors, text/sound alignment, preparation and verification of metadata...
- deposit of documents for long-term archiving and updating of the corresponding web pages on the Pangloss Collection site
- development of online functionalities for the consultation of parallel texts and dictionaries
- development of new tools and updating of existing tools for formatting, disseminating and searching the corpora
- dialogue with the partners of the Pangloss Collection
- deployment of a task-tracking software tool (from initial contact through to final deposit), if the necessary time can be freed up

SKILLS
- Knowledge of the structuring of textual data (HTML, XML, XSL) and sound data (WAV)
- PHP
- Perl
- Java desirable
An ability to listen, in order to understand the needs and practices of linguists. Experience in the study and/or processing of linguistic data would be a plus.

DURATION AND DATES
The total duration of the contract is 24 months. The planned dates are November 2013 to October 2015 inclusive. The start date can be brought forward to September or October 2013 if the successful candidate so wishes. No commitment can be made regarding an extension of the contract beyond 24 months: the possibilities depend on future research calls for projects (for fixed-term contracts) and on the creation of posts (for permanent positions).

Contact: guillaume@vjf.cnrs.fr

 

Top

6-43(2013-06-21) Technical Engineer/Scientist (Project Manager) position, specialized in Speech and Multimodal technologies at ELDA

The European Language Resources Distribution Agency (ELDA), a company specialized in Human Language Technologies within an international context, acting as the distribution agency of the European Language Resources Association (ELRA), is currently seeking to fill an immediate vacancy for a Technical Engineer/Scientist (Project Manager) position, specialized in Speech and Multimodal technologies.

Technical Engineer / Scientist (Project Manager) in Speech and Multimodal Technologies

Under the supervision of the CEO, the responsibilities of the Technical Engineer/Scientist include designing/specifying language resources, setting up production frameworks and platforms, and carrying out quality control and assessment. He/she will be in charge of renovating the current language resources production workflows. This yields excellent opportunities for young, creative, and motivated candidates wishing to participate actively in the Language Engineering field. He/she will be in charge of conducting the activities related to language resources and Natural Language Processing technologies. The task will mostly consist in managing language resources production projects and co-ordinating ELDA's participation in R&D projects, while also being hands-on whenever required by the development team.

Profile:

  • PhD in computer science, speech, audiovisual/multimodal technologies
  • Experience and/or good knowledge in speech data collection; expertise in phonetics, transcription tools
  • Experience in speech recognition, synthesis, speaker ID and the widely used packages (e.g., HTK, Julius, ...) and the tools to produce, collect and assess the quality of resources and datasets
  • Experience and/or good knowledge of the Language Technology area
  • Experience with technology transfer projects, industrial projects, collaborative projects within the European Commission or other international frameworks
  • Ability to work independently and as part of a team, in particular the ability to supervise members of a multidisciplinary team
  • Dynamic and communicative, flexible to combine and work on different tasks
  • Good knowledge of Linux and open source software
  • Proficiency in C++, PHP, Java, Django is a plus
  • Proficiency in French and English
  • Citizenship of (or residency papers for) a European Union country

Applications will be considered until the position is filled. The position is based in Paris.

Salary: commensurate with qualifications and experience.

Applicants should email a cover letter addressing the points listed above, together with a curriculum vitae, to:

Khalid Choukri
ELRA / ELDA
55-57, rue Brillat-Savarin
75013 Paris
FRANCE
Fax: 01 43 13 33 30
Mail: job@elda.org

ELRA was established in February 1995, with the support of the European Commission, to promote the development and exploitation of Language Resources (LRs). Language Resources include all data necessary for language engineering, such as monolingual and multilingual lexica, text corpora, speech databases and terminology. The role of this non-profit membership Association is to promote the production of LRs, to collect and validate them and, foremost, to make them available to users. The association also gathers information on market needs and trends.

For further information about ELDA/ELRA, visit:

http://www.elda.org
http://www.elra.info

Top

6-44(2013-06-22) Visiting Research Engineer; Linguistics Research Labs; Univ.Urbana-Champaign, Illinois, USA

Visiting Research Engineer

Linguistics Research Labs

 

The School of Literatures, Cultures, and Linguistics at the University of Illinois at Urbana-Champaign has an opening for a full-time (100%) Visiting Research Engineer in its linguistics research labs. The Visiting Research Engineer works directly with faculty and graduate students to identify, implement and maintain appropriate hardware and software for research within the School of Literatures, Cultures and Linguistics.  Currently, the labs have facilities for high-quality audio and video capture, eye-tracking, speech aerodynamics, electropalatography, and event-related potentials: Phonetics and Phonology Lab (http://phonlab.linguistics.illinois.edu/); Second Language Acquisition Lab (http://www.bilingualismlab.illinois.edu/); Discourse, Social Interaction, and Translation Lab; Electrophysiology and Language Processing Lab. The position is renewable for an additional two years and is contingent on funding and strong annual performance reviews by the School of Literatures, Cultures, and Linguistics. The position may become regular at a later date. The target starting date is September 1, 2013. Salary is commensurate with qualifications and experience.

 

Responsibilities will be research-related only and will include: Training faculty and graduate student researchers in the use of hardware and software for research purposes (including occasional workshops); Holding scheduled consultations with faculty and graduate student researchers on their research projects; Oversight of data acquisition hardware; Assisting faculty and graduate student researchers in problem-solving hardware and software issues; Providing support to faculty and graduate student researchers in procedural programming languages (e.g., Python, R, Matlab); Helping standardize computing and programming procedures across labs; Database management; Digital signal processing; and Assistance with initial setup of pilot experiments.

 

At a minimum, qualified applicants must have an MA/MS in linguistics or closely related fields (e.g., neuroscience, psychology, speech and hearing science with a concentration in linguistics or speech-related research). The applicant should also have a solid background in a procedural programming language (e.g., Python, Matlab, and/or R) and in statistical modeling. Preference will be given to candidates who have previously worked in a laboratory setting, have a demonstrated ability to work well as part of a research team, and have experience using advanced hardware for data acquisition.

 

To apply, create your candidate profile through the University of Illinois application login page at https://jobs.illinois.edu and upload your application materials: letter of application, CV, and names and contact information for three professional references. Referees will be contacted electronically upon submission of the application. Only electronic applications submitted through https://jobs.illinois.edu will be accepted.

To ensure full consideration, all required applicant materials must be received no later than July 22, 2013. Letters of reference must be received no later than July 29, 2013. The department highly recommends that complete applications be submitted prior to July 22, to ensure that referees have enough time to submit their letters of recommendation. 

 

For additional information, please contact slcl-hr@illinois.edu. Applicants may be interviewed before the closing date; however, no hiring decision will be made until after that date.

 

Illinois is an Affirmative Action /Equal Opportunity Employer and welcomes individuals with diverse backgrounds, experiences, and ideas who embrace and value diversity and inclusivity. (www.inclusiveillinois.illinois.edu).

 

 

 

Top

6-45(2013-06-27) 1-2 Assistant Professors in Speech Technology (main focus Dialogue Systems), KTH Stockholm Sweden

The Department of Speech, Music and Hearing at KTH, Stockholm, Sweden,  will hire 1-2 Assistant Professors in Speech Technology (main focus Dialogue Systems)
http://www.kth.se/en/om/work-at-kth/vacancies/assistant-professor-in-speech-technology-with-specialization-in-dialogue-systems-1.398565

Top

6-46(2013-06-29) Speech Scientist at Voicebox’s Research and Advanced Development Team, Munchen, Germany

VoiceBox is an acknowledged pioneer in the voice technology and application industry. Our continued growth allows us to add to our diverse team of talented professionals. Our opportunity is your opportunity!

Because we work with some of the most respected brands in the world, you’ll not only work to high standards but you’ll also get that “hey, I worked on that product” feeling. Even better, we’re small enough for you to make a real impact - you’ll learn and grow quickly and never have that cog-in-the-wheel feeling.

We’re glad you’re considering joining the team!

SPEECH SCIENTIST

A Speech Scientist on Voicebox’s Research and Advanced Development Team is responsible for working on complex tasks and work packages independently and providing solutions to the team, with work packages in the areas of ASR, TTS and NLU, both in research and in projects. Typical work packages are:

- Tuning and maintaining speech applications
- Designing and developing new speech applications
- Adapting speech resources to specific customers’ requirements
- Research, development and implementation of new algorithms in ASR, TTS and NLU
- Software integration of third-party ASR or TTS products into the VoiceBox Engine
- Training and adaptation of acoustic models and language models

This position can be located in Munich, Germany or in the Seattle area, USA.

Key Requirements/Skills/Experience

- Strong knowledge of ASR, TTS and NLU as well as statistical learning methods
- Strong plus: experience with ASR training toolkits such as HTK
- Strong plus: working experience on ASR topics, e.g. as an intern or during a PhD
- Deep knowledge of digital signal processing
- Deep knowledge of programming languages such as ANSI C and C++
- Knowledge of scripting languages such as Perl, Python and shell (bash, awk, sed)
- Excellent communication skills, great attitude and team oriented
- Good skills in English (written and spoken)
- Foreign language skills a plus
- Self-starter

To apply for this position, please send your resume to michaelw@voicebox.com.

Top

6-47(2013-06-30) Two positions in the area human machine dialog at Saarland University, Germany

Two positions in the area human machine dialog at Saarland University


We anticipate the availability of funds for two positions in the area of
dialog modeling and dialogue system design, one position for a PhD
candidate and a second position for a postdoctoral researcher.

The aim of the research project is the development and testing of a
multimodal dialogue system for highly adaptive and flexible dialogue
management. The dialogue system will be designed to support negotiation
training games with real and virtual agents. The research will be
carried out together with a European consortium (FP7 Programme) of
high-profile research institutes and companies.


The successful candidate should have a degree in computer science,
computational linguistics or a related discipline. Excellent
programming skills are required (preferably in Java and C++), as well
as strong analytical and problem-solving skills. Some experience in
math, logic and cognitive modelling is a plus. Very good oral and
written communication skills in English are also required.

The successful candidate for a postdoc position additionally should have
a strong publication record in relevant venues and strong collaborative
skills, including possibly supervision of junior researchers, students,
or equivalent industrial experience.

This work will be conducted at the Spoken Language Systems group
(http://www.lsv.uni-saarland.de/) at Saarland University.

Saarland University

Saarland University (http://www.uni-saarland.de/en/) is a European
leader in Computer Science research and teaching, and is particularly
well-known for its research in Computational Linguistics and Natural
Language Processing. In addition, the university campus hosts the
interdisciplinary MMCI Cluster of Excellence, Max Planck Institute for
Computer Science, Max Planck Institute for Software Systems and German
Research Center for Artificial Intelligence (DFKI). Students and
researchers come from many countries and the research language is
English.


Both positions are fully funded, with a salary in the range of
37,000 to 51,000 euros per year depending on the qualifications and
professional experience of the successful candidates. The starting
date is November 1st. The PhD position is for three years. The postdoc
position is for 2 years, with the possibility of extension for one
more year.

Each application should include:

* Curriculum Vitae including a list of publications
 (if applicable)
* Transcript of records
* Short statement of interest (not more than half a
 page)
* Names of two references
* Any other supporting information or documents

Applications (documents in PDF format in a single file) should be sent
no later than Monday, July 15th to:
Diana.Schreyer@LSV.Uni-Saarland.De

Further inquiries regarding the project should be directed to:
Olga.Petukhova@LSV.Uni-Saarland.De


Top

6-48(2013-06-30) Post-doc position (information retrieval and language understanding) at Saarland University, Germany

Postdoc position in the area of information retrieval and language
understanding at Saarland University

We are seeking a skilled postdoctoral researcher whose expertise
intersects information retrieval (IR) and human-computer interaction
(HCI). The researcher will work in a research team to create an
automated speech-based question-answering (QA) system for various
scenarios. This research will be done in cooperation with high-profile
partners in the US and Europe.


The successful applicant will have:

1) a doctoral degree in a relevant field of computational linguistics,
computer science, or a relevant discipline;

2) a strong publication record in relevant venues;

3) excellent programming skills;

4) strong collaborative skills, including possibly supervision of junior
researchers, students, or equivalent industrial experience;

5) a strong technical background in machine learning, natural language
processing, and human-computer interaction.



This work will be conducted at the Spoken Language Systems group
(http://www.lsv.uni-saarland.de/) at Saarland University.

Saarland University

Saarland University (http://www.uni-saarland.de/en/) is a European
leader in Computer Science research and teaching, and is particularly
well-known for its research in Computational Linguistics and Natural
Language Processing. In addition, the university campus hosts the
interdisciplinary MMCI Cluster of Excellence, Max Planck Institute for
Computer Science, Max Planck Institute for Software Systems and German
Research Center for Artificial Intelligence (DFKI). Students and
researchers come from many countries and the research language is
English.

The planned starting date is November 1st (an earlier starting date
is negotiable). The position is for 2 years, with the possibility of
extension for one more year. The position is fully funded, with a
salary in the range of 37,000 to 51,000 euros per year depending on
the qualifications and professional experience of the successful
candidate.

Each application should include:

* Curriculum Vitae including a list of publications (if applicable)
* Transcript of records
* Short statement of interest (not more than half a page)
* Names of two references
* Any other supporting information or documents

Applications (documents in PDF format in a single file) should be sent
no later than Monday, July 15th to:
Diana.Schreyer@LSV.Uni-Saarland.De

Further inquiries regarding the project should be directed to:
Olga.Petukhova@LSV.Uni-Saarland.De



Top

6-49(2013-07-01) Postdoctoral position at TTI-Chicago

### Postdoctoral position at TTI-Chicago ###
A postdoctoral position is available at TTI-Chicago on topics at the intersection of speech processing and machine learning.  The ideal candidate will have completed (or be about to complete) a PhD degree in computer science, electrical engineering, statistics, speech and language technologies, or a related field, and will have strong mathematical and experimental skills.  The main duties of the postdoc will be his/her research activities in collaboration with his/her supervisor and other collaborators at TTI-Chicago and beyond; opportunities for teaching and advising may also be available if desired.
To apply, or for additional information, please contact Karen Livescu at klivescu@uchicago.edu.
TTI-Chicago is a philanthropically endowed academic computer science institute with an accredited PhD program situated on the University of Chicago campus.

Top

6-50(2013-07-01) Postdoctoral position INRIA Nancy Grand-Est (Nancy, France) - Speech Group, LORIA

INRIA Nancy Grand-Est (Nancy, France) - Speech Group, LORIA

Postdoctoral position

Accurate 3D Lip modeling and control in the context of animating a 3D talking head

Scientific Context

The lips play a significant role in audiovisual human communication. Several studies have shown the important contribution of the lips to the intelligibility of visual speech (Sumby & Pollack, 1954; Cohen & Massaro, 1990). In fact, it has been shown that human lips alone carry more than half the visual information provided by the face (Benoît, 1996). Since the beginning of the development of 3D virtual talking heads, researchers have shown interest in modeling the lips (Guiard-Marigny et al., 1996; Reveret & Benoît, 1998), as the lips increase the intelligibility of the visual message. The existing models are still considered pure parametric and numerical models and do not take into account the dynamic characteristics of speech. As audiovisual speech is highly dynamic, we consider that modeling this aspect is crucial to provide a lip model that is accurately animated and reflects the real articulatory dynamics as observed in the human vocal tract. In fact, the movement of the lips, even subtle, can communicate relevant information to the human receiver. This is even more crucial for some populations, such as hard-of-hearing people.

Missions

The goal of this work is to develop an accurate 3D lip model that can be integrated within a talking head. A control model will also be developed. The lip model should be as dynamically accurate as possible; when designing this model, the focus will be on the dynamics. For this reason, one can start from a static 3D lip mesh, using a generic 3D lip model, and then use MRI images or 3D scans to obtain a more realistic shape of the lips. To take into account the dynamic aspect of lip deformation, we will use an articulograph (EMA) and motion capture techniques to track sensors or markers on the lips, and the mesh will be adapted to this data. To control the lips, we will consider allowing a skeletal animation to be controlled by the EMA sensors or motion capture markers, using inverse kinematics, widely used in 3D modeling. In line with conventional skeletal animation, an articulated armature rigged inside the mesh is mapped to vertex groups on the lip mesh by a weight map that can be defined automatically from the envelope of the armature's shape and manually adjusted if required; manipulating the armature's components deforms the surrounding mesh accordingly. The main challenge is to find the best topology of the sensors or markers on the lips, so as to capture their dynamics accurately. The main outcome is to accurately model and animate the lips based on articulatory data. It is very important that the resulting lips be readable enough to be lip-read by hard-of-hearing people.
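The armature-driven deformation described above is essentially linear blend skinning: each vertex of the lip mesh moves by a weight-map-blended combination of the bone (armature) transforms. A minimal sketch of that idea, under stated assumptions (a toy two-bone, two-vertex example; not project code):

```python
import numpy as np

def skin_vertices(rest_verts, weights, bone_transforms):
    """Linear blend skinning: deform rest-pose vertices by a weighted
    sum of per-bone rigid transforms (4x4 homogeneous matrices)."""
    n = rest_verts.shape[0]
    homo = np.hstack([rest_verts, np.ones((n, 1))])   # homogeneous coordinates
    out = np.zeros((n, 3))
    for b, T in enumerate(bone_transforms):
        # Each bone contributes its transformed position, scaled by its weight.
        out += weights[:, b:b + 1] * (homo @ T.T)[:, :3]
    return out

# Two toy 'lip' vertices and two bones (hypothetical weight map; rows sum to 1).
rest = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
w = np.array([[1.0, 0.0], [0.5, 0.5]])
identity = np.eye(4)
lift = np.eye(4)
lift[1, 3] = 1.0                                      # second bone translates up by 1
deformed = skin_vertices(rest, w, [identity, lift])
print(deformed)  # vertex 1 stays put; vertex 2 rises halfway
```

Inverse kinematics would then solve for the bone transforms that bring chosen mesh points onto the tracked EMA sensor or marker positions.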

Bibliography

C. Benoît (1996). On the Production and the Perception of Audio-Visual Speech by Man and Machine. Multimedia Communications and Video Coding, pp. 277-284.

M. M. Cohen & D. W. Massaro (1990). Synthesis of visible speech. Behavioral Research Methods and Instrumentation, 22, 260-263.

T. Guiard-Marigny, N. Tsingos, A. Adjoudani, C. Benoit, M.-P. Cani (1996). 3D Models of the Lips for Realistic Speech Animation. Computer Animation, pp. 80-89.

L. Reveret, C. Benoit (1998). A New 3D Lip Model for Analysis and Synthesis of Lip Motion in Speech Production. Proc. AVSP'98, Terrigal, Australia, Dec. 4-6, 1998.

W. H. Sumby & I. Pollack (1954). Visual contribution to speech intelligibility in noise. Journal of the Acoustical Society of America, 26, 212-215.

Q. Summerfield (1987). Some preliminaries to a comprehensive account of audio-visual speech perception. In: B. Dodd and R. Campbell (Eds.), Hearing by Eye: The Psychology of Lip-Reading. Lawrence Erlbaum, Hillsdale, NJ.

Competences

Required qualification: PhD in computer science. The appropriate candidate will have good knowledge of 3D modeling, speech processing and data analysis, as well as solid Java programming skills.

Additional Information

Application deadline: 11 June 2013

Supervision and contact: Slim Ouni (Slim.Ouni@loria.fr), http://www.loria.fr/~slim

Duration: 1 year (possibly extendable)

Starting date: between Sept. 1st 2013 and Jan. 1st 2014

Salary: 2,620 euros gross monthly (about 2,135 euros net), medical insurance included

Application Procedure

The required documents for an INRIA postdoc application are the following:

- CV, including a description of your research activities (2 pages max) and a short description of what you consider to be your best contributions and why (1 page max and 3 contributions max); the contributions could be theoretical, implementation, or industry transfers. Include also a brief description of your scientific and career projects.

- The report(s) from your PhD external reviewer(s), if applicable.

- If you haven't defended yet, the list of expected members of your PhD committee (if known) and the expected date of defense (the defense, not the manuscript submission).

- Your best publications, up to 3.

- At least one recommendation letter from your PhD advisor, and possibly up to two other letters. The recommendation letter(s) should be sent directly by their author to the prospective postdoc advisor.

All these documents should be sent before June 11th.

About INRIA

Established in 1967, Inria is the only public research body fully dedicated to computational sciences. Combining computer sciences with mathematics, Inria’s 3,400 researchers strive to invent the digital technologies of the future. Educated at leading international universities, they creatively integrate basic research with applied research and dedicate themselves to solving real problems, collaborating with the main players in public and private research in France and abroad and transferring the fruits of their work to innovative companies. The researchers at Inria published over 4,800 articles in 2010. They are behind over 270 active patents and 105 start-ups. The 171 project teams are distributed in eight research centers located throughout France.

http://www.inria.fr/en/centre/nancy

Top

6-51(2013-07-02) Postdoc at MxR Lab at the University of Southern California Institute for Creative Technologies, Ca, USA

The MxR Lab at the University of Southern California Institute for Creative Technologies, located in Playa Vista, CA, is seeking a postdoctoral researcher.  Applicants should have a Ph.D. in computer science or a related field and a strong research background in HCI, virtual environments, virtual humans, data visualization, novel user interfaces, or a similar area.

 

The University of Southern California (USC), founded in 1880, is located in the heart of downtown L.A. and is the largest private employer in the City of Los Angeles. As an employee of USC, you will be a part of a world-class research university and a member of the 'Trojan Family,' which is comprised of the faculty, students and staff that make the university what it is.

 

Initial appointment will be for one year with the possibility of renewal for subsequent years.  Please direct all inquiries to Evan Suma (suma@ict.usc.edu).

 

Applicants can apply online at:

http://jobs.usc.edu/applicants/Central?quickFind=70781 

Top

6-52(2013-07-03) ATER position in computer science at the UFR de Sociologie et d'Informatique pour les Sciences Humaines, Université Paris-Sorbonne

An ATER (temporary teaching and research associate) position in computer science is available at the UFR de Sociologie et d'Informatique pour les Sciences Humaines of Université Paris-Sorbonne.

The successful candidate will teach computer science in the various bachelor's and master's programmes of the department of Computer Science, Mathematics and Applied Linguistics. He or she will be expected to fit into one or more research axes of the computational linguistics team (www.stih.paris-sorbonne.fr/): semantics and knowledge; paralinguistics of speech and text; evaluative judgements, opinions and sentiments.

The application deadline is September 4th, 2013.
Contact: Claude.Montacie@Paris-Sorbonne.fr

Top

6-53(2013-07-09) PhD position at Trinity College, Dublin, Ireland

 

PhD Title: Birdsong Forensics for Species Identification and Separation

Studentship: Full Scholarship, including fees (EU/Non EU) plus annual stipend of €16,000.

Start Date: Sept 2nd 2013 

PhD Supervisor: Dr. Naomi Harte, Sigmedia Group, Electronic & Electrical Engineering, Trinity College Dublin, Ireland

Collaborator: Dr. Nicola Marples, Zoology, Trinity College Dublin, Ireland.

Background:

The analysis of birdsong has increased in the speech processing community in the past 5 years. Much of the reported research has concentrated on the identification of bird species from their songs or calls. Smartphone apps have been developed that claim to automatically identify a bird species from a live recording taken by the user. A lesser reported topic is the analysis of birdsongs from subspecies of the same bird. Among experts, bird song is considered a particularly effective way of comparing birds at species level. Differences in song may help uncover cryptic species. In many species, such as those living in the high canopy, catching the birds in order to obtain morphological (e.g. weight, bill length, wing length etc.) and genetic data may be time consuming and expensive. Identifying potentially interesting populations by the detection of song differences allows any such effort to be better targeted.

Birdsong presents many unique challenges as a signal. The use of signal processing and machine learning techniques for birdsong analysis is at a very early stage within the ornithological research community. This PhD project seeks to lead the way in defining the state of the art for forensic birdsong analysis. Comparing birdsongs will push out the boundaries of feature analysis and classification techniques in signal processing. The research will develop new algorithms to systematically quantify levels of similarity in birdsong, transforming the comparison of birdsong in the natural sciences arena. The results will be of importance internationally for the study, monitoring, and conservation of bird populations.

Requirements:

The ideal candidate for this position will:

- Have a primary degree (first class honours) in Electronic Engineering, Electronic and Computer Engineering, or a closely related discipline.

- Possess strong written and oral communication skills in English.

- Have a strong background and interest in digital signal processing (DSP).

- Be mathematically minded, and be curious about nature.

Experience in Matlab is a distinct advantage.

Application:

Interested candidates should send an email to Dr. Naomi Harte at nharte@tcd.ie. The email MUST include the following:

- Candidate CV (max 2 pages)

- A short statement of motivation (half page)

- Scanned academic transcripts

- Names and contact details for TWO academic referees

Incomplete applications may not be considered.

About the Sigmedia Group at TCD

Dr. Naomi Harte is an expert in Human Speech Communication. Her principal areas of focus are audio visual speech processing, speaker verification for biometrics and forensics, emotion in speech, speech processing in hearing aids and speech quality.

She is a leader of the Sigmedia Group at TCD (www.sigmedia.tv) within the School of Engineering. Over the past 5 years, Sigmedia has been awarded research income of over €3 million and published 73 peer-reviewed papers. The group currently has 3 academic and 3 post-doctoral staff along with 12 research students. The work of Sigmedia is supported by research grants from Science Foundation Ireland, Enterprise Ireland, the Irish Research Council, Google and DTS.

 


6-54(2013-07-10) Research Assistant in Computational Psycholinguistics, Univ. Maryland, USA

The Department of Linguistics at the University of Maryland is looking to fill a full-time position for a post-baccalaureate researcher, starting September 1, 2013 or as soon as possible thereafter. Salary is competitive, with benefits included. This person will be involved in computational psycholinguistics research, with a focus on using techniques from automatic speech recognition to better understand human speech perception. The person will have the opportunity to develop skills in Bayesian modeling and signal processing and will be part of a vibrant language science community that numbers 200 faculty, researchers, and graduate students across 10 departments.

The position would be ideal for individuals with a BA degree who are interested in gaining significant research experience in a very active research group as preparation for a research career. Applicants must be US or Canadian citizens or permanent residents, and should have completed a BA or BS degree by the time of appointment. Previous experience in cognitive science, as well as familiarity with mathematics, computer science, or signal processing, is preferred. This is a 1-year initial appointment with the possibility of extension.

Applicants should submit a cover letter outlining relevant background and interests, a current CV, and names and contact information for 3 potential referees. Reference letters are not needed as part of the initial application. Applicants should also send a writing sample. Applications should be submitted by email to Dr. Naomi Feldman, nhf@umd.edu, with 'Research Assistantship' in the subject line. Review of applications will begin immediately and will continue until the position is filled.
