ISCA - International Speech
Communication Association



ISCApad #197

Wednesday, November 12, 2014 by Chris Wellekens

6 Jobs
6-1(2014-06-05) Thesis grant in Neurophysiological Investigation of prosodic cues.... Univ. Toulouse II -III, F

Subject : « NEUROPROS- Neurophysiological Investigation of prosodic cues processing by monolingual French and Spanish speakers, and bilingual speakers (French-Occitan and French-Spanish) »

Supervisors: Barbara Köpke, Denis Fize, Corine Astésano, Radouane El Yagoubi

Host Laboratories:

U.R.I Octogone-Lordat (EA 4156), Université de Toulouse II

CERCO (UMR 5549), Université Paul Sabatier - Toulouse III

Discipline: Linguistics

Doctoral School: Comportement, Langage, Education, Socialisation, Cognition (CLESCO)

Scientific description of the research project:

The project falls within an eminently interdisciplinary approach (linguistics, cognitive neuropsychology and neuroscience) aimed at studying the processing of prosodic cues by monolingual and bilingual French speakers. French is a language with so-called post-lexical, non-distinctive accentuation, contrary to languages like Spanish, Catalan or Occitan where accentual patterns are represented in the lexical entry. These prosodic characteristics have led French to be considered a 'language without accent' (Rossi, 1980), which makes it difficult to integrate this language into models of speech processing (Cutler et al, 1997), since these models are mostly based on the metrical and accentual characteristics of languages (Cutler & Norris, 1988). These prosodic characteristics are also said to be responsible for some degree of 'stress deafness' in French listeners when processing foreign languages (Dupoux et al, 1997, inter alia). However, if one considers the French accentual system in all its complexity, taking into account the interaction between the primary final accent and the secondary initial accent in the marking of prosodic constituents (Di Cristo, 2000), it becomes possible to postulate a role of French accentuation in speech segmentation and lexical access strategies (Bagou & Frauenfelder, 2006). More particularly, the Initial Accent seems to play a predominant role in the marking of prosodic constituents in French (Astésano et al, 2007) and it is clearly perceived by naïve listeners (Astésano et al, 2012). Recent neuroimaging (EEG) studies indicate that metric incongruity slows lexical access in French (Magne et al, 2007). More recently, we showed in a MisMatch Negativity paradigm that French listeners can readily discriminate stress patterns in French and that the Initial Accent is encoded in long-term memory at the level of the lexical word (Aguilera et al, 2014).

It is now necessary to consolidate these results by extending our investigations to other EEG paradigms and by adapting the protocols to fMRI, in order to describe more precisely the neural substrates and the temporal dynamics of prosodic cue processing in French. Furthermore, these processing strategies have so far been observed in monolingual speakers only. Comparing the linguistic strategies of monolingual and bilingual speakers (French, Spanish and/or Catalan monolinguals; French/Occitan, French/Spanish or French/Catalan bilinguals) will not only allow us to considerably enrich our understanding of lexical access mechanisms in these languages with different prosodic systems, but also to observe the influence of the use of several languages with different stress patterns on the perception and processing of prosodic cues.

The selected candidate will benefit from a stimulating scientific environment: (s)he will join the interdisciplinary research unit Octogone-Lordat (Toulouse II: http://octogone.univ-tlse2.fr/) and will be co-supervised by Prof. Barbara Köpke, a specialist in bilingualism, and by Dr. Denis Fize, a neuroscientist and neuroimaging specialist at the Research Centre on Brain and Cognition (CERCO, Toulouse III). The research will take place within a research group led by Dr. Corine Astésano, a specialist in prosody, and with Dr. Radouane El Yagoubi, a specialist in cognitive neuroscience and psychology. The project is also connected to the French ANR research project PhonIACog (http://aune.lpl-aix.fr/~phoniacog/), coordinated by Dr. Corine Astésano.

Bibliography

Aguilera, M.; El Yagoubi, R.; Espesser, R.; Astésano, C. (2014). Event Related Potential investigation of Initial Accent processing in French. Speech Prosody 2014, Dublin, Ireland, May 20-23, 2014: 383-387.
Astésano, C.; Bard, E.; Turk, A. (2007). Structural influences on Initial Accent placement in French. Language and Speech, 50(3), 423-446.
Astésano, C.; Bertrand, R.; Espesser, R.; Nguyen, N. (2012). Perception des frontières et des proéminences en français. JEP-TALN-RECITAL 2012, Grenoble, June 4-8, 2012: 353-360.
Bagou, O. & Frauenfelder, U. H. (2006). Stratégie de segmentation prosodique: rôle des proéminences initiales et finales dans l'acquisition d'une langue artificielle. Proceedings of the XXVIèmes Journées d'Etude sur la Parole, 571-574.
Cutler, A. & Norris, D. (1988). The role of strong syllables in segmentation for lexical access. Journal of Experimental Psychology: Human Perception and Performance, 14(1), 113.
Cutler, A., Dahan, D. & Van Donselaar, W. (1997). Prosody in the comprehension of spoken language: A literature review. Language and Speech, 40(2), 141-201.
Di Cristo, A. (2000). Vers une modélisation de l'accentuation du français (seconde partie). Journal of French Language Studies, 10(1), 27-44.
Dupoux, E., Pallier, C., Sebastian, N. & Mehler, J. (1997). A destressing 'deafness' in French? Journal of Memory and Language, 36(3), 406-421.
Magne, C.; Astésano, C.; Aramaki, M.; Ystad, S.; Kronland-Martinet, R.; Besson, M. (2007). Influence of Syllabic Lengthening on Semantic Processing in Spoken French: Behavioral and Electrophysiological Evidence. Cerebral Cortex, 17(11), 2659-2668. doi: 10.1093/cercor/bhl174.
Rossi, M. (1980). Le français, langue sans accent? Studia Phonetica Montréal, 15, 13-51.

Required skills:

- Master in Linguistics, Cognitive Sciences, Neuropsychology or equivalent
- Experience in experimental phonetics and/or linguistics, psycholinguistics, neurolinguistics
- Skills in signal processing (speech, EEG, fMRI) are required, and dedication to developing these skills further is essential
- Experimental skills are desirable, as well as an interest in working with participants and motivation for recruiting participants
- Autonomy and motivation for learning new skills
- Good knowledge of French and English; knowledge of Spanish, Catalan or Occitan is an asset.

Salary:

- 1,684.93 € monthly gross (1,368 € net), 3-year contract

Calendar:

- Application deadline: 27 June 2014
- Interviews of selected candidates: 3 July 2014
- Start of contract: 1 October 2014

Applications must be sent to Corine Astésano (corine.astesano at univ-tlse2.fr) and will include:
- A detailed CV, with a list of publications if applicable
- A copy of grades for the Master's degree
- A summary of the Master's dissertation and a pdf file of the Master's dissertation
- A cover letter / letter of interest and/or scientific project (1 page max.)
- The names and email addresses of two scientific referees/supervisors.


6-2(2014-06-12) 2 PhD scholarships at the Italian Institute of Technology, Genova, Italy

 


 

1. Acoustic-articulatory modeling for automatic speech recognition

Tutors: Leonardo Badino, Lorenzo Rosasco, Luciano Fadiga

Department: Robotics, Brain and Cognitive Sciences (Istituto Italiano di Tecnologia), Genova, Italy

http://www.iit.it/rbcs

Description: State-of-the-art Automatic Speech Recognition (ASR) systems produce remarkable results in some scenarios but still lag behind human-level performance in several real usage scenarios, and often perform poorly whenever the type of acoustic noise, the speaker's accent or the speaking style is 'unknown' to the system, i.e., not sufficiently covered in the data used to train the ASR system.

The goal of the present theme is to improve ASR accuracy by learning representations of speech that combine the acoustic and the (vocal tract) articulatory domains, as opposed to purely acoustic representations, which only consider the surface level of speech (i.e., speech acoustics) and ignore its causes (the vocal tract movements). Although in real usage settings the vocal tract cannot be observed during recognition, it is still possible to exploit articulatory representations of speech, in which phonetic targets (i.e., the articulatory targets necessary to produce a given sound) are largely invariant (e.g., to speaker variability) and in which speech phenomena that are complex in the acoustic domain have simple descriptions.

Joint acoustic-articulatory modeling will be applied in two different ASR training settings: a typical supervised machine learning setting where phonetic transcriptions of the training utterances are provided by human experts, and a weakly supervised machine learning setting where much sparser and less informative labels (e.g., word-level rather than phone level labels) are available.
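As a purely illustrative sketch of what combining the two domains can mean at the simplest level, the Python snippet below (synthetic data and made-up feature dimensions, not the project's actual models) trains a frame-level phone classifier on acoustic features alone and on concatenated acoustic + articulatory features. Since in real usage the articulatory channel is not observable at recognition time, such a comparison only gives an oracle upper bound on what articulatory information could contribute.

    # Minimal sketch: frame classification from acoustic vs. acoustic+articulatory
    # features. Synthetic data stands in for real MFCC and articulograph measurements.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.RandomState(0)
    n_frames, n_phones = 5000, 10
    phones = rng.randint(0, n_phones, n_frames)            # frame-level phone labels
    artic = rng.randn(n_frames, 6) + phones[:, None]       # articulatory features (hypothetical)
    acoust = artic @ rng.randn(6, 13) + 3.0 * rng.randn(n_frames, 13)  # noisy acoustic view

    def evaluate(features):
        X_tr, X_te, y_tr, y_te = train_test_split(features, phones, random_state=0)
        clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
        return accuracy_score(y_te, clf.predict(X_te))

    print("acoustic only        :", evaluate(acoust))
    print("acoustic+articulatory:", evaluate(np.hstack([acoust, artic])))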

Requirements: The successful candidate will have a degree in computer science, bioengineering, physics or related disciplines, and a background in machine learning. An interest in neuroscience is a plus.

Reference: King, S., Frankel, J., Livescu, K., McDermott, E., Richmond, K., Wester, M. (2007). 'Speech production knowledge in automatic speech recognition'. Journal of the Acoustical Society of America, vol. 121(2), pp. 723-742.

Contacts: leonardo.badino@iit.it, lorosasco@mit.edu, luciano.fadiga@iit.it

2. Speech production for automatic speech recognition in human–robot verbal interaction

Tutors: Giorgio Metta, Leonardo Badino, Luciano Fadiga

Department: iCub Facility (Istituto Italiano di Tecnologia), Genova, Italy

http://www.iit.it/iCub

Description: State-of-the-art Automatic Speech Recognition (ASR) systems produce remarkable results in partially controlled scenarios but still lag behind human-level performance in unconstrained real usage situations, and perform poorly whenever the type of acoustic noise, the speaker's accent or the speaking style is 'unknown' to the system, i.e., not sufficiently covered in the data used to train the ASR system. The goal of this PhD theme is to attack the problem of ASR in human-robot conversation. To this aim, we will create a robust Key Phrase Recognition system in which commands delivered by the user to the robot (i.e., the key phrases) have to be recognized in unconstrained utterances (i.e., utterances with hesitations, disfluencies, additional out-of-task words, etc.), under the challenging conditions of human-robot verbal interaction, where speech is typically distant (from the robot) and noisy. To increase the robustness of the ASR, articulatory information will be integrated into a Deep Neural Network - Hidden Markov Model system.

This work will be carried out and tested on the iCub platform.
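As a toy illustration of the key-phrase recognition task (a schematic example over a 1-best ASR transcript, not the DNN-HMM system described above), the Python sketch below spots a command phrase as an ordered subsequence of the recognized words while tolerating a few hesitations or out-of-task words in between.

    # Toy key-phrase spotter: finds a command phrase as an ordered subsequence of
    # the recognized words, allowing a few intervening filler/out-of-task words.
    def spot(transcript, key_phrase, max_gap=2):
        words = transcript.lower().split()
        keys = key_phrase.lower().split()
        i, start, gap = 0, None, 0
        for pos, w in enumerate(words):
            if w == keys[i]:
                if start is None:
                    start = pos
                i += 1
                gap = 0
                if i == len(keys):
                    return (start, pos)              # (first, last) matched positions
            elif start is not None:
                gap += 1
                if gap > max_gap:                    # too many inserted words: restart
                    i, start, gap = 0, None, 0
                    if w == keys[0]:                 # current word may start a new match
                        start, i = pos, 1
        return None

    asr_output = "uh icub please could you um take the red ball yes the ball"
    print(spot(asr_output, "take the red ball"))     # -> (6, 9)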

Requirements: a background in computer science, bioengineering, computer engineering, physics or related disciplines. Solid programming skills in C++ and Matlab; GPU (CUDA) skills are a plus. An aptitude for problem solving and an interest in understanding/learning basic biology.

Reference: Barker, J., Vincent, E., Ma, N., Christensen, H., Green, P., (2013) 'The PASCAL CHiME Speech Separation and Recognition Challenge'. Computer Speech and Language, vol. 27(3), pp. 621-633.

Contacts: leonardo.badino@iit.it, giorgio.metta@iit.it, luciano.fadiga@iit.it

Additional information

Starting date: November 2014.

PhD scholarship: the scholarship will cover all fees with a gross salary of 16500 euros/year (≈1250 euros/month after taxes)


6-3(2014-06-11) Post-doc position at IMMI-CNRS

Post-doc position at IMMI-CNRS

A post-doctoral position is proposed at IMMI-CNRS (Orsay, France - http://www.immi-labs.org/). IMMI is an International Joint Research CNRS Unit (UMI) in the field of Multimedia and Multilingual Document Processing. It gathers three contributing partners: LIMSI-CNRS, RWTH Aachen and KIT (Karlsruhe Institute of Technology).

Context of the project

The project relies on an experimental platform for online monitoring of social media and information streams, with self-adaptive properties, in order to detect, collect, process, categorize, and analyze multilingual streams. The platform includes advanced linguistic analysis, discourse analysis, extraction of entities and terminology, topic detection, and translation; the project also includes studies on unsupervised and cross-lingual adaptation.

Requirements and objectives

A PhD in a field related to the project (translation, natural language processing or machine learning) is required. The candidate will perform research in the framework mentioned above, and will supervise the collection and annotation of the data. Salary will follow CNRS standard rules for contractual researchers, according to the experience of the candidate.

Contacts

  • Gilles Adda (adda [at] immi-labs.org)

Agenda

  • Opening date: August 2014
  • Application deadline: Open until filled
  • Duration: 24 months

   


6-4(2014-06-12) Postdoc position on conversation summarization, Univ.Aix-Marseille France
Postdoc position on conversation summarization
(Full time, one year - Closing date for applications 2014-07-01)

We are looking for an outstanding research scientist to join the 'SENSEI' European project (http://www.sensei-conversation.eu/). You will contribute to conversation analysis and summarization research to allow the exploitation of large quantities of comments in social media and spoken conversations.

Job description:
You will contribute to the design and development of speech and text
summarization technologies for conversational data such as social
media comments and tweets. There will be three components to the
system: linguistic analysis of the conversations, content selection
and aggregation, and generation of the summaries (text or other
media). The approach is expected to make use of recent machine
learning advances such as deep learning, and focus on limiting the
quantity of supervision needed. The prototype will be evaluated by
end-user professionals in ecological conditions.
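As a rough point of reference for the content selection component (a generic extractive baseline under simplifying assumptions, not necessarily the approach SENSEI will adopt), the Python sketch below scores comments by word frequency and greedily selects a non-redundant subset under a word budget.

    # Toy extractive content selection over a set of conversation comments:
    # frequency-based scoring plus greedy selection with a simple redundancy penalty.
    from collections import Counter

    def summarize(comments, max_words=25):
        freq = Counter(w for c in comments for w in c.lower().split())
        chosen, covered, used = [], set(), 0
        while used < max_words:
            best, best_score = None, 0.0
            for c in comments:
                if c in chosen:
                    continue
                words = set(c.lower().split())
                # reward frequent words not yet covered, normalized by comment length
                score = sum(freq[w] for w in words - covered) / (len(words) or 1)
                if score > best_score:
                    best, best_score = c, score
            if best is None:
                break
            chosen.append(best)
            covered |= set(best.lower().split())
            used += len(best.split())
        return chosen

    comments = ["The sound mix was terrible tonight",
                "Great concert, terrible sound though",
                "Parking was easy",
                "Sound problems again, the band deserved better"]
    print(summarize(comments))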

Profile:
The applicant must hold a PhD degree, preferably in the field of
natural language processing or machine learning. He/she should:
- Be proficient in Java or C++ programming, and in Python or PHP scripting
- Have experience with developing efficient NLP / machine learning systems
- Be keen on researching the literature and writing papers
- Enjoy teamwork and be autonomous

Location:
You will work at the LIF computer science lab at Aix-Marseille
University in France, at the Luminy campus next to the
calanques.

Dates:
Interviews will be held in July 2014; the postdoc will start in
September/October 2014 and last one year.

Contact:
Enquiries and applications should be sent to Benoit Favre:
benoit.favre@lif.univ-mrs.fr

SENSEI project page: http://www.sensei-conversation.eu/

6-5(2014-06-18) Two positions at the University of Cambridge, UK
 

6-6(2014-06-21) 3 PhD Positions in Speech Processing at LIG/Grenoble (France)

3 PhD Positions in Speech Processing at LIG/Grenoble (France)

 
The Study Group for Machine Translation and Automated Processing of Languages and Speech (GETALP) of LIG (Laboratory of Informatics of Grenoble) offers 3 PhD Positions in Speech Processing. We are looking for outstanding young research scientists to join the group on several projects involving speech processing.
 
Open Positions
 
  1. PhD / Automatic speech recognition and machine-assisted speech annotation for African languages
You will work in the context of the ALFFA project, which is truly interdisciplinary since it gathers not only technology experts (LIG, LIA, VOXYGEN) but also fieldwork linguists/phoneticians (DDL). The PhD will focus on analysing the capabilities of existing automatic speech processing systems to investigate the phonetic characteristics of languages or to annotate speech (especially on mobile devices: tablets, glasses, etc.), in order to provide an innovative digital assistant for the fieldwork linguist.
Start : Fall 2014
Duration : 36 months
Particular aspect : co-supervision with DDL lab in Lyon
Project Web Site : http://alffa.imag.fr
 

  2. PhD / Speech interaction for socio-affective ubiquitous agents and robots in ambient assisted living environments

You will work on a research and development project (CASSIE) involving academic and industrial stakeholders in spoken dialogue, assistive technologies, affective sciences and social robotics. The PhD objective is to design a spoken dialogue system that will interact with a user in her/his home through a ubiquitous (physical and/or virtual) and personalized agent. This dialogue system will be corpus-based, using an iterative machine learning approach hybridized with bootstrapped expert knowledge (derived from 'intelligent' annotations) from spontaneous and ecological data collected in a real or quasi-real environment (smart home) and situation (real scenario). The system will focus on the socio-affective dimensions of the interaction (socio-affective prosody, paralinguistic events, imitation, synchrony, etc.), especially the dynamics (timing) of the dialogue. One aspect of this PhD will also focus on the comparison of the same character implemented as a robot versus a virtual agent for interaction (empathy aspects, etc.).

 

Start : Fall 2014

Duration : 36 months

Contact : Veronique.Auberge@imag.fr & Benjamin.Lecouteux@imag.fr (+Laurent.Besacier@imag.fr)


3. PhD / Context-aware spoken dialogue in ambient assisted living environments

You will work on a research and development project (CASSIE) involving academic and industrial stakeholders in spoken dialogue, assistive technologies and social robotics. The PhD objective is to make a social cyber-physical agent 'aware' of its environment through sensors and/or connected objects. This contextual information will drive the system interaction (natural language understanding and dialogue). The heart of the research will be to build probabilistic and logical models for multimodal situation analysis and understanding in a domestic and multilingual context. For the experimental development and validation, the research will benefit from the fully-equipped LIG smart home (DOMUS).
    Start: Fall 2014
    Duration: 36 months (PhD)
 
Profiles
Applicants must hold a Master's degree in Computational Linguistics, Computer Science or Cognitive Sciences, preferably with experience in the fields of speech processing and/or natural language processing and/or machine learning. A good background in programming is also required.
He/she will also be involved in experimenting with the technology with human participants, who will be either French or English speakers. For this reason a good level of English is required, as well as a good command of French. Finally, effective communication skills in English, both written and verbal, are mandatory.
 
Location
Grenoble is a high-tech city with 4 universities. It is located at the heart of the Alps, in outstanding scientific and natural surroundings. It is 3h by train from Paris, 2h from Geneva, 1h from Lyon, 2h from Torino, and less than 1h from Lyon international airport.
 
Research Group Website : http://getalp.imag.fr 
 
Dates
Interviews will be held in July 2014 (until September 2014 if needed). Meetings during Interspeech 2014 in Singapore can also be organized.

6-7(2014-06-22) Two PhD student positions in phonetics or speech science, Saarland University, Saarbrücken, Germany

Two PhD student positions in phonetics or speech science, Saarland
University, Saarbrücken, Germany

Closing date 5 July 2014 (open until filled), positions starting 1
October 2014

http://www.coli.uni-saarland.de/~moebius/page.php?id=jobs


6-8(2014-06-25) RESEARCH FACILITATOR IN SPEECH TECHNOLOGY - CLOUDCAST NETWORK, Univ. Sheffield, UK

RESEARCH FACILITATOR IN SPEECH TECHNOLOGY - CLOUDCAST NETWORK

Applications are invited for a position as Research Facilitator in the Speech and Hearing (SPandH) research group and the Centre for Assistive Technology and Connected Healthcare at Sheffield University to work on CloudCAST, a recently-awarded international network funded by the Leverhulme Trust and coordinated by Professor Phil Green. The vision of CloudCAST is

'.. to provide a way in which rapid developments in machine learning and speech technology can be placed in the hands of professionals who deal with speech problems: therapists, pathologists, teachers, assistive technology experts.. We intend to do this by creating a free-of-charge, remotely-located, internet-based resource 'in the cloud' which will provide a set of software tools for personalised speech recognition, diagnosis, interactive spoken language learning and the like. We will provide interfaces which make the tools easy to use for people who are not speech technology experts and create a self-sustaining CloudCAST community to manage future development.'

CloudCAST involves collaboration with


The Facilitator will be responsible for the software engineering required to build the CloudCAST resource. This involves taking algorithms and data which have been developed for research and knitting them into a form that

  •     is accessible to people who are not experts in speech technology,
  •     has a uniform look-and-feel,
  •     allows for amendments and additions,
  •     encourages others to contribute
  •     is available over the internet ('resides in the cloud').


The Facilitator may also become involved with pilot research studies using the resource, and will be responsible for organising and participating in an extensive series of visits between the 4 sites involved.

A good degree in Computer Science, Software Engineering, Mathematics or a closely-related subject is required for all applicants. An appointment at Grade 7 will require a PhD in speech technology or equivalent industrial experience. Applicants should have knowledge of speech technology and software engineering skills. The supporting documentation gives further details.

This is a full-time post, available now.

For supporting documentation and details of how to apply, visit

http://www.jobs.ac.uk/job/AIW424/research-facilitator-in-speech-technology-cloudcast-network/

Informal enquiries to Professor Phil Green, p.green@shef.ac.uk

Closing Date: 30th June 2014.

 


6-9(2014-06-27) Research professorship at KU Leuven ESAT/PSI – Audio and/or Speech Processing
Research professorship at KU Leuven ESAT/PSI – Audio and/or Speech Processing

The division ESAT/PSI (Processing Speech & Images, http://www.esat.kuleuven.be/psi) performs
fundamental and applied research in the broad field of audio-visual information processing. The 
research is multidisciplinary and integrates expertise from engineering, physics, mathematics, 
medicine, linguistics, machine learning and computational science. New methods are developed and 
validated in computer vision, medical imaging, speech and audio processing and other application 
fields. PSI is one of the leading labs in its areas of research. The division is part of the EE 
department (ESAT) of the University of Leuven, the largest and highest-ranked university in 
Belgium. Leuven lies about 25 km east of Brussels and 15 minutes from Brussels airport by train.

To strengthen and widen its research domain, the PSI division is looking for a research 
professor in the area of audio and/or speech processing. The focus is on the interpretation of 
large amounts of these data, possibly in combination with other sensorial data (e.g. images). We 
live in an environment where sound is ubiquitous and the interpretation of speech and other 
sounds is crucial for safety, for communication, for understanding of our environment, ... In 
many applications the ability of a computer to achieve human-like performance in this respect is 
highly desired and worldwide a lot of research effort is spent to achieve this goal. We want to 
expand our own research lines by hiring a new professor to enlarge the existing group with new 
projects and researchers exploring new ideas and paradigms that advance the state-of-the-art in 
this area.

The candidate must be an internationally recognized researcher, with a strong publication 
record. At the start of the mandate he/she must have at least 3 years of experience in 
scientific research as a postdoc, with hands-on experience in supervising PhD students. 
Experience with successful project grant writing is a definite plus. He/she also needs to 
possess didactic qualities. The position is primarily research-oriented, but the applicants must 
be prepared and are also expected to undertake limited teaching assignments. Applicants should 
be prepared to learn Dutch.

Entering research professors are appointed with a rank depending on their qualifications. Young 
researchers with at least 3 years and less than 7 full years of postdoctoral experience at the 
time of the appointment are typically offered a Tenure Track position, without excluding a 
higher academic position. Advanced researchers with at least 7 years of postdoctoral experience 
at the time of appointment are typically hired as a full professor, without excluding a Tenure 
Track position.

Applications should include a CV (incl. a complete publication list) and an abstract (1-2 pages) 
of a research proposal for the coming five years. They should be submitted by e-mail as soon as 
possible, and no later than August 31st, 2014, to

Katholieke Universiteit Leuven
Department of Electrical Engineering - ESAT
Center for Processing Speech and Images - PSI
Kasteelpark Arenberg 10 bus 2441
3001 Heverlee, Belgium
E-Mail: patrick dot wambacq at esat dot kuleuven dot be

Applicants may be invited to give a seminar to the staff of the research division ESAT/PSI. 
Subsequently, promising candidates will be asked to participate in the university-wide selection 
procedure for research professorships. Each year the KU Leuven appoints a number of research 
professors. These positions are financed by a university fund called 'BOF' (Bijzonder 
Onderzoeksfonds) that is funded by the Flemish Government.

6-10(2014-06-26) Two 2-year post-doctoral positions, Univ. Aix-Marseille, FR

Call for two 2-year post-doctoral positions

Laboratoire Parole et Langage (UMR 7309 Aix-Marseille Université / CNRS)

Aix-en-Provence, France

Principal investigator: Serge Pinto, Ph.D

 

Dysarthria in Parkinson’s disease: Lusophony vs. Francophony comparison (FraLusoPark)

Parkinson's disease (PD) is classically characterized by a symptomatic triad that includes rest tremor, akinesia and hypertonia, and although the motor expression of the symptoms mainly involves the limbs, the muscles implicated in speech production are also subject to specific dysfunctions. Motor speech disorders, known as dysarthria, can thus develop in PD patients. The main objective of our project is to evaluate the physiological parameters (acoustics), perceptual markers (intelligibility) and psychosocial impact of dysarthric speech in PD, in the context of language (French vs. Portuguese) modulations. PD patients will be enrolled in the study in Aix-en-Provence, France and Lisbon, Portugal. The proposed position concerns data acquisition and analysis at the French site (Aix-en-Provence).

In order to achieve the goals of this project, one post-doctoral position is offered to a young and dynamic researcher. The candidate, who should have experience in speech science research (acoustics, perception, prosody), will participate in the acquisition and analysis of speech data.

This project benefits from a bilateral ANR/FCT financial support (for the French side: project n° ANR-13-ISH2-0001-01).

 

Interested candidates should contact the principal investigator by sending:

-          a detailed CV

-          a letter of motivation

-          letters of recommendation (optional)

 

Duration of the position: 2 years (full-time)

Monthly salary: 2 000 € net

Application deadline: 2014, September 30th

Starting date: 2014, November 1st

For supplementary information and applications: serge.pinto@lpl-aix.fr




6-11(2014-07-01) Postdoc position in Speech Synthesis, Saarland University, Saarbrücken, Germany

*Postdoc position in Speech Synthesis* (full-time, 2 years
from October 2014, extendable) at Saarland University, Saarbrücken, Germany

Please see:
http://www.coli.uni-saarland.de/~steiner/job_advertisement.pdf


6-12(2014-07-15) Doctoral contract: Phonetic reduction in French, Univ. Paris 3
The LabEx EFL (Empirical Foundations of Linguistics) offers a 3-year doctoral contract.

PHONETIC REDUCTION IN FRENCH

The proposed topic concerns phonetic reduction in continuous speech, intra- and inter-speaker variability, and the links between phonetic reduction, prosody and speech intelligibility.

The doctoral student will carry out his/her research at the LPP (Laboratoire de Phonétique et de Phonologie), a joint CNRS/Université Paris 3 Sorbonne Paris Cité research unit. See the laboratory's work on this topic at http://lpp.in2p3.fr

The selected candidate will be supervised by Martine Adda-Decker and Cécile Fougeron, CNRS research directors, and will be enrolled in the doctoral school ED268 of the Université Sorbonne Nouvelle.

The doctoral student will benefit from the resources of the laboratory, of the doctoral school ED268 and of the interdisciplinary research environment of the Laboratoire d'Excellence EFL. He/she will be able to attend weekly phonetics and phonology research seminars at the LPP and in other research teams, lectures given by invited professors of international stature, as well as training courses, conferences and summer schools.

  • Conditions

- a good command of French (spoken and written)
- successful completion of a first personal research project
- no nationality requirements apply.

The candidate should have knowledge and skills in the processing of acoustic and/or articulatory data (ultrasound, video, EGG, etc.). Knowledge of computer science and statistical analysis would be a plus.

  • Documents to include in the application
  1. a CV
  2. a cover letter
  3. the Master's (M2) dissertation
  4. a letter of recommendation
  5. the names of two referees (with their email addresses)

Application deadline: 20 September 2014

  • Preselection based on the application file, followed by interviews of shortlisted candidates

Shortlisted candidates will be interviewed at the end of September (between 24 and 30 September), on site or by videoconference.

Contact:
Martine Adda-Decker, CNRS research director
madda@univ-paris3.fr

Address for applications:
madda@univ-paris3.fr
ILPGA
19 rue des Bernardins
75005 Paris
Université Paris 3
 


6-13(2014-07-16) POST-DOCTORAL POSITION at LPL - AIX-EN-PROVENCE, FRANCE

POST-DOCTORAL POSITION FOR THE PROJECT PhonIACog (LPL - AIX-EN-PROVENCE, FRANCE)
*****************************************************************************************************

We invite applications for a one-year post-doctoral position at the Laboratoire Parole et Langage (LPL, Aix-Marseille Université, CNRS, UMR 7309, France), to work on the project PhonIACog ('The role of the Initial Accent in prosodic structuring in French - From phonology to speech processing'; main coordinator: Corine Astésano, Université de Toulouse 2).

•    Description
The PhonIACog project is funded by the French National Research Agency (ANR).

The present project aims at describing the characteristics of the French accentual system in order to bring to light the underlying phonological structure of this language. It addresses the status of the bipolar pattern /IA FA/ (initial accent-final accent), considered as the basic metric pattern in French. We propose to apply the same analyses to different corpora, from laboratory speech to semi-controlled speech and dialogic spontaneous interaction. The production studies will allow us to refine the acoustic-phonetic characterization of IA and FA, with potential application to automatic detection of prosodic cues on large, spontaneous corpora.

More information is available at the project website: http://aune.lpl-aix.fr/~phoniacog/

•    Job description
The post‐doctoral fellow will be mainly involved in data processing. He/she will participate in the acoustic analyses and will then have to implement the statistical analyses planned in the project.

•    Qualifications
A Ph.D. in linguistics (experimental phonetics/prosody) or in computer science and solid competence/experience in statistics and data analysis are required. Experience in the processing and analysis of large speech databases is also welcome.

•    Application procedure
Candidates should send a detailed CV with a list of publications, and a cover letter with statement of research interests and details of their experience in data analysis.
Please e-mail documents to: roxane.bertrand@lpl-aix.fr (Roxane Bertrand, Scientific coordinator LPL, Aix-en-Provence, France).

Deadline for submission: September 30, 2014
Expected start date: November 2014 (with some flexibility)
Length of contract: 12 months
Salary: about €2000/month including health care




6-14(2014-08-10) Research and development engineer position, natural language processing, Avignon. Sept 2014.

Research and development engineer position, natural language processing, Avignon. September 2014.

The Laboratoire d'Informatique d'Avignon (Université d'Avignon, lia.univ-avignon.fr) and the Laboratoire d'Informatique Fondamentale de Lille (http://www.lifl.fr/) are seeking to fill a research and development engineer position.

Within the framework of the ANR project MaRDi (Man-Robot Dialogue), this is a **12-month fixed-term contract (CDD)** based in Avignon (with occasional stays in Lille), focused on the development of a spoken dialogue platform. The architecture of the existing solution, based on state-of-the-art statistical approaches, will have to be completely redesigned in order to improve its modularity and to allow its use as a web service, so as to develop online data collection (crowdsourcing) approaches.

Excellent programming and software architecture skills are expected. More specifically, the skills sought include:
- Programming languages: C++ or Java;
- Scripting languages: Python, Perl;
- Web development: HTML/Javascript/CSS/XML, web services (REST...), application servers (Tomcat, Glassfish...);
- Natural language processing: general knowledge of spoken interaction systems (speech recognition and understanding, dialogue management, text generation, speech synthesis...).

The expected salary is 2,400 euros gross per month and may vary according to profile and experience. The position is open to beginners (recent graduates of engineering schools or Master's programmes in computer science, or in 'linguistics and computer science' if coupled with significant computer science knowledge) as well as to recent PhDs with good programming skills.

The profile has a strong development component; however, the expected work is embedded in the research activities of two very active research groups and lends itself to publications (a post-doc arrangement is possible). Depending on the quality of the recruited engineer/PhD, the two laboratories have a steady stream of projects that would allow the contract to be extended beyond the 12 months.

Candidates may send an application (CV, cover letter and any letters of recommendation, in PDF) to fabrice.lefevre-_-à-_-univ-avignon.fr. The desired starting date is **September/October 2014**. The offer remains valid until a candidate is recruited.
==========================================================================

-- 
Fabrice Lefèvre, LIA-CERI-Univ. Avignon
BP 91228, 84911 Avignon Cedex 9, FRANCE
tel 33 (0)4 90 84 35 63/ fax - - - - 01

6-15(2014-08-18) Two postdoc positions in speech processing at the Department of Signal Processing and Acoustics, Aalto University, Finland.

Department of Signal Processing and Acoustics, Aalto University (formerly known as the Helsinki University of Technology), is looking for outstanding candidates for two postdoc positions:

 

 

 

Postdoc position in Computational Modeling of Language Acquisition

The speech technology group (led by Prof. Unto Laine) at Aalto University works on computational modeling of language acquisition, perception and production. The overall goal is to understand how spoken language skills can be acquired by humans or machines through communicative interaction and without supervision. The research in our topic involves cross-disciplinary effort across fields such as machine learning, signal processing, speech processing, linguistics, and cognitive science. The research is funded by the Academy of Finland.

We are currently looking for a postdoc to join our research team to work on our research themes, including:

 

  • pattern discovery from speech

  • articulatory modeling and inversion

  • modeling and methods for autonomous acquisition of lexical, phonetic and grammatical structure from speech input

  • multimodal statistical learning (associative learning between multiple input domains such as speech, articulation and vision).

 

Postdoc: 2 years. Starting date: as soon as possible.

 

Send your application, CV and references directly by email to

D.Sc. (Tech.) Okko Räsänen, okko.rasanen at aalto.fi

 

Postdoc position in Speech Synthesis and Voice Source Analysis

The speech communication technology research group (led by Prof. Paavo Alku) at Aalto University works on interdisciplinary topics aiming at describing, explaining and reproducing communication by speech. The main topics of our research are: analysis and parameterization of speech production, statistical parametric speech synthesis, enhancement of speech quality and intelligibility in mobile phones, robust feature extraction in speech and speaker recognition, occupational voice care and brain functions in speech perception.

We are currently looking for a postdoc to join our research team to work on the team’s research themes, particularly in the following topics:



  • statistical speech synthesis

  • voice source analysis

  • speech intelligibility improvement

 

Postdoc: 1-3 years. Starting date: January 2015

 

Send your application, CV and references directly by email to

Prof. Paavo Alku, paavo.alku at aalto.fi

 

All positions require a relevant doctoral degree in CS or EE, skills for doing excellent research in a group, and outstanding research experience in any of the research themes mentioned above. The candidate is expected to perform high-quality research and assist in supervising PhD students. Please send your application email with the subject line “Aalto post-doc recruitment, autumn 2014”.

In Helsinki you will join the innovative international computational data analysis and ICT community. Among European cities, Helsinki is special in being clean, safe, liberal, Scandinavian, and close to nature, in short, having a high standard of living. English is spoken everywhere. See, e.g., http://www.visitfinland.com/

 

 


6-16(2014-08-15) 2 (W/M) researcher positions at IRCAM for Large-Scale Audio Indexing
Positions: 2 (W/M) researcher positions at IRCAM for Large-Scale Audio Indexing
Starting:     September 1st, 2014
Duration:     12 months
Deadline for application:   As Soon As Possible

 

 

The BeeMusic project aims at providing the description of music for large-scale collections (several millions of music titles). In this project IRCAM is in charge of the development of music content description technologies (automatic genre or mood recognition, audio fingerprint …) for large-scale music collections.

 

Position description 201406BMRESA:

 

For this project IRCAM is looking for a researcher for the development of the technologies of automatic genre and mood recognition.

 

The hired researcher will be in charge of the research and development of scalable supervised learning technologies (i.e., scaling GMM, PCA or SVM algorithms) applicable to millions of annotated examples.
He/she will then be in charge of applying the developed technologies to the training of large-scale music genre and music mood models and of their application to large-scale music catalogues.
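As an illustration of one standard way such scaling is achieved (a sketch under simplifying assumptions, not IRCAM's actual pipeline), the Python snippet below trains a linear, hinge-loss (SVM-like) genre classifier out-of-core with scikit-learn's SGDClassifier, consuming minibatches so that the full dataset never has to fit in memory; the minibatch generator and feature dimensions are made up for the example.

    # Sketch of out-of-core training of a linear (hinge-loss, SVM-like) genre
    # classifier with minibatches, one way to scale supervised learning to millions
    # of annotated examples.
    import numpy as np
    from sklearn.linear_model import SGDClassifier

    rng = np.random.RandomState(0)
    n_genres, dim = 5, 40
    classes = np.arange(n_genres)
    centers = rng.randn(n_genres, dim)

    def minibatches(n_batches=200, batch_size=512):
        """Stands in for a stream of audio-feature vectors read from disk."""
        for _ in range(n_batches):
            y = rng.randint(0, n_genres, batch_size)
            X = centers[y] + rng.randn(batch_size, dim)
            yield X, y

    clf = SGDClassifier(loss="hinge", alpha=1e-5)
    for X, y in minibatches():
        clf.partial_fit(X, y, classes=classes)       # incremental update, constant memory

    X_test, y_test = next(minibatches(1, 2000))
    print("held-out accuracy:", clf.score(X_test, y_test))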
 
Required profile:
* High skill in audio indexing and data mining (the candidate must hold a PhD in one of these fields)
* Previous experience with scalable machine-learning models
* High skill in Matlab programming, skills in C/C++ programming
* Skill in audio signal processing (spectral analysis, audio-feature extraction, parameter estimation)
* Good knowledge of Linux, Windows, MacOS environments
* High productivity, methodical work, excellent programming style.

 

The hired researcher will also collaborate with the development team and participate in the project activities (evaluation of technologies, meetings, specifications, reports).

 

Position description 201406BMRESB:

 

For this project IRCAM is looking for a researcher for the development of the technologies of audio fingerprint.

 

The hired researcher will be in charge of the research and development of audio fingerprint technologies that are robust to audio degradations (sound captured through mobile phones in noisy environments) and of fingerprint search algorithms for large-scale databases (millions of music titles).
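For concreteness, the Python sketch below shows the classic shape of such a system (purely illustrative; the landmark extraction front end, hash values and track names are made up): fingerprint hashes are stored in a hash table mapping each hash to (track, time), and a query excerpt is identified by voting on consistent (track, time-offset) pairs, which keeps the lookup far below a linear scan of the catalogue.

    # Sketch of fingerprint indexing and lookup. Each track is assumed to be
    # represented by 'landmarks': (hash, time) pairs produced by some peak-pairing
    # front end (not implemented here).
    from collections import defaultdict, Counter

    index = defaultdict(list)                        # hash -> list of (track_id, time)

    def add_track(track_id, landmarks):
        for h, t in landmarks:
            index[h].append((track_id, t))

    def identify(query_landmarks):
        votes = Counter()
        for h, t_query in query_landmarks:
            for track_id, t_ref in index.get(h, ()):
                votes[(track_id, t_ref - t_query)] += 1   # consistent offset => same track
        if not votes:
            return None
        (track_id, _offset), score = votes.most_common(1)[0]
        return track_id, score

    # Tiny example with invented landmarks.
    add_track("title_A", [(101, 0), (205, 3), (317, 7), (422, 12)])
    add_track("title_B", [(111, 0), (205, 4), (350, 9)])
    query = [(205, 1), (317, 5), (422, 10)]          # excerpt of title_A, shifted in time
    print(identify(query))                           # -> ('title_A', 3)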
 
Required profile:
* High skill in audio signal processing and audio fingerprint design (the candidate must hold a PhD in one of these fields)
* High skill in indexing technologies and distributed computing (hash tables, Hadoop, SOLR)
* High skill in Matlab programming, skills in Python and Java programming
* Good knowledge of Linux, Windows, MacOS environments
* High productivity, methodical work, excellent programming style.

 

The hired researcher will also collaborate with the development team and participate in the project activities (evaluation of technologies, meetings, specifications, reports).

 

Introduction to IRCAM:
 
IRCAM is a leading non-profit organization associated with the Centre Pompidou, dedicated to music production, R&D and education in sound and music technologies. It hosts composers, researchers and students from many countries cooperating in contemporary music production, scientific and applied research. The main topics addressed in its R&D department include acoustics, audio signal processing, computer music, interaction technologies and musicology. IRCAM is located in the centre of Paris near the Centre Pompidou, at 1, Place Igor Stravinsky, 75004 Paris.

 

 

Salary:
According to background and experience

 

 

Applications:
Please send an application letter with the reference 201406BMRESA or 201406BMRESB together with your resume and any suitable information addressing the above issues, preferably by email, to: peeters_at_ircam dot fr with cc to vinet_at_ircam dot fr, roebel_at_ircam dot fr

 


6-17(2014-08-18) Two positions in natural language dialogue at Orange, Lannion

2 openings in the NADIA natural language dialogue team at Orange Labs in Lannion.

 

- Permanent position (CDI): research engineer, data scientist & statistical analysis (f/m): http://orange.jobs/jobs/offer.do?joid=40611&lang=fr&wmode=light

 

- Fixed-term post-doc (CDD): study of voice and touch dialogue with connected objects in the home environment, from a 'wearable device': http://orange.jobs/jobs/offer.do?joid=40954&lang=fr&wmode=light

 

Positions are open to non-French speakers, but no English description is available.


6-18(2014-08-25) Postdoctoral positions at the LIMSI-CNRS lab, Orsay, France

Postdoctoral positions are available at the LIMSI-CNRS lab. The
positions are all one year, with possibilities of extension.  We are
seeking researchers in machine learning and natural language processing.

Topics of interest include, but are not limited to:
- Speech translation
- Bayesian models for natural language processing
- Multilingual topic models
- Word Sense Disambiguation
- Statistical Language Modeling

Candidates must possess a Ph.D. in machine learning or natural
language/speech processing. Please send your CV and/or questions to
Alexandre Allauzen (allauzen@limsi.fr) and François Yvon
(yvon@limsi.fr).

Duration: 12 months, starting Fall or Winter 2014, with a possibility
to extend for an additional 12 months.

Application deadline: Open until filled

The successful candidates will join a dynamic research team working on
various aspects of Statistical Machine Translation and Speech
Processing. For information regarding our activities, see
http://www.limsi.fr/Scientifique/tlp/mt/

About the LIMSI-CNRS:
The LIMSI-CNRS lab is situated at Orsay, a green area 25 km south of
Paris. A suburban train connects Orsay to Paris city center. Detailed
information about the LIMSI lab can be found at http://www.limsi.fr


6-19(2014-08-28) Permanent (CDI) research engineer position at Orange Labs in Rennes, France

Announcement of a permanent (CDI) research engineer position
at Orange Labs in Rennes:

http://orange.jobs/jobs/offer.do?joid=40644&lang=fr

This is a Data Scientist position in the analysis of unstructured data:
texts or multimedia content of various kinds (books, press, tweets,
photos, audio and video podcasts...).


6-20(2014-09-09) Doctoral contract offered by the Laboratoire d'Excellence 'Empirical Foundations of Linguistics' (LabEx EFL)

*** deadline extended to 5 October 2014 ***

The LabEx EFL (Empirical Foundations of Linguistics) offers a 3-year doctoral contract.

PHONETIC REDUCTION IN FRENCH

The proposed topic concerns phonetic reduction in continuous speech, intra- and inter-speaker variability, and the links between phonetic reduction, prosody and speech intelligibility.

The doctoral student will carry out his/her research at the LPP (Laboratoire de Phonétique et de Phonologie), a joint CNRS/Université Paris 3 Sorbonne Paris Cité research unit. See the laboratory's work on this topic at http://lpp.in2p3.fr

The selected candidate will be supervised by Martine Adda-Decker and Cécile Fougeron, CNRS research directors, and will be enrolled in the doctoral school ED268 of the Université Sorbonne Nouvelle.

The doctoral student will benefit from the resources of the laboratory, of the doctoral school ED268 and of the interdisciplinary research environment of the Laboratoire d'Excellence EFL. He/she will be able to attend weekly phonetics and phonology research seminars at the LPP and in other research teams, lectures given by invited professors of international stature, as well as training courses, conferences and summer schools.

  • Conditions

- a good command of French (spoken and written)
- successful completion of a first personal research project
- no nationality requirements apply.

The candidate should have knowledge and skills in the processing of acoustic and/or articulatory data (ultrasound, video, EGG, etc.). Knowledge of computer science and statistical analysis would be a plus.

  • Documents to include in the application
  1. a CV
  2. a cover letter
  3. the Master's (M2) dissertation
  4. a letter of recommendation
  5. the names of two referees (with their email addresses)

Application deadline: 20 September 2014, extended to **5 OCTOBER 2014**

  • Preselection based on the application file, followed by interviews of shortlisted candidates

Shortlisted candidates will be interviewed in mid-October, on site or by videoconference.

Contact:
Martine Adda-Decker, CNRS research director
madda@univ-paris3.fr

Address for applications:
madda@univ-paris3.fr
ILPGA
19 rue des Bernardins
75005 Paris
Université Paris 3

Labex EFL: http://www.labex-efl.org/

Reference: http://www.labex-efl.org/?q=fr/node/261

 

 


6-21(2014-09-17) Post-doctoral positions at LIMSI-CNRS, Paris Saclay (Paris Sud)
Post-doctoral positions at LIMSI-CNRS, Paris Saclay (Paris Sud)
University.

LIMSI is a multi-disciplinary research unit that addresses the automatic
processing of human language for a range of tasks.

LIMSI invites applications for a one-year postdoctoral position in
Natural Language Processing.  The topic is as follows:

Dialogue management in a human-machine dialogue system where the
system plays the role of a patient during a medical consultation with
a doctor.

CONTEXT

The postdoctoral fellow will contribute to the following
project:

Patient Genesys (http://www.patient-genesys.com/): in the
framework of continuous medical education, the goal of the project is to
design and develop a framework for virtual patient-doctor consultation.
This is a collaborative project including a hospital and small and
medium enterprises.

JOB REQUIREMENTS

- Ph.D. in Computer Science, Natural Language Processing, Computational
  Linguistics
- Solid programming skills
- Strong publication record
- A good command of French is a plus
- Knowledge of medical terminologies and ontologies is a plus

ADDITIONAL INFORMATION

Net salary: between 2000 and 2400 € per month, according to experience

Benefits: LIMSI offers a generous benefit package including health
insurance and 44 days of vacation per year.

Duration: 12 months, renewable depending on performance and funding
availability

Start date: 1st October 2014

Location: Orsay, greater Paris area, France

TO APPLY

Please send:
* a cover letter
* a curriculum vitae, including a list of publications
* the names and contact information of at least two referees
to both:
     Sophie Rosset (rosset@limsi.fr)
     Anne-Laure (annlor@limsi.fr)
     Pierre Zweigenbaum (pz@limsi.fr)

Application deadline:  October 15th, 2014
Applications will be examined in the following week.

ABOUT LIMSI-CNRS

LIMSI is a laboratory of the French National Center for Research (CNRS),
a leading research institution in Europe.

LIMSI is a multi-disciplinary research unit that covers a number of
fields from thermodynamics to cognition, encompassing fluid mechanics,
energetics, acoustic and voice synthesis, spoken language and text
processing, vision, visualisation and perception, virtual and augmented
reality.

LIMSI hosts about 200 researchers, professors, research support staff
and graduate students. It is located in a green area about 30 minutes
south of Paris.

6-22(2014-09-18) Post-doc position at LORIA (Nancy, France)

 

Post-doc position at LORIA (Nancy, France)

Automatic speech recognition: contextualisation of the language model by dynamic adjustment

Framework of ANR project ContNomina

The technologies involved in information retrieval in large audio/video databases are often based on the analysis of large, but closed, corpora, and on machine learning techniques and statistical modeling of the written and spoken language. The effectiveness of these approaches is now widely acknowledged, but they nevertheless have major flaws, particularly where proper names are concerned, even though these are crucial for the interpretation of the content.

In the context of diachronic data (data which change over time), new proper names appear constantly, requiring dynamic updates of the lexicons and language models used by the speech recognition system.

As a result, the ANR project ContNomina (2013-2017) focuses on the problem of proper names in automatic audio processing systems by exploiting in the most efficient way the context of the processed documents. To do this, the post-doc student will address the contextualization of the recognition module through the dynamic adjustment of the language model in order to make it more accurate.

Post-doc subject

The language model of the recognition system (an n-gram model learned from a large text corpus) is available. The problem is to estimate the probability of a new proper name given its context. Several directions will be explored: adapting the language model, using a class-based model, or studying the notion of analogy.

Our team has developed a fully automatic speech recognition system that transcribes a radio broadcast from the corresponding audio file. The post-doc will develop a new module whose function is to integrate new proper names into the language model.
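
(Purely as an illustration, and not part of the project description: one of the directions mentioned above, handling new proper names through a class-based model, could be sketched roughly as below. The class tag, probabilities and API are invented for the example.)

import math

class ClassBigramLM:
    """Toy class-based bigram LM: P(w2|w1) = P(class(w2)|w1) * P(w2|class(w2))."""
    def __init__(self, bigram_logp, class_members):
        self.bigram_logp = bigram_logp          # {(w1, tag_or_word): log10 prob}
        self.class_members = class_members      # {tag: {word: in-class prob}}

    def add_proper_name(self, name, tag, mass=0.1):
        """Inject an unseen proper name into a class and renormalise the
        in-class distribution; the n-gram part of the model is untouched."""
        members = self.class_members[tag]
        members[name] = mass
        total = sum(members.values())
        for w in members:
            members[w] /= total

    def logprob(self, w1, w2):
        for tag, members in self.class_members.items():
            if w2 in members:
                return self.bigram_logp[(w1, tag)] + math.log10(members[w2])
        return self.bigram_logp.get((w1, w2), -99.0)   # crude out-of-vocabulary floor

lm = ClassBigramLM(
    bigram_logp={("president", "<PERSON>"): -0.5},
    class_members={"<PERSON>": {"Chirac": 0.6, "Sarkozy": 0.4}},
)
lm.add_proper_name("Hollande", "<PERSON>")   # new name found in a related document
print(lm.logprob("president", "Hollande"))

The point of the sketch is only that the n-gram probabilities over class tags stay fixed; just the in-class distribution is renormalised when a name detected in related documents is injected.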

Required skills

A PhD in Natural Language Processing (NLP), familiarity with automatic speech recognition tools, a background in statistics, and programming skills (C and Perl).

Post-doc duration

12 months, starting during 2014 (there is some flexibility)

Localization and contacts

Loria laboratory, Speech team, Nancy, France

Irina.illina@loria.fr, dominique.fohr@loria.fr

Candidates should email a letter of application and a detailed CV including a list of publications and a copy of their diploma.




6-23(2014-10-02) PhD position in Multimedia indexing at Eurecom, Sophia Antipolis, France

PhD position in Multimedia indexing at Eurecom, Sophia Antipolis, France

http://www.eurecom.fr/en/content/multimedia-indexing

DESCRIPTION

This thesis is part of a collaborative project to analyze the multimedia information
that is published on the Internet about cultural festivals, either by professionals
or by the general public. These data are diverse (text, images, videos) and published on various
sources (Twitter, blogs, forums, catalogs). The project aims at analyzing and structuring
this information in order to better understand the public and its cultural practices, and
at recombining it to build synthetic views of these collections. This thesis will focus
on video analysis and multimodal fusion.

This thesis will tackle two problems:
- to develop techniques for the automatic analysis of video content, so that the collected
data may be categorized into predefined categories. This component will be used to structure
the collections and better understand their content and their evolution.
- to study mechanisms to recombine the multimedia content, and build synthetic views of the
collections. Several strategies will be studied, depending on whether these views are intended
for professional users or for the general public.

The research will focus in particular on mechanisms to automatically construct semantic classifiers,
and on fusion techniques for the results of these classifiers. The recombination aspects will
involve methods for the selection of important segments, followed by an assembly strategy according
to the intended objective. Specific attention will be paid to evaluation techniques that will
make it possible to measure the performance of the different approaches.
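
(Purely as an illustration of the late-fusion idea mentioned above, and not code from the project: per-modality classifier scores could be combined with a simple weighted average. Modality names, weights and scores are invented.)

from typing import Dict

def late_fusion(scores_by_modality: Dict[str, Dict[str, float]],
                weights: Dict[str, float]) -> Dict[str, float]:
    """Combine per-modality classifier scores (category -> score in [0, 1])
    into a single score per category using a weighted average."""
    fused: Dict[str, float] = {}
    total_w = sum(weights.values())
    for modality, scores in scores_by_modality.items():
        w = weights.get(modality, 0.0)
        for category, s in scores.items():
            fused[category] = fused.get(category, 0.0) + w * s
    return {c: v / total_w for c, v in fused.items()}

# Hypothetical scores for one video from a festival collection.
fused = late_fusion(
    {"visual": {"concert": 0.7, "street_parade": 0.2},
     "audio":  {"concert": 0.9, "street_parade": 0.4},
     "text":   {"concert": 0.3, "street_parade": 0.8}},
    weights={"visual": 0.5, "audio": 0.3, "text": 0.2},
)
print(max(fused, key=fused.get), fused)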

APPLICATION

PhD applicants are expected to have a Master's degree, with honors. This research
requires strong knowledge of signal processing and machine learning, as well as
good programming experience in Matlab and C/C++.
English is mandatory; French is a plus, but not required.

Candidates are invited to submit a resume, transcripts of grades for at least the last two years,
3 references, and a statement of motivation for the position.
Applications should be sent to the Eurecom secretariat, secretariat@eurecom.fr, with the subject
MM_BM_PhD_REEDIT_Sept2014

EURECOM

EURECOM is a leading teaching and research institution in the fields of information and communication
technologies (ICT). EURECOM is organized as a consortium combining 7 European universities
and 9 international industrial partners, with the Institut Mines-Telecom as a founding partner.
Our 3 fundamental missions are:
- Research: focused on Networking and Security, Multimedia Communications and Mobile Communications.
- High-level education: with graduate and postgraduate courses in communication systems, plus
three Master of Science programs entirely dedicated to foreign students.
- Doctoral program: in cooperation with several doctoral schools, supported by our research collaborations
with various partners, both industrial and academic, and funded by various sources, including national
and European programs.


6-24(2014-10-05) Experienced researcher in the field of Expressive and/or Multimodal Text-to-Speech Synthesis, Greece

Experienced researcher in the field of Expressive and/or Multimodal Text-to-Speech Synthesis

As part of its commitment to continuously reinforce its excellence and strengthen its capacities in language technologies, the Institute of Language and Speech Processing (ILSP – http://www.ilsp.gr/en) of the 'Athena' Research and Innovation Center (http://www.athena-innovation.gr/en.html) announces a position for experienced researchers in the area of:

Expressive and/or Multimodal Text-to-Speech Synthesis

The position is opened in the context of the LangTERRA project (www.langterra.eu) which is co-funded by the Seventh Framework Programme of the European Union (Grant Agreement No. 285924 FP7-REGPOT-2011-1). LangTERRA represents ILSP’s strong commitment to continuously reinforcing its excellence and strengthening its capacities and potential to excel at a European level in language technologies and related fields.

The candidates are expected to have a strong background and experience in one or more of the indicative topics in the list below:

- Speech synthesis, analysis and feature extraction;

- Capturing, analysing, modelling and synthesizing natural affective communication patterns, with emphasis on reproducing them in the context of speech synthesis;

- Applications that involve handling real affective speech, such as dialog systems.

The selected experienced researchers will strengthen ILSP's research capacity by working as part of the respective ILSP team and bringing additional experience and know-how in this field. They are also expected to engage in networking with prominent research organizations throughout Europe, developing new collaborations and initiating proposals for funded research.

The required qualifications are:

- a PhD in a field closely related to the position;

- at least four years of full-time-equivalent recent research experience with in-depth involvement in R&D projects;

- a strong academic profile with high-quality publications in international conferences and journals; and

- full professional proficiency in English.

The recruited experienced researchers will be offered a contract with ILSP. The duration of the contract will be 6 months and may be extended depending on the availability of the necessary resources. The indicative monthly rate for the position will be 4,000 Euros, depending on the status, qualifications and experience of the selected candidate.

The submitted applications must include:

- A cover letter describing the special qualities of the applicant and her/his reasons for applying for the position;

- A full curriculum vitae (CV), including a list of publications;

- An electronic copy of the PhD degree; and

- At least two recommendation letters.

The applicants should be available for a Skype interview. ILSP may contact the persons providing the recommendation letters. Upon request, the applicant should also be able to provide copies of relevant diplomas and transcripts of academic records. The diplomas should be in English or Greek; otherwise, a formal translation into one of these languages should be provided.

The closing date for applications is October 24th, 2014; the position will remain open until filled.

 

Submission:

The applications must be submitted electronically through email to both addresses below.

To: spy@ilsp.gr; vpana@ilsp.gr

The subject of the email should indicate: 'Application for LangTERRA – Expressive and/or Multimodal Text-to-Speech Synthesis'

 

Enquiries:

All enquiries related to the positions should be addressed to:

Dr. Spyros RAPTIS,
Email: spy@ilsp.gr,
Tel. +30 210 6875 -407, -300

 

More information is available at: http://www.ilsp.gr/el/news/75-jobs2/261-text2speech

 


6-25(2014-10-06) Two positions at M*Modal, Pittsburg, PA, USA

Speech Recognition Researcher

Location:  Pittsburgh, PA

Available: Immediately

 

About M*Modal:

M*Modal is a fast-moving speech technology and natural language understanding company, focused on making health care technology work better for the physicians and hospitals who depend on it every day. From speech-enabled interfaces and imaging solutions to computer-assisted physician documentation and natural language analytics, M*Modal is changing what technology can do in healthcare.

 

Position Summary:

The prospective candidate should have a working knowledge of both Speech Recognition and Computer Algorithms and be able to learn about new technologies from original publications. 
As a member of our Speech R&D team composed of Software Engineers and Speech Recognition Researchers, you will help define the algorithms and architecture used in our next generation of products.

 

Essential Functions:

The incumbent will be expected to use his/her programming skills to work with our team on research, technology development, and applications in speech recognition.

 

We offer work with:

  • Systems that make a real difference in the lives of patients and doctors

  • Language models, acoustic models, front-ends, decoders, etc., based on our own state-of-the-art modular recognition toolkit

  • Training data per speaker varying from seconds to hundreds of hours – both transcribed and unsupervised.

 

Qualifications:

  • MS or PhD in Electrical Engineering, Computer Science or related field

  • Top-level awareness of all aspects of large vocabulary speech recognition

  • In-depth understanding of several components of state-of-the-art speech recognition

  • C++, Java, Perl, or Python programming

  • Experience with both Linux and Windows

  • Experience participating in large software development projects.

  • Experience with large-scale LVCSR projects or evaluations is a definite plus

  • Enthusiasm to work directly on a real production system

 

 

If you want to be part of a thriving, innovative organization that fosters great talent, please submit your resume and salary requirements by email to anthony.bucci@mmodal.com.

 

 

 

Language Developer

Location:  Pittsburgh, PA

Available:  Immediately

 

 

 

About Us:

M*Modal is a fast-moving speech technology and natural language understanding company, focused on making health care technology work better for the physicians and hospitals who depend on it every day. From speech-enabled interfaces and imaging solutions to computer-assisted physician documentation and natural language analytics, M*Modal is changing what technology can do in healthcare.

 

Position Summary:

The prospective candidate would help to improve our speech recognition systems based on our own state-of-the-art recognizer toolkit. This involves processing text for building language models for different applications, experimenting with different types of language models, training locale-optimized acoustic models, improving pronunciation dictionaries, and testing, tuning and troubleshooting speech recognition systems. As we work with very large amounts of data, data processing is run across a Linux cluster.

 

Qualifications:

  • BS or MS degree in Computer Science or related field

  • Background in related specialty, such as linguistics, machine learning or statistics

  • Programming skills, such as Java or Python

  • Working knowledge of Linux and Windows environments

  • Working knowledge of regular expressions

  • Prior experience with automatic speech recognition systems is of course a plus

 

 

If you want to be part of a thriving, innovative organization that fosters great talent, please submit your resume and salary requirements by email to anthony.bucci@mmodal.com.

 


6-26(2014-10-08) NLP engineer at INSERM (CépiDc), Val-de-Marne, France

 

The CépiDc is located at the Kremlin-Bicêtre hospital (Val-de-Marne). Its main missions are to produce the national cause-of-death statistics, to disseminate them, to assist users and to carry out research on these data.

The CépiDc is a WHO collaborating centre for the Family of International Classifications (FIC) in the French language.

The Inserm Centre for Epidemiology on the Medical Causes of Death (CépiDc) is recruiting:

An engineer in natural language processing (NLP)

Job description

Context:

The production of the medical cause-of-death statistics is based on the reception of nearly 550,000 death certificates per year, of which about 6% are transmitted electronically (via www.certdc.inserm.fr). This proportion is expected to increase significantly in the near future.

Paper and electronic certificates share the same structured format, in accordance with the model recommended by the WHO. Although the structure of the certificate encourages physicians to separate nosological entities (diseases, morbid conditions or injuries), the written text is relatively free and in most cases requires automatic standardization. This standardization aims to properly separate the nosological entities, to reconstruct their causal order and to correct spelling mistakes. After standardization, a code from the International Classification of Diseases (ICD) is assigned to each nosological entity using an index (currently containing about 160,000 entries).

While the text of paper certificates is manually keyed in and standardized by a company external to the department, the text of electronic certificates is only processed with simple syntactic rules, which makes substantial manual processing of the text necessary before it can be handled by Iris.
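
(As an illustration only, and not the CépiDc production pipeline: the standardization-then-coding step described above could be approximated by a normalization pass followed by a lookup in a term-to-ICD index. The index entries and codes below are toy data.)

import re
import unicodedata

# Toy term -> ICD-10 index; the real CépiDc index has around 160,000 entries.
ICD_INDEX = {
    "infarctus du myocarde": "I21",
    "insuffisance cardiaque": "I50",
    "diabete de type 2": "E11",
}

def normalise(text: str) -> str:
    """Lowercase, strip accents and drop stray punctuation."""
    text = unicodedata.normalize("NFD", text.lower())
    text = "".join(c for c in text if unicodedata.category(c) != "Mn")
    return re.sub(r"[^a-z0-9 ]+", " ", text).strip()

def code_line(certificate_line: str) -> list:
    """Split a certificate line into candidate entities and look each one up."""
    entities = re.split(r",|;", certificate_line)
    results = []
    for entity in entities:
        key = normalise(entity)
        results.append((entity.strip(), ICD_INDEX.get(key, "UNCODED")))
    return results

print(code_line("Infarctus du myocarde; diabète de type 2"))
# [('Infarctus du myocarde', 'I21'), ('diabète de type 2', 'E11')]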

Missions

Within the framework of the production of the medical cause-of-death database, the main missions of the successful candidate will be:

- monitoring the quality of death certificate data entry,

- automating the processing of the medical text in order to speed it up and improve its quality,

- contributing to health surveillance and alerting.

Activities

- Overseeing the outsourced contract for death certificate data entry,

- Developing rules for the automatic processing of medical text with the tools already available in the department,

- Listing the required modifications that are not handled by the language processing rules offered by the existing tools,

- Contributing to a review of existing natural language processing methods that could be used to handle these modifications,

- Implementing and testing different natural language processing methods, maximizing the proportion of standardized text while minimizing the proportion of errors introduced by the processing,

- Updating the list of expressions in the index in order to minimize its size, facilitate its maintenance and thus make it transferable to other French-speaking countries.

Specific features of the position

- The data processed by the CépiDc are medical in nature and strictly confidential.

Candidate profile

Knowledge:

- Natural language processing (NLP) methods: formal grammars, formal syntax, automatic parsing,

- Programming languages (C, Perl, Python, etc.) and database management (SQL),

- Ability to read scientific English.

Skills:

- Developing and adapting NLP methods to a new problem,

- Evaluating the performance obtained by these methods,

- Writing methodological documentation (reports, articles),

- Managing relations with an external service provider.

Personal qualities:

- Ability to formalize text processing problems,

- Ability to work in a team with a variety of stakeholders (physicians, nosologists, statisticians, epidemiologists),

- Rigour,

- Initiative.

Proposed contract

Fixed-term contract: full time, 12 months, renewable

Salary: between €2,031 and €2,465 gross, according to experience and level of education, with reference to the Inserm salary scales

Start date: 1 December 2014

Education

Bachelor's to Master's level (BAC+3 to BAC+5) in computational linguistics, with a specialization in natural language processing (Licence, Master, engineering school, etc.).

Desired professional experience:

Beginners accepted

To apply, please send a CV and cover letter to: Grégoire Rey

Director of the Inserm CépiDc

gregoire.rey@inserm.fr

Tel : 01 49 59 18 63


6-27(2014-10-11) Disney Research - Open positions for postdoc candidates and internships for PhD students, Pittsburgh, PA, USA

Disney Research - Open positions for postdoc candidates and internships for PhD students

 

Disney Research, Pittsburgh, is announcing several positions for outstanding postdoctoral candidates and internships for PhD students in areas related to speech technology, multimodal conversational systems, interactive robotics, child-robot interaction, human motion modelling, tele-presence, and wireless computing. Candidates should have experience in building interactive systems and the ability to build robust demonstrations.

 

Positions are available immediately, with flexible starting dates before the beginning of 2015. A detailed description of the positions and about Disney Research more generally is given below. Interested candidates should send an email with an up-to-date CV and any questions to drpjobs-sp@disneyresearch.com. Please make sure to use subject line: DRP-SP-2014.

 

Postdoctoral positions:

Postdoctoral positions are for 2 years. Candidates should have an outstanding research record, have published in top-tier journals and international conferences, and have shown impact on the research in their field. Candidates must have excellent command of English and a strong collaborative and team-oriented attitude. Postdoctoral positions are for one or more of the following areas:

 

  • Multimodal spoken dialogue systems
  • Adult- and Child-Robot Interaction
  • Sensor fusion and multimodal signal processing
  • Embodied conversational agents and language-based character interaction

 

All candidates should have excellent programming skills in scripting languages and in one or more object-oriented programming language. Preferred candidates will also have strong applied machine learning skills, and experience in data collection and experiment design with human subjects.

 

Internships for PhD students:

A number of internships are available for international PhD students in one of the following areas. The positions are full-time for 4-6 months and are available immediately. Candidates should be enrolled in a PhD program in Computer Science, Electrical Engineering, or a related discipline. Applicants must have at least one publication in a top-tier conference, have excellent written and oral communication skills, be enthusiastic and self-motivated, and enjoy collaborative and team work.

We have opportunities for internships in a variety of fields including:

 

  • Nonverbal signal analysis and synthesis for human-like animated characters
  • Telepresence and Tele-Communication in human-humanoid interaction
  • Speech recognition applications for children
  • Multimodal and incremental dialogue systems
  • Kinematics, biomechanics, human motion modelling, and animatronics

 

Disney Research Labs (Pittsburgh, Boston, LA, and Zurich) provide a research foundation for the many units within the Walt Disney Company, including Walt Disney Feature Animation, Walt Disney Imagineering, Parks & Resorts, Walt Disney Studios Motion Pictures, Disney Interactive Media Group, ESPN, and Pixar Animation Studios.

 

Disney Research combines the best of academia and industry: we work on a broad range of commercially important challenges, we view publication as a principal mechanism for quality control, we encourage and help with the engagement with the global research community, and our research has applications that are experienced by millions of people.

 

Disney Research Pittsburgh is made up of a group of world-leading researchers working on a very wide range of interactive technologies. The lab is co-located with Carnegie Mellon University under the direction of Prof. Jessica Hodgins. Members of DRP are encouraged to interact with the established research community at CMU and with the business units in Los Angeles and Florida. As an active member of the research community, we support and assist with publications at top venues.

 

Disney Research provides very competitive compensation, benefits, and relocation assistance.

 

The Walt Disney Company is an Affirmative Action / Equal Opportunity Employer and encourages applications from members of under-represented groups.

 

http://www.disneyresearch.com


6-28(2014-10-11) 1 PhD/Postdoctoral position, Saarland University, Saarbruecken, Germany

1 PhD/Postdoctoral position in DFG-funded CRC “Information Density and Linguistic Encoding” (SFB 1102), Saarland University, Saarbruecken, Germany

Deadline for Applications:*Oct 31, 2014*

The DFG-funded CRC “Information Density and Linguistic Encoding” is pleased to invite applications for a PhD/post-doctoral position within the project 'B1: Information Density and Scientific Literacy in English', with a start date as soon as possible. If the position is not filled by October 31, later applications will be considered until it is filled.

SFB1102, B1.A: Postdoctoral researcher or PhD student in computational linguistics or computer science

A central methodological aspect of the project is to train and apply language models, as well as data mining and other machine learning techniques, to investigate the diachronic linguistic development of English scientific writing (from the 17th century to the present). The successful candidate will work on adapting, modifying and extending standard techniques from language modeling and machine learning to incorporate linguistically motivated and interpretable features.
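
(A rough, purely illustrative aside, not the project's actual methodology: one simple language-model-based operationalization of information density is the average per-word surprisal of a text under a model trained on a reference corpus. The smoothing scheme and toy sentences below are invented; a real study would use far larger corpora and richer models.)

import math
from collections import Counter

def unigram_surprisal(train_tokens, test_tokens, alpha=1.0):
    """Average per-word surprisal (in bits) of test_tokens under an
    add-alpha-smoothed unigram model trained on train_tokens."""
    counts = Counter(train_tokens)
    vocab = set(train_tokens) | set(test_tokens)
    total = sum(counts.values()) + alpha * len(vocab)
    surprisals = [-math.log2((counts[w] + alpha) / total) for w in test_tokens]
    return sum(surprisals) / len(surprisals)

# Toy comparison: two invented sentences scored against an invented reference.
reference = "the results of the experiment are shown in the table".split()
older_sample = "it hath pleased the almighty to reveal these wonders".split()
modern_sample = "the experiment results are shown in the following table".split()
print(unigram_surprisal(reference, older_sample))
print(unigram_surprisal(reference, modern_sample))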

Requirements: The successful candidate should have a PhD/Master's in Computer Science, Computational Linguistics, or a related discipline, with a strong background in language modeling and machine learning (especially data mining). Good programming skills and knowledge of linguistics are strong assets. Previous joint work with linguists is desirable. A good command of English is mandatory. Working knowledge of German is desirable.

The project is headed by Prof. Dr. Elke Teich, Dr. Noam Ordan and Dr. Hannah Kermes (http://fr46.uni-saarland.de/index.php?id=teich) and carried out in close collaboration with Prof. Dr. Dietrich Klakow:
http://www.lsv.uni-saarland.de/klakow.htm

For further information on the project, see
http://www.sfb1102.uni-saarland.de/b1.php

The appointments will be made on the German TV-L E13 scale (65% for PhD student, 100%
for postdoctoral researcher; see also
http://www.sfb1102.uni-saarland.de/jobs.php). Support for travel to conferences is also available. *Priority will be given to applications received by October 31, 2014*. Any inquiries concerning the post should be directed to the e-mail address below.
Complete applications  quoting “SFB1102, B1.A” in the subject line should include (1) a statement of research interests motivating why you are applying for this position, (2) a full CV with publications, (3) scans of transcripts and academic degree certificates, and (4) the names and e-mail addresses of three referees and should be e-mailed as a single PDF to:

Prof. Dr. Elke Teich
e-mail:
e.teich@mx.uni-saarland.de


6-29(2014-10-13) Lecturer* in English phonetics, phonology and morpho-phonology (Maître de conférences), Paris, FR

Job announcement: lecturer* in English phonetics, phonology and morpho-phonology (Maître de conférences)

* équivalent Lecturer (UK) / Assistant Professor (USA)

Paris Diderot University will open a lecturer position in English phonetics, phonology and morpho-phonology for September 2015, pending budgetary approval.

Candidates are expected to have a Ph.D. in English Linguistics with a specialization in phonetics or phonology. Candidates should either already hold a tenured lecturer position or apply for accreditation by the French Conseil National des Universités. Note that the deadline for the first step of application for CNU accreditation is October 23rd, 2014.

Candidates should have expertise in the areas of English phonetics and phonology, with research interests in the morphology-phonology or morphology-phonetics interface. Those whose record of research relates to one or more of the following areas are particularly encouraged to apply: second language acquisition, quantitative, statistical or computational methods in linguistic research, psycholinguistic experimentation, laboratory phonology, and/or sociolinguistics. A working knowledge of French is expected.

The successful candidate is expected to join the Centre de Linguistique Interlangue, de Lexicologie, de Linguistique Anglaise et de Corpus - Atelier de recherche sur la parole (CLILLAC-ARP) and the department of English:

http://www.univ-paris-diderot.fr/EtudesAnglophones/pg.php?bc=CHVR&page=LesactivitesdeCLILLACARP

http://www.univ-paris-diderot.fr/EtudesAnglophones/pg.php?bc=CHVU&page=ACCUEIL&g=m

 

Teaching responsibilities will include undergraduate courses in phonetics, phonetic variation and intonation and graduate courses in phonetics and in the areas of the appointee's specialization. Other duties include supervision of graduate students, involvement in curricular development and in the administration of the department.

 

This position is a permanent one with a civil servant status. Salary will be in accordance with the French state regulated public service salary scale.

 

Any potential candidate should bear in mind that the deadline for registration for the national CNU “qualification” is October 23rd, 2014 on the Galaxie website of the French 'Ministère de l'Enseignement Supérieur':

https://www.galaxie.enseignementsup-recherche.gouv.fr/ensup/cand_qualification.htm

The position, if opened, will start on September 1st 2015 and the websites for applications (French Ministry of Higher Education and Université Paris Diderot) will be open in spring 2015.

 

Web Address for Applications: http://www.univ-paris-diderot.fr

Contact Information:

Prof. Agnès Celle

agnes.celle@univ-paris-diderot.fr

+33 1 57 27 58 67


6-30(2014-10-15) Post-doctoral position (12 months) GIPSA-lab, Grenoble, France

Post-doctoral position (12 months), GIPSA-lab, Grenoble, France

Incremental text-to-speech synthesis for people with communication disorders

Duration, location and staff

The position is open from January 2015 (until filled) for a duration of 12 months. The work will take place at GIPSA-lab, Grenoble, France, in the context of the SpeakRightNow project.

Researchers involved: Thomas Hueber, Gérard Bailly, Laurent Girin and Mael Pouget (PhD student).

Context

The SpeakRightNow project aims at developing an incremental text-to-speech system (iTTS) in order to improve the user experience of people with communication disorders who use a TTS system in their daily life. Contrary to a conventional TTS system, an iTTS system aims at delivering the synthetic voice while the user is typing (possibly with a delay of one word), and thus before the full sentence is available. By reducing the latency between text input and speech output, iTTS should enhance the interactivity of communication. Besides, iTTS could be chained with incremental speech recognition systems in order to design highly responsive speech-to-speech conversion systems (for applications in automatic translation, silent speech interfaces, real-time enhancement of pathological voice, etc.).

The development of iTTS systems is an emerging research field. Previous work has mainly focused on the online estimation of the target prosody from partial (and uncertain) syntactic structure [1], and on the reactive generation of the synthetic waveform (as in [2] for HMM-based speech synthesis). The goal of this post-doctoral position is to propose original solutions to these questions. Depending on his/her background, the recruited researcher is expected to contribute to one or more of the following tasks:

1) Developing original approaches to the problem of incremental prosody estimation, using machine learning techniques for predicting missing syntactic information and driving prosodic models.

2) Implementing a prototype of an iTTS system on a mobile platform. The system will be adapted from the HMM-based TTS system currently developed at GIPSA-lab for the French language.

3) Evaluating the prototype in a clinical context, in collaboration with the medical partners of the SpeakRightNow project.
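
(Purely as an illustration of the one-word-delay behaviour described in the context above, and not the GIPSA-lab system: an incremental front end could release each word to the synthesizer once the following word has started. The synthesise_word callback is a hypothetical stand-in.)

from typing import Callable, List

def incremental_tts(tokens: List[str],
                    synthesise_word: Callable[[str, str], None]) -> None:
    """Release each word to the synthesiser with a one-word delay, so that a
    limited right context is available when prosody has to be decided."""
    buffer: List[str] = []
    for token in tokens:
        buffer.append(token)
        if len(buffer) == 2:                  # the previous word can now be spoken
            synthesise_word(buffer[0], buffer[1])
            buffer.pop(0)
    if buffer:                                # flush the last word at end of input
        synthesise_word(buffer[0], "<end>")

# Hypothetical synthesiser stub that just prints what it would say.
incremental_tts("je voudrais un rendez-vous demain".split(),
                lambda word, nxt: print(f"synthesise '{word}' (next word: {nxt})"))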

Keywords: assistive speech technology, incremental speech synthesis, prosody, machine learning, handicap.

Prerequisites: PhD degree in computer science, signal processing or machine learning. A background in HMM-based speech synthesis and/or development on iOS/Android platforms is a plus.

To apply: Applicants should email a CV, along with a brief letter outlining their research background, a list of two references and a copy of their two most important publications, to Thomas Hueber (thomas.hueber@gipsa-lab.fr).

References:

[1] Baumann, T., Schlangen, D., “Evaluating prosodic processing for incremental speech synthesis,” in Proceedings of Interspeech, Portland, USA, Sept. 2012.

[2] Astrinaki, M., d’Alessandro, N., Picart, B., Drugman, T., Dutoit, T., “Reactive and continuous control of HMM-based speech synthesis,” in Proceedings of the IEEE Workshop on Spoken Language Technology, Miami, USA, Dec. 2012.


6-31(2014-10-20) Lecturer* in English phonetics, phonology and morpho-phonology (Maître de conférences), Paris, France

Lecturer* in English phonetics, phonology and morpho-phonology (Maître de conférences)

* équivalent Lecturer (UK) / Assistant Professor (USA)

Paris Diderot University will open a lecturer position in English phonetics, phonology and morpho-phonology for September 2015, pending budgetary approval.

Candidates are expected to have a Ph.D. in English Linguistics with a specialization in phonetics or phonology. Candidates should either already hold a tenured lecturer position or apply for accreditation by the French Conseil National des Universités. Note that the deadline for the first step of application for CNU accreditation is October 23rd, 2014.

Candidates should have expertise in the areas of English phonetics and phonology, with research interests in the morphology-phonology or morphology-phonetics interface. Those whose record of research relates to one or more of the following areas are particularly encouraged to apply: second language acquisition, quantitative, statistical or computational methods in linguistic research, psycholinguistic experimentation, laboratory phonology, and/or sociolinguistics. A working knowledge of French is expected.

The successful candidate is expected to join the Centre de Linguistique Interlangue, de Lexicologie, de Linguistique Anglaise et de Corpus - Atelier de recherche sur la parole (CLILLAC-ARP) and the department of English:

http://www.univ-paris-diderot.fr/EtudesAnglophones/pg.php?bc=CHVR&page=LesactivitesdeCLILLACARP

http://www.univ-paris-diderot.fr/EtudesAnglophones/pg.php?bc=CHVU&page=ACCUEIL&g=m

 

Teaching responsibilities will include undergraduate courses in phonetics, phonetic variation and intonation and graduate courses in phonetics and in the areas of the appointee's specialization. Other duties include supervision of graduate students, involvement in curricular development and in the administration of the department.

 

This position is a permanent one with a civil servant status. Salary will be in accordance with the French state regulated public service salary scale.

 

Any potential candidate should bear in mind that the deadline for registration for the national CNU “qualification” is October 23rd, 2014 on the Galaxie website of the French 'Ministère de l'Enseignement Supérieur':

https://www.galaxie.enseignementsup-recherche.gouv.fr/ensup/cand_qualification.htm

The position, if opened, will start on September 1st 2015 and the websites for applications (French Ministry of Higher Education and Université Paris Diderot) will be open in spring 2015.

 

Web Address for Applications: http://www.univ-paris-diderot.fr

Contact Information:

Prof. Agnès Celle

agnes.celle@univ-paris-diderot.fr

+33 1 57 27 58 67


6-32(2014-10-25) Postdoc positions in multimodal video recommendation, Aix-Marseille University, France
Postdoc positions at Aix-Marseille University in multimodal video recommendation
 
Application deadline: 11/31/2014
 
Description:
 
The ADNVIDEO project, funded in the framework of A*MIDEX (http://amidex.univ-amu.fr/en/home), aims at extending multimodal analysis models. It focuses on jointly processing the audio stream, the speech transcript, the image flow, scenes, the characterization of text overlays and user feedback.
 
Using as a starting point the corpus, annotations and approaches developed during the REPERE challenge (http://defi-repere.fr), this project aims at going beyond indexing on a single modality by incorporating information retrieval methods, not only for broadcast television but more generally for video documents requiring multimodal scene analysis. The novelty here is to combine and correlate information from different sources to enhance the characterization of content. The application for this project relates to recommendation applied to videos: given a video, the system finds documents (text, image, and video) related to this video, either at the surface level or at the meaning level. In particular, the use case considered may have significant economic benefits in terms of technology transfer, regarding automatic ad targeting: automatically finding the most relevant advertising with respect to the content of a video.
 
Objectives:
 
The candidate will participate in the development of a prototype for video recommendation, leading to a technology transfer towards business:
 
* Extraction of multimodal low-level descriptors. These descriptors correspond to speech, image and sound / music.
* Extraction of multimodal high-level descriptors. These semantic-oriented descriptors are extracted from low-level descriptors.
* Aggregation of multimodal descriptors to form the multimodal footprint of the video.
* Matching videos and promotional material (see the illustrative sketch after this list).
* Validation of the video recommendation prototype.
* Participation in the scientific life of the lab, including paper publication.
 
The allocation of tasks will depend on the skills of the candidate.
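
(A purely illustrative sketch of the 'multimodal footprint' and matching steps listed above, not the project's implementation: per-modality descriptor vectors could be concatenated with modality weights and compared by cosine similarity. Dimensions, weights and values are invented.)

import math
from typing import Dict, List

def footprint(descriptors: Dict[str, List[float]],
              weights: Dict[str, float]) -> List[float]:
    """Concatenate per-modality descriptor vectors, scaling each modality."""
    vec: List[float] = []
    for modality in sorted(descriptors):
        w = weights.get(modality, 1.0)
        vec.extend(w * x for x in descriptors[modality])
    return vec

def cosine(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

video = footprint({"audio": [0.2, 0.8], "image": [0.5, 0.1], "text": [0.9, 0.3]},
                  weights={"audio": 1.0, "image": 1.0, "text": 2.0})
advert = footprint({"audio": [0.1, 0.7], "image": [0.4, 0.2], "text": [0.8, 0.4]},
                   weights={"audio": 1.0, "image": 1.0, "text": 2.0})
print(cosine(video, advert))   # higher score = better candidate for ad targeting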
 
Skills:
 
For this project, we are looking for two candidates with a PhD degree in the areas of information retrieval, natural language processing, machine learning or video analysis:
 
* Strong programming skills (C++, Java, Python, etc.).
* Desire to produce functioning end-to-end systems and life-scale live demos
* Scientific rigor
* Imagination
* Top-notch publications
* Excellent communication skills
* Enjoy teamwork
 
Candidates must presently work outside of France.
 
Location:
 
University of Aix-Marseille, LIF (http://www.lif.univ-mrs.fr) and LSIS laboratories (http://www.lsis.org) and the company Kalyzee (http://www.kalyzee.com).
 
Contact:
 
 
Candidates should email a letter of application, a detailed CV including a complete list of publications, and source code showcasing programming skills.