ISCA - International Speech Communication Association



ISCApad #241

Tuesday, July 10, 2018 by Chris Wellekens

6 Jobs
6-1(2018-01-09) Two postdoc positions at IDIAP, Martigny, Switzerland

===========================================
position 1:
===========================================
Name :    Multimodal people monitoring using sound (and vision)
Type :    Postdoc
Description :    The Idiap Research Institute together with Swiss Center for Electronics and Microtechnology (CSEM) invite applications for a post-doctoral position in research and development for multimodal people monitoring.
The position is funded for one year by Idiap, with a possible extension depending on the candidate's performance.

The successful candidate will work with Dr. Petr Motlicek in Idiap's Speech and Audio Processing group, engaged in world-class research in speech processing.
Exceptionally qualified candidates can also be considered for a longer-term Research Associate position.

Detailed description:
There has been strong interest in, and potential for, stand-alone smart sound devices deployed for security, surveillance, or emergency applications. Recent developments by CSEM in building-occupancy detection and monitoring using embedded vision have led to successful monitoring applications. This project will focus on combining visual and speech information on an embedded platform that provides industrial-grade vision sensing together with an acoustic front-end.

CSEM will provide expertise in embedded platforms, visual analysis and data fusion. The Idiap postdoctoral position will mainly focus on the speech-related aspects of the project, including speaker identification and keyword spotting, aiming to operate with limited resources.

We envisage three related research threads for this position:
1. Parameter reduction, in which we will apply sparsity and relevance constraints to train neural networks that function using as few parameters as possible.
2. Acoustic modeling sharing between different applications, in which we will build on the commonality between technologies for automatic speech recognition or keyword spotting and speaker recognition to create a single system with multiple capabilities.
3. Far-field speech processing, in which we will process signals recorded by a microphone array to substantially increase the SNR of the input signal.
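As a purely illustrative note on the third thread, delay-and-sum beamforming is one of the simplest ways to combine microphone-array channels so that speech adds coherently while diffuse noise averages out, raising the SNR. The Python/NumPy sketch below is generic and hypothetical (array geometry, sampling rate and steering delays are made up); it is not taken from the project.

import numpy as np

def delay_and_sum(signals, delays_s, sr):
    """Minimal delay-and-sum beamformer.

    signals  : (n_mics, n_samples) array of microphone signals
    delays_s : per-microphone steering delays in seconds (towards the source)
    sr       : sampling rate in Hz
    """
    n_mics, n_samples = signals.shape
    out = np.zeros(n_samples)
    for m in range(n_mics):
        shift = int(round(delays_s[m] * sr))  # integer-sample approximation of the delay
        out += np.roll(signals[m], -shift)    # advance each channel by its steering delay
    return out / n_mics                       # speech adds coherently, diffuse noise averages out

# Hypothetical usage: 4 microphones, 16 kHz audio, pre-computed steering delays
# enhanced = delay_and_sum(mic_signals, delays_s=[0.0, 1.2e-4, 2.4e-4, 3.6e-4], sr=16000)

A production system would rather use fractional delays and adaptive or neural beamformers, but the principle of coherent summation is the same.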

The successful candidate will work at Idiap in Martigny, but in close collaboration with CSEM's R&D team based in Switzerland.
The project is a unique combination of applied science and academic research expected to yield both reference designs and academic publications.

Profile:
Candidates should have either or both of:
1. A strong background in engineering, mathematics or a related discipline, along with the associated familiarity with modern distributed programming environments and languages such as C++, Python and Perl.
2. An exceptional academic record and a clear aptitude for creative (and independent) research in a related discipline.
In either case, familiarity with speech processing tools such as Kaldi and deep learning toolkits such as Torch will be a distinct advantage. Although a PhD is normally a prerequisite for a post-doctoral position, candidates without a PhD may be considered in exceptional cases.

Timescale:
The position is offered on a one-year basis with the possibility of renewal based on funding and performance.
The starting salary will be 80,000 CHF/year. Starting date could be immediate, but otherwise as soon as possible in 2018.


===========================================
position 2:
===========================================

Name :    Speech and Speaker recognition for HMI devices
Type :    Postdoc
Description :    The Idiap Research Institute together with a global industry partner, leader in Consumer Electronics, invite applications for two post-doctoral positions in speech and speaker recognition for HMI devices. The positions are funded for two years by the Swiss Commission for Technology and Innovation (CTI), enabling a collaboration between Idiap and an innovative product company.

The successful candidates will work with Dr. Philip N. Garner, and/or Dr. Petr Motlicek in Idiap's Speech and Audio Processing group, engaged in world-class research in speech processing. Exceptionally qualified candidates can also be considered for a longer-term Research Associate position.

Description

In recent years, the state of the art in speech and speaker recognition has been dominated by deep learning. Such technology is typically highly parametric; training can require significant CPU or GPU resources. The goal of the project is to investigate the application of the state of the art to the more limited resources of consumer-grade embedded systems which operate in combination with cloud services.

We envisage three related research threads:

1. Parameter reduction, in which we will apply sparsity and relevance constraints to train networks that function using as few parameters as possible.

2. Smart handover, in which we will assess the complexity of voice commands to optimise workload between local devices and cloud-based services.

3. System combination, in which we will build on the commonality between technologies for multilingual speech recognition, keyword spotting and speaker recognition to create a single system with multiple capabilities.
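As a rough, hypothetical illustration of the third thread (system combination), the PyTorch sketch below shares one acoustic encoder between a keyword-spotting head and a speaker-recognition head. All layer sizes, class counts and feature dimensions are invented for the example and do not describe the project's actual design.

import torch
import torch.nn as nn

class SharedSpeechModel(nn.Module):
    """Toy shared-encoder model: one acoustic front-end, two task heads."""
    def __init__(self, n_features=40, hidden=128, n_keywords=20, n_speakers=100):
        super().__init__()
        # Shared encoder over acoustic feature frames (e.g. log-mel filterbanks)
        self.encoder = nn.GRU(input_size=n_features, hidden_size=hidden, batch_first=True)
        # Task-specific heads reuse the same utterance embedding
        self.keyword_head = nn.Linear(hidden, n_keywords)
        self.speaker_head = nn.Linear(hidden, n_speakers)

    def forward(self, features):            # features: (batch, frames, n_features)
        _, last_hidden = self.encoder(features)
        utt = last_hidden.squeeze(0)        # (batch, hidden) utterance embedding
        return self.keyword_head(utt), self.speaker_head(utt)

# Hypothetical usage on a batch of 8 utterances of 200 feature frames
model = SharedSpeechModel()
kw_logits, spk_logits = model(torch.randn(8, 200, 40))

Sharing the encoder is one way to keep the total parameter count low enough for consumer-grade embedded hardware while still offering several capabilities.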

The successful candidates will work at Idiap in Martigny, but in close collaboration with the partner's R&D team based in Switzerland. The project is a unique combination of applied science and academic research expected to yield both reference designs and academic publications.

Profile

Candidates should have either or both of:
1. A strong background in engineering, mathematics or a related discipline, along with the associated familiarity with modern distributed programming environments and languages such as C++, Python and Perl.
2. An exceptional academic record and a clear aptitude for creative (and independent) research in a related discipline.
In either case, familiarity with speech processing tools such as Kaldi and deep learning toolkits such as Torch will be a distinct advantage. Although a PhD is normally a prerequisite for a post-doctoral position, candidates without a PhD may be considered in exceptional cases.

Timescale

All positions are offered on a one-year basis with the possibility of renewal based on funding and performance. The starting salary will be 80,000 CHF/year. Starting date could be immediate, but otherwise as soon as possible in 2018.


6-2(2018-01-10) Postdocs at Monash University, Melbourne, Australia

The Faculty of Information Technology (https://www.monash.edu/it) at Monash University in Melbourne Australia is establishing a new group in HCI and creative technologies. We invite accomplished and creative PhDs to apply for a 3-year postdoctoral fellowship in multimodal interfaces and behavior analytics. The selected candidate will join a rapidly expanding multidisciplinary group with expertise in areas such as mobile and multimodal-multisensor interfaces, agent-based conversational interfaces, brain-computer and adaptive interfaces, wearable and contextually-aware personalized interfaces, education and health interfaces, data analytics for predicting user cognition and health status, and other topics.

We are especially interested in adding faculty in these preferred areas:    

(1) Wearable, contextually-aware and personalized interfaces

(2) Mobile and multimodal-multisensor interfaces, including fusion-based ones 

(3) Data analytics for predicting user emotion, cognition, and health status

(4) Agent-based conversational dialogue interfaces 

(5) Brain-computer and adaptive interfaces


This position involves research on predicting user cognition and health status, based on analysis of different modalities (e.g., speech, writing, images, sensors) during naturally occurring activities. These analyses involve exploring predictive patterns at the signal, activity pattern, lexical, and/or transactional levels. The ideal candidate would be an initiating researcher with a strong publication record who is interested in pioneering emerging research areas. He/she would have an interest in developing new technologies to identify users' cognitive and health status, and in using this information to develop personalized and adaptive interfaces that promote learning, performance, and health.

Requirements:

•    PhD in computer science, engineering, information sciences, cognitive or linguistic sciences, or related field

•    Training in HCI, multimodal interfaces, data science and analytics, modeling human behavior & communication

•    Experience collecting and analyzing speech, images, handwriting, and/or other sensor data

•    Experience applying machine learning/deep learning, empirical/statistical, linguistic, or hybrid analysis methods

•    Interest in human cognition and educational technologies, and/or health and mental health technologies

•    Strong interpersonal, teamwork, communication and writing skills

•    Ability to work with diverse partners: domain experts (teachers, clinicians), industry, undergraduate/graduate students

•    Preference for candidates with 2-3 years of post-PhD research or work experience

HCI Group: The HCI group designs, builds, and evaluates state-of-the-art interface technologies. Our multidisciplinary interests span computer science and engineering, cognitive and learning sciences, communications, health, media design, and other topics. We are interested in applications such as health, education, communications, personal assistance, and digital arts. The HCI group has partnerships with CSIRO-Data61 and industry. The HCI area director is Dr. Sharon Oviatt, an ACM Fellow and international pioneer in human-centered, mobile, and multimodal interfaces (see https://www.monash.edu/it/our-research/graduate-research/scholarship-funded-phd-research-projects/projects/human-centred-mobile-and-multimodal-interfaces)

Monash is Australia's largest university, and ranks in the top 60 universities worldwide, with Computer and Information Systems rated in the top 70 worldwide (QS World University rankings 2018). In addition to growing rapidly in human-centered computing, software, and cyber-security, it includes data science and machine learning, artificial intelligence and robotics, computational biology, social computing, and basic computer science.

Experimental Labs & Design Spaces: The university has made recent strategic investments in facilities for prototyping innovative concepts, collecting and analyzing data, and displaying digital installations and interactive media, including sensiLab (supporting tangible, wearable, augmented and virtual reality, multimodal-multimedia, maker-space), Immersive Visualization platform and Analytics lab, the Centre for Data Science, and the ARC Centre of Excellence on Integrative Brain function (pioneering new multimodal imaging techniques for data exploration). The university currently is investing in HCI group facilities for prototyping and developing new mobile, multimodal and multisensor interfaces, analyzing human multimodal interaction (e.g., whole-body activity, speech), and predicting users' cognitive and health status.

Melbourne Area: Melbourne recently has been rated the #1 city worldwide for quality of life (see Economist & Guardian, http://www.economist.com/blogs/graphicdetail/2016/08/daily-chart-14 and https://www.theguardian.com/australia-news/2016/aug/18/melbourne-wins-worlds-most-liveable-city-award-sixth-year-in-a-row), with excellent education, healthcare, infrastructure, low crime, and exceptional cuisine, cultural activities, and creative design. The regional area is renowned for its dramatic coastline, extensive parks, exotic wildlife, and Yarra Valley wine region.

Position & Compensation: This position is full-time for 3 years, with competitive salary (Academic level B-6, $119,683 AUD) and benefits, including 17% superannuation retirement fund, health insurance options, relocation, and seed funds for equipment and travel. Start date is negotiable after April 1, 2018. For enquiries, contact Oviatt@incaadesigns.org.

To apply: submit an online application at http://careers.pageuppeople.com/513/cw/en/job/571150/research-fellow-multimodal-interfaces-behaviour-analytics  Required application materials include: (1) cover letter (indicating date of availability); (2) current CV with publication list, research and teaching interests, and 3 references with email/phone contact; (3) graduate transcripts; and (4) three representative publications. Monash has a Women in IT Program, and participates in the Athena Swan Charter to enhance gender equality. We welcome female, minority and international applicants.


6-3(2018-01-11) Postdoc position at IDIAP, Martigny, Switzerland

We have a new opening for a post-doctoral researcher at Idiap Research Institute.  It is
a joint position with the Swiss Center for Electronics and Microtechnology (CSEM), and
involves investigation of deep learning in the context of speech and image processing.
For a full description, please see the link:
 http://www.idiap.ch/education-and-jobs/job-10236

Idiap is located in French speaking Switzerland, although the lab hosts many
nationalities, and functions in English.  All positions offer quite generous salaries.

Several similar positions at PhD, post-doc and senior level are available at the
institute in general.
 http://www.idiap.ch/en/join-us/job-opportunities


6-4(2018-01-18) (SENIOR) SPEECH SCIENTIST at Voicebox, München, Germany

(SENIOR) SPEECH SCIENTIST                                          
Voicebox is an acknowledged pioneer in the voice technology and application industry. Our patented innovations create compelling AI voice interfaces of unparalleled flexibility and usability for application development. Trusted by many of the world's leading companies, we have established a leadership position in the automotive market. And because our capabilities were developed against our vision of a single, unifying interface across all connected devices, our continued growth is being driven by new markets, such as connected TVs and mobile computing. Our ability to capture this growth requires that we continue to add to our diverse team of talented professionals. Our opportunity is your opportunity.
As Speech Scientist you will work closely with the Cloud ASR team on cutting-edge conversational voice information retrieval systems.  Within this team of researchers and engineers you will improve existing products and develop brand new technologies. You will be responsible for the design, development and testing of large vocabulary speech application technologies and also engage in all aspects of research activities (e.g., writing proposals, conducting novel research, presenting and publishing your research results).
Typical work packages are:
• Training and adaptation of acoustic models (car, mobile, far-field) for a multitude of languages
• Development of statistical language models for various applications and languages (a minimal sketch follows this list)
• Design and development of new speech applications
• Tuning and maintenance of speech applications
• Adaptation of speech resources to certain customers' requirements
• Research, development and implementation of new algorithms in ASR
• Statistical analysis on large datasets (multiple terabytes of data)
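Purely for illustration of the language-modelling work package above, the sketch below trains a minimal count-based bigram model with add-one smoothing in plain Python; the toy corpus is hypothetical and has nothing to do with Voicebox data or tools.

from collections import Counter

def train_bigram_lm(sentences):
    """Count-based bigram language model with add-one (Laplace) smoothing."""
    unigrams, bigrams = Counter(), Counter()
    for sent in sentences:
        tokens = ["<s>"] + sent.split() + ["</s>"]
        unigrams.update(tokens)
        bigrams.update(zip(tokens, tokens[1:]))
    vocab_size = len(unigrams)

    def prob(word, prev):
        # P(word | prev) with add-one smoothing
        return (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)

    return prob

# Hypothetical toy corpus
prob = train_bigram_lm(["play some music", "play the news"])
print(prob("music", "some"))

Production language models are trained on far larger corpora with better smoothing or neural architectures, but the estimation principle is the same.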
Skills:
• Highly independent and capable of fulfilling multiple project commitments concurrently
• Passionate about analyzing data to solve problems and improve systems
• Good analytical and diagnostic skills, quick learning
• Coding skills in languages such as C/C++ or Java
• Knowledge of multiple natural languages is a strong plus
• Knowledge of digital signal processing
• Able to write technical specifications and requirements in English
• Good English communication skills
Experience:
• Prior experience training models for HMM/DNN-based recognizers (Kaldi/HTK)
• Understanding of ASR training/decoding processes
• Understanding of ASR front-end components
• Know-how in far-field and microphone array processing
• Experience in UNIX environments, strong in scripting languages such as Bash, Perl and Python
• Ph.D. or Master in Computer Science, Electrical Engineering, Mathematics, or a relevant field, or strong professional experience
 
VoiceBox Technologies Deutschland GmbH, Ramersdorfer Straße 1, 81669 München, Germany. Contact: michaelw@voicebox.com


6-5(2018-01-18) 3 permanent(indefinite tenure) faculty positions at Telecom ParisTech, Paris, France

Telecom ParisTech has three new permanent (indefinite tenure) faculty positions:

• Faculty position (Full Professor) in audio/speech/music signal processing
• Faculty position (Associate Professor) in deep learning applied to temporal data analysis
• Faculty position (Associate Professor) in deep learning for image processing

More information on:
http://www.tsi.telecom-paristech.fr/en/blog/2018/01/05/three-new-permanent-tenure-faculty-positions/

Please note the following important dates:

• February 28th, 2018: closing date
• Mid-April 2018: hearings of preselected candidates (tentative dates for hearings are April 11th and/or 13th)
• September 2018: tentative starting date


6-6(2018-01-20) Machine learning Software Engineer, Adobe Research - Speech Recognition, San Jose, CA,USA

Machine learning Software Engineer, Adobe Research - Speech Recognition
 
Description

Creative Intelligence Lab within Adobe Research plays a key role in creating next-generation applications and features in Adobe's flagship products, including Photoshop, Lightroom, Audition, and Acrobat. Creative Intelligence Lab is searching for a machine learning software engineer specializing in speech recognition. Responsibilities include working closely with researchers, engineers, user experience designers, and product managers to build prototypes that showcase new research technologies, and to help integrate those technologies into Adobe's products. This position will initially focus on building a speech recognition system for our creative assistant, and may expand to include additional areas of innovation, such as text to speech, NLP, HCI, machine learning, and dialog systems. We're looking for exceptional candidates with expertise in computer science or software engineering. For this position, we will give preference to candidates with experience in applied machine learning and speech recognition.

For successful candidates, nearly all of the following will be true:
• You have significant experience in building robust, complex software systems.
• You are excellent at collaborating with a team to get work done.
• You are comfortable both building prototypes from scratch and writing maintainable code inside large existing codebases.
• You have shipped software in a commercial environment (start-ups a plus) and can deal with last-minute bug fixes and schedule changes.
 
Requirements

• M.S. degree or higher in Computer Science or a related field.
• Significant experience developing speech recognition systems (acoustic models, language models) using popular machine learning and deep learning software libraries, such as Kaldi and/or Tensorflow.
• Ability to write efficient, clean, and reusable code.
• Strong communication and collaboration skills.
• Ability and willingness to learn new technologies quickly.

At Adobe, you will be immersed in an exceptional work environment that is recognized throughout the world on Best Companies lists. You will also be surrounded by colleagues who are committed to helping each other grow through our unique Check-In approach where ongoing feedback flows freely. If you're looking to make an impact, Adobe's the place for you. Discover what our employees are saying about their career experiences on the Adobe Life blog and explore the meaningful benefits we offer. Adobe is an equal opportunity employer. We welcome and encourage diversity in the workplace regardless of race, gender, sexual orientation, gender identity, disability or veteran status.
 Contact: Trung Bui (bui@adobe.com) or Walter Chang (wachang@adobe.com)


6-7(2018-01-25) 2018 PHD RESEARCH FELLOWSHIPS at University of Trento , Italy

2018 PHD RESEARCH FELLOWSHIPS ( ML/Dialogue/Language/Speech)
Location: University of Trento , Italy

You may have enjoyed reading about bots, artificial intelligence, machine learning, digital assistants, and systems that support doctors, teachers and customers and help people.
If so, you may want to consider taking a front-row seat and joining the research team that has been training intelligent machines and evaluating AI-based systems for more than two decades, collaborating with the best research labs in the world and deploying these systems in the real world.

Here is a sample of the projects ( http://sisl.disi.unitn.it/demo/ ) the Signals
and Interactive Systems Lab (University of Trento, Italy) has been leading:

-Natural Language Understanding systems for massive amount of human language data:
http://www.sensei-conversation.eu

-Amazon Alexa challenge on Conversational Systems:
http://sisl.disi.unitn.it/university-of-trento-is-selected-by-amazon-for-the-alexa-challenge/

-Designing AI personal agents for healthcare domain:
http://sisl.disi.unitn.it/pha/

We are looking for top-candidates for its funded PhD research fellowships.
Candidates should have background at least in one of the following areas:


- Speech Processing

- Natural Language Understanding

- Conversational Systems

- Machine Learning

Candidates will be working on research domains such as Conversational Agents,
Intelligent Systems, Speech/Text Document Mining and Summarization,
Human Behavior Understanding, Crowd Computing and AI-based systems for tutoring.


For more info on research and projects, visit the lab website at http://sisl.disi.unitn.it/

The SIS Lab is driven by an interdisciplinary approach to research, attracting researchers from disciplines such as Digital Signal Processing, Speech Processing, Computational Linguistics, Psychology, Neuroscience and Machine Learning.

The official language (research and teaching) of the department is English.

FELLOWSHIP

The gross amount of the fellowships (internship and PhD) is competitive, approximately 1,600 Euro/month.
Students may qualify for reduced rates for campus lodging, transportation and the cafeteria.

For more information about cost of living, campus, graduate education programs,
please visit the graduate school website at http://ict.unitn.it/

DEADLINES

Immediate openings with start date as early as March 2018.  Open until filled.

REQUIREMENTS

A strict requirement is at least a Master-level degree in Computer Science, Electrical Engineering, Computational Linguistics or similar or related disciplines. Students with other backgrounds (Physics, Applied Math) may apply as well. Background in at least one of the posted research areas is required. All applicants should have very good programming and math skills and be used to team work.

HOW TO APPLY

Interested applicants should send:
1) a CV,
2) a statement of research interest, and
3) three reference letters
to the following address:

Email: sisl-jobs@disi.unitn.it


For more info:

Signals and Interactive Systems Lab : http://sisl.disi.unitn.it/

PhD School : http://ict.unitn.it/

Department : http://disi.unitn.it/


Information Engineering and Computer Science Department (DISI)

DISI has a strong focus on cross-disciplinarity with professors from different
faculties of the University (Physical Science, Electrical Engineering, Economics,
Social Science, Cognitive Science, Computer Science) with international
background. DISI aims at exploiting the complementary experiences present in the
various research areas in order to develop innovative methods, technologies and
applications.

University of Trento

The University of Trento is consistently ranked as a premier Italian university.
See http://www.unitn.it/en/node/1636/mid/2573

University of Trento is an equal opportunity employer.


6-8(2018-01-25) PhD student in Robot-assisted Language Learning at KTH Royal Institute of Technology, Stockholm, Sweden
PhD student in Robot-assisted Language Learning at KTH Royal Institute of Technology, Stockholm, Sweden
Applications close January 31st, 2018.
 
 
Olov Engwall
Professor in Speech Communication

 


6-9(2018-01-26) Associate Professor (MCF) position at ENSIMAG, Grenoble, France
Associate Professor (Maître de Conférences) position at ENSIMAG.
 
 
 
School of affiliation: ENSIMAG
School website: http://ensimag.grenoble-inp.fr/
Teaching profile:
Ensimag is recruiting an associate professor (maître de conférences) in applied mathematics or computer science to develop its teaching in statistical learning, artificial intelligence, data visualisation, high-performance computing or big data. The application should demonstrate the candidate's interdisciplinary profile, their ability to take on responsibilities within the institution, and a substantial list of works or publications related to one or more branches of data science. Besides teaching data science (program synthesis from data, decision support), the recruited person will be expected to invest in the Ensimag core curriculum (1st year and about 75% of the 2nd-year tracks), which forms the foundation of our engineering students' training. They will be expected to get involved and take on responsibilities in programmes of the School such as the 'mastère big data' or the 'Data Science' master. In partnership with industry, the recruited person could supervise the organisation of challenges and hackathons in order to enrich the School's contacts in the field of artificial intelligence and big data. In collaboration with the teaching teams concerned, they will be expected to get involved in setting up project-based teaching and digitally supported training.
 
RESEARCH
Host laboratory: LIG / LJK
Laboratory website: http://www.liglab.fr/
Laboratory contacts: Eric Gaussier (eric.gaussier@imag.fr), LIG
Stéphane Labbé (stephane.labbe@imag.fr), LJK
Research profile:
 
The candidate will carry out research in the field of artificial intelligence or data science, and will show openness to the different possible approaches in this field. The preferred topics are learning on complex data (structured or unstructured), deep learning and neural networks, and in particular questions of optimisation, causality, generalisation capacity and their mathematical analysis. Among the applications of machine learning and deep learning, particular interest is given to signal and image processing, representation learning, learning with multimedia data and with language data for problems arising from natural language processing, the transparency of learning mechanisms, as well as applications in biology, health, human sciences, social networks, physics, the environment, etc.
The recruitment will strengthen the links between the LIG and the LJK in the fields of data science and machine learning. The two laboratories are located on the Saint Martin d'Hères campus and have active collaborations, in particular within the large-scale data and knowledge processing axis (AMA, GETALP, MRIM, SLIDE teams), the PERVASIVE and TYREX teams of the LIG, and within the Proba-Stat department (DAO, SVH, MISTIS, FIGAL teams) and the THOTH team of the LJK. Among the joint projects between the two laboratories, one can also mention prediction and classification problems with structured data of functional type, optimal transport for learning, and sparsity and regularisation problems for multi-task learning and their resolution by stochastic optimisation methods. The recruited person will demonstrate their ability to play an active role in academic (ANR, FUI, PFIA, EU, etc.) and industrial contractual projects on these very promising themes.
 
ADMINISTRATIVE ACTIVITIES
Specific features of the position or particular constraints:
Administrative activities related to the duties of a maître de conférences: responsibility for teaching units, for tracks or for year groups.
Expected competencies:
Knowledge: teaching of computer science, artificial intelligence and data science
Know-how: pedagogy and responsibilities within the School
Interpersonal skills: teamwork
 
Artificial intelligence, Data science, Big data, Machine learning
 

------------------------
Laurent Besacier
Professor at Univ. Grenoble Alpes (UGA)
Laboratoire d'Informatique de Grenoble (LIG)
Junior Member of the Institut Universitaire de France (IUF 2012-2017)
Head of the GETALP team at LIG
Director of the MSTII doctoral school (ED)
-------------------------
!! New contact details !!: LIG
Laboratoire d'Informatique de Grenoble
Bâtiment IMAG
700 avenue Centrale
Domaine Universitaire - 38401 St Martin d'Hères
For any matter concerning the ED MSTII, please use ed-mstii@univ-grenoble-alpes.fr
New phone: 0457421454
--------------------------
 

6-10(2018-01-27) Post-doctoral researcher at Idiap Research Institute, Martigny, Switzerland

Post-doctoral researcher at Idiap Research  Institute. 

It is a joint position with the Swiss Center for Electronics and
Microtechnology (CSEM), and involves investigation of deep learning in the
context of speech and image processing.
For a full description, please see the link:
 http://www.idiap.ch/education-and-jobs/job-10236

Idiap is located in French speaking Switzerland, although the lab hosts many
nationalities, and functions in English.  All positions offer quite generous
salaries.

Several similar positions at PhD, post-doc and senior level are available at
the institute in general.
 http://www.idiap.ch/en/join-us/job-opportunities


6-11(2018-01-27) Post Doctoral Position (12 months) at INRIA Nancy, France

Post Doctoral Position (12 months)

Natural language processing: automatic speech recognition system using deep neural networks without out-of-vocabulary words

_______________________________________

- Location:INRIA Nancy Grand Est research center, France

 

- Research theme: PERCEPTION, COGNITION, INTERACTION

 

- Project-team: Multispeech

 

- Scientific Context:

 

More and more audio/video content appears on the Internet every day. About 300 hours of multimedia are uploaded per minute. In these multimedia sources, audio data represents a very important part. If these documents are not transcribed, automatic content retrieval is difficult or impossible. The classical approach to spoken content retrieval from audio documents is automatic speech recognition followed by text retrieval.

 

An automatic speech recognition system (ASR) uses a lexicon containing the most frequent words of the language, and only the words of the lexicon can be recognized by the system. New Proper Names (PNs) appear constantly, requiring dynamic updates of the lexicons used by the ASR. These PNs evolve over time and no vocabulary will ever contain all existing PNs. When a person searches for a document, proper names are used in the query. If these PNs have not been recognized, the document cannot be found. These missing PNs can be very important for the understanding of the document.

 

In this study, we will focus on the problem of proper names in automatic recognition systems. The problem is how to model relevant proper names for the audio document we want to transcribe.

 

- Missions:

 

We assume that in an audio document to transcribe there are missing proper names, i.e. proper names that are pronounced in the audio document but are not in the lexicon of the automatic speech recognition system; these proper names cannot be recognized (out-of-vocabulary proper names, OOV PNs). The purpose of this work is to design a methodology to find and model a list of relevant OOV PNs that correspond to an audio document.

 

Assuming that we have an approximate transcription of the audio document and a huge text corpus extracted from the Internet, several methodologies could be studied:

  • From the approximate OOV pronunciation in the transcription, generate the possible spellings of the word (phoneme-to-character conversion) and search for this word in the text corpus (a minimal sketch follows this list).

  • A deep neural network can be designed to predict OOV proper names and their pronunciations with the training objective to maximize the retrieval of relevant OOV proper names.
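To make the first bullet above concrete, the sketch below shows one very naive way of turning an OOV phoneme sequence into candidate spellings and checking them against a text corpus. It is a toy Python illustration only: the phoneme-to-letter table, the pronunciation and the corpus are hypothetical and far smaller than anything a real system would use.

from itertools import product

# Hypothetical phoneme-to-letter alternatives (a real table would be far larger)
P2G = {
    "k": ["k", "c", "ch"],
    "a": ["a"],
    "l": ["l", "ll"],
    "i": ["i", "y"],
}

def candidate_spellings(phonemes):
    """Generate every spelling allowed by the phoneme-to-grapheme table."""
    options = [P2G.get(p, [p]) for p in phonemes]
    return {"".join(letters) for letters in product(*options)}

def find_in_corpus(phonemes, corpus_words):
    """Keep only the candidate spellings that actually occur in the corpus."""
    return candidate_spellings(phonemes) & set(corpus_words)

# Toy usage with a hypothetical OOV pronunciation /k a l i/ and a tiny word list
print(find_in_corpus(["k", "a", "l", "i"], {"kali", "cally", "paris"}))

A realistic approach would use weighted phoneme-to-grapheme conversion and approximate matching rather than exact lookup, but the overall idea of generating spellings and validating them against web text is the one described above.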

 

The proposed approaches will be validated using the ASR developed in our team.

 

Keywords: deep neural networks, automatic speech recognition, lexicon, out-of-vocabulary words.

 

- Bibliography

[Mikolov2013] Mikolov, T., Chen, K., Corrado, G. and Dean, J. "Efficient estimation of word representations in vector space", Workshop at ICLR, 2013.

[Deng2013] Deng, L., Li, J., Huang, J.-T., Yao, K., Yu, D., Seide, F., Seltzer, M., Zweig, G., He, X., Williams, J., Gong, Y. and Acero, A. "Recent advances in deep learning for speech research at Microsoft", Proceedings of ICASSP, 2013.

[Sheikh2016] Sheikh, I., Illina, I., Fohr, D., Linarès, G. "Improved Neural Bag-of-Words Model to Retrieve Out-of-Vocabulary Words in Speech Recognition", Interspeech, 2016.

[Li2017] Li, J., Ye, G., Zhao, R., Droppo, J., Gong, Y. "Acoustic-to-Word Model without OOV", ASRU, 2017.

 

 

- Skills and profile: PhD in computer science, background in statistics and natural language processing, experience with deep learning tools (keras, kaldi, etc.) and programming skills (Perl, Python).

- Additional information:

 

Supervision and contact: Irina Illina, LORIA/INRIA (illina@loria.fr), Dominique Fohr INRIA/LORIA (dominique.fohr@loria.fr) https://members.loria.fr/IIllina/, https://members.loria.fr/DFohr/

 

Additional links : Ecole Doctorale IAEM Lorraine

 

Deadline to apply: June 6th

Selection results: end of June

 

Duration: 12 months.

Starting date: between Nov. 1st 2018 and Jan. 1st 2019
Salary: about 2,115 euros net, medical insurance included

 

The candidates must have defended their PhD later than Sept. 1st 2016 and before the end of 2018. 

The candidates are required to provide the following documents in a single pdf or ZIP file: 

  • CV including a description of your research activities (2 pages max) and a short description of what you consider to be your best contributions and why (1 page max and 3 contributions max); the contributions could be theoretical or  practical. Web links to the contributions should be provided. Include also a brief description of your scientific and career projects, and your scientific positioning regarding the proposed subject.

  • The report(s) from your PhD external reviewer(s), if applicable.

  • If you haven't defended yet, the list of expected members of your PhD committee (if known) and the expected date of defence.

In addition, at least one recommendation letter from the PhD advisor should be sent directly by their author(s) to the prospective postdoc advisor.

 

Help and benefits:

 

  • Possibility of free French courses

  • Help for finding housing

  • Help for the resident card procedure and for husband/wife visa


6-12(2018-01-27) PhD grant Natural language processing: adding new words to a speech recognition system using Deep Neural Networks, INRIA/LORIA, Nancy, France

Natural language processing: adding new words to a speech recognition system using Deep Neural Networks

 

- Location: INRIA/LORIA Nancy Grand Est research center France

 

- Research theme:Perception, Cognition, Interaction

 

- Project-team: Multispeech

 

- Scientific Context:

 

Voice is seen as the next big field for computer interaction. The research company Gartner reckons that by 2018, 30% of all interactions with devices will be voice-based: people can speak up to four times faster than they can type, and the technology behind voice interaction is improving all the time.

As of October 2017, Amazon Echo is present in about 4% of American households. Voice assistants are proliferating in smartphones too: Apple's Siri handles over 2 billion commands a week, and 20% of Google searches on Android-powered handsets in America are done by voice input.

Proper nouns (PNs) play a particular role: they are often important for understanding a message and can vary enormously. For example, a voice assistant should know the names of all your friends; a search engine should know the names of all famous people and places, names of museums, etc.

An automatic speech recognition system uses a lexicon containing the most frequent words of the language, and only the words of the lexicon can be recognized by the system. It is impossible to add all possible proper names because there are millions of proper names and new ones appear every day. A competitive solution is to dynamically add new PNs into the ASR system. The idea is to add only relevant proper names: for instance, if we want to transcribe a video document about football results, we should add the names of famous football players and not politicians.

In this study, we will focus on the problem of proper names in automatic recognition systems. The problem is to find relevant proper names for the audio document we want to transcribe. To select the relevant proper names, we propose to use an artificial neural network.

 

- Missions:

 

We assume that in an audio document to transcribe we have missing proper names, i.e. proper names that are pronounced in the audio document but that are not in the lexicon of the automatic speech recognition system; these proper names cannot be recognized (out-of-vocabulary proper names, OOV PNs).

The goal of this PhD thesis is to find a list of relevant OOV PNs that correspond to an audio document and to integrate them into the speech recognition system. We will use a deep neural network to find relevant OOV PNs. The input of the DNN will be the approximate transcription of the audio document and the output will be the list of relevant OOV PNs with their probabilities. The retrieved proper names will be added to the lexicon and a new recognition of the audio document will be performed.
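Purely as an illustrative sketch of such a retrieval network (not the thesis design itself), the Keras model below maps a bag-of-words vector of the approximate transcription to independent probabilities over a fixed list of candidate OOV proper names. The vocabulary size, candidate-list size and layer sizes are hypothetical.

from tensorflow import keras
from tensorflow.keras import layers

VOCAB_SIZE = 5000      # hypothetical size of the bag-of-words input vector
N_CANDIDATE_PNS = 300  # hypothetical number of candidate OOV proper names

# Input: bag-of-words vector of the approximate transcription
# Output: one independent probability per candidate proper name (multi-label retrieval)
model = keras.Sequential([
    layers.Input(shape=(VOCAB_SIZE,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(N_CANDIDATE_PNS, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()

# Hypothetical use after training on (document vector, binary relevance) pairs:
# probs = model.predict(doc_vectors)   # shape (n_docs, N_CANDIDATE_PNS)
# selected = probs[0] > 0.5            # proper names to add to the lexicon

In the actual thesis, the document representation, the candidate list and the training objective are all open research choices; the sketch only fixes the input/output shape described in the paragraph above.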

 

During the thesis, the student will investigate methodologies based on deep neural networks [Deng2013]. The candidate will study different structures of DNNs and different representations of documents [Mikolov2013]. The student will validate the proposed approaches using the automatic transcription system for radio broadcasts developed in our team.

 

- Bibliography:

 

[Mikolov2013] Mikolov, T., Chen, K., Corrado, G. and Dean, J. "Efficient estimation of word representations in vector space", Workshop at ICLR, 2013.

[Deng2013] Deng, L., Li, J., Huang, J.-T., Yao, K., Yu, D., Seide, F., Seltzer, M., Zweig, G., He, X., Williams, J., Gong, Y. and Acero, A. "Recent advances in deep learning for speech research at Microsoft", Proceedings of ICASSP, 2013.

[Sheikh2016] Sheikh, I., Illina, I., Fohr, D., Linarès, G. "Improved Neural Bag-of-Words Model to Retrieve Out-of-Vocabulary Words in Speech Recognition", Interspeech, 2016.

 

 

 

- Skills and profile: Master in computer science, background in statistics and natural language processing, experience with deep learning tools (keras, kaldi, etc.) and programming skills (Perl, Python).

- Additional information:

 

Supervision and contact: Irina Illina, LORIA/INRIA (illina@loria.fr), Dominique Fohr INRIA/LORIA (dominique.fohr@loria.fr) https://members.loria.fr/IIllina/, https://members.loria.fr/DFohr/

Additional links: Ecole Doctorale IAEM Lorraine

 

Duration: 3 years

Starting date: between Oct. 1st 2018 and Jan. 1st 2019

 

Deadline to apply: May 1st, 2018

 

The candidates are required to provide the following documents in a single pdf or ZIP file: 

  • CV

  • A cover/motivation letter describing their interest in the topic 

  • Degree certificates and transcripts for Bachelor and Master (or the last 5 years)

  • Master thesis (or equivalent) if it is already completed, or a description of the work in progress, otherwise

  • The publications (or web links) of the candidate, if any (it is not expected that they have any)

In addition, one recommendation letter from the person who supervised (or supervises) the Master thesis (or research project or internship) should be sent directly by that person to the prospective PhD advisor.


6-13(2018-02-01) Junior Linguist (French), Paris, France

Junior Linguist [French]

 

Job Title:

Junior Linguist [French]

Linguistic Field(s):

Phonetics, Phonology, Morphology, Semantics, Syntax, Lexicography, NLP

Location:

Paris, France

Job description:

The role of the Junior Linguist is to annotate and review linguistic data in French. The Junior Linguist will also contribute to a number of other tasks to improve natural language processing. The tasks include:

  • Providing phonetic/phonemic transcription of lexicon entries

  • Analyzing acoustic data to evaluate speech synthesis

  • Annotating and reviewing linguistic data

  • Labeling text for disambiguation, expansion, and text normalization

  • Annotating lexicon entries according to guidelines

  • Evaluating current system outputs

  • Deriving NLP data for new and on-going projects

  • Be able to work independently with confidence and little oversight

Minimum Requirements:

  • Native speaker of French and fluent in English

  • Extensive knowledge of phonetic/phonemic transcriptions

  • Familiarity with TTS tools and techniques

  • Experience in annotation work

  • Knowledge of phonetics, phonology, semantics, syntax, morphology or lexicography

  • Excellent oral and written communication skills

  • Attention to detail and good organizational skills

Desired Skills:

  • Degree in Linguistics or Computational Linguistics or Speech processing

  • Ability to quickly grasp technical concepts; learn in-house tools

  • Keen interest in technology and computer-literate

  • Listening Skills

  • Fast and Accurate Keyboard Typing Skills

  • Familiarity with Transcription Software

  • Editing, Grammar Check and Proofing Skills

  • Research Skills

 

CV + motivation letter : maroussia.houimli@adeccooutsourcing.fr


6-14(2018-02-16) Postdoctoral Research Scientist: Computational Linguistics, Rochester, NY, USA

Postdoctoral Research Scientist: Computational Linguistics

We invite applications for an interdisciplinary postdoctoral position with specialization in computational linguistics and/or technical or scientific methods in language science at Rochester Institute of Technology (RIT), in Rochester, NY. This is a one-year position with opportunity for renewal. The applicant should demonstrate a fit with our commitment to collaborate with colleagues across the university on research initiatives in Personalized Healthcare Technology. In addition to engaging in research projects, the right candidate will be able to teach a total of two courses per year - one course each in the College of Liberal Arts and the Golisano College of Computing and Information Sciences at RIT. The teaching assignment may be Computer Science Principles, Introduction to Language Science, Language Technology, Introduction to Natural Language Processing, Science and Analytics of Speech (acoustic and experimental phonetics), Spoken Language Processing (automatic speech recognition and text-to-speech synthesis), Seminar in Computational Linguistics, or another course depending on background.

 

Required Minimum Qualifications

- Ph.D., with training in Computational Linguistics, Linguistics, or an allied field

- Advanced graduate coursework in computational linguistics (natural language processing or speech processing), linguistics, or language science broadly

- Publication record and plan for research and grant seeking activities

- Ability to contribute in meaningful ways to our commitment to cultural diversity, pluralism, and individual differences

 

Required Application Documents

Cover Letter, Curriculum Vitae or Resume, List of References, Research Statement

 

How To Apply

Please apply at: http://careers.rit.edu/staff. Click the link for search openings and in the keyword search field, enter the title of the position or 3599BR.


6-15(2018-02-18) Postdoctoral Research Associate (PDRA) at University of Kent, UK


A Postdoctoral Research Associate (PDRA) position is available in English Language and Linguistics at the University of Kent. We are looking for an enthusiastic candidate to join our interdisciplinary team working on a project funded by the Leverhulme Trust, 'Does Language Have Groove? Sensorimotor Synchronisation for the Study of Linguistic Rhythm'. The project brings together expertise from phonetics, cognitive and movement sciences, and involves a collaboration between the University of Kent (UK) and the University of Montreal (Canada), in particular the International Laboratory for Brain, Music and Sound Research (BRAMS). The post holder will be based at Kent in Canterbury, with opportunities to travel to collaborative meetings and conferences.


This project aims to resolve current controversies surrounding rhythmic properties of language. Applying paradigms based on sensorimotor synchronisation and rhythmic movement to cross-linguistic materials, we will develop new approaches to the study of linguistic rhythm. The PDRA will have strong computational skills and expertise in data analyses and modelling. S/he will be responsible for collecting and analysing tapping and perception data, for disseminating the project's findings to academic and non-academic audiences, and for writing up the results for publication.


The successful candidate must have a PhD in Phonetics, Experimental Psychology, Cognitive Science, Computing, Movement Sciences or related disciplines. Excellent programming skills in MATLAB/R environments as well as expertise in experimental design and statistical methods (e.g., multivariate statistics, linear mixed models, time-series analyses) are required. A publication record (commensurate with the applicant's career stage), knowledge of advanced non-linear modelling techniques and experience working with different speech perception paradigms are highly desirable. The candidate should enjoy working in an interdisciplinary environment and be interested in working with speakers from typologically diverse language backgrounds. Language knowledge in addition to English (French, Italian and/or non-European languages) is a plus.
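As a small illustration of the 'linear mixed models' mentioned above (the project itself specifies MATLAB/R environments), the Python/statsmodels sketch below fits a random-intercept model to hypothetical tapping asynchronies; the data frame, column names and values are invented for the example.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical tapping data: one row per tap, with the asynchrony (ms) between the
# tap and the synchronisation target, the stimulus language, and the participant
df = pd.DataFrame({
    "asynchrony":  [12.0, -8.0, 5.0, 20.0, -3.0, 7.0, 15.0, 2.0, -6.0, 9.0, 1.0, 11.0],
    "language":    ["english", "french"] * 6,
    "participant": ["p1"] * 4 + ["p2"] * 4 + ["p3"] * 4,
})

# Fixed effect of stimulus language, random intercept per participant
model = smf.mixedlm("asynchrony ~ language", df, groups=df["participant"])
result = model.fit()
print(result.summary())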


The required application documents include (1) a cover letter outlining the candidate's background and suitability for the post and (2) a CV with a list of publications and contact details of three academic referees who are available to provide a reference letter for the shortlisted candidates prior to the interview date.
•    Start date for applications: 24 January 2018
•    Closing date for applications: 26 February 2018
•    Interviews are to be held: 29 March 2018
•    Start date of the post: 1 May 2018
Please use this link to view the full job description and also to apply for this post. If you require further information regarding the application process please contact The Resourcing Team on jobs@kent.ac.uk quoting ref number: HUM0834. For informal enquiries about the post, please contact Dr Tamara Rathcke (t.v.rathcke@kent.ac.uk) +44 1227 826540.

 

Dr Tamara V. Rathcke
Lecturer in Linguistics
University of Kent
Tel:  +44 (0)1227 826540

6-16(2018-02-19) PhD position at Loria/Inria and Telecom ParisTech
We are offering a PhD thesis topic on speech enhancement using deep learning at Loria/Inria Nancy Grand-Est and at LTCI/Télécom ParisTech.
 
 
The call for applications closes on April 30, 2018.

6-17(2018-02-19) Post-Doctoral Researcher at Paderborn University, Germany


 The department of Communications Engineering at Paderborn University, Faculty of Electrical Engineering, Informatics and Mathematics, offers a position as  
Post-Doctoral Researcher (pay scale 13 TV-L)
at full time employment (100 %). The position is according to the German Wissenschaftszeitvertragsgesetz (WissZeitVG) and aims for scientific qualification in the area of project management and original research. It is temporary for 1.5 years, which is considered suitable for the qualification aim. An extension is, in principle, possible.
The project: Your work will be concerned with signal processing and machine learning for wireless acoustic sensor networks. The research is carried out in the context of a multidisciplinary and multi-site collaborative research initiative. 
 
About us: We are a highly motivated research group working on cutting-edge techniques for robust acquisition and processing of speech and audio, such as enhancement, beamforming, source separation, recognition and acoustic scene understanding. We have strong links to both national and international research groups.
About you:  You hold a PhD in Electrical Engineering, Computer Science or a closely related field  You gained thorough and extensive knowledge in the fields of signal processing and machine learning for audio or speech  You have a strong publication record demonstrating innovative research achievements  You have excellent programming skills
 
Applications from women are particularly welcome and, in case of equal qualifications and experience, will receive preferential treatment according to the North Rhine-Westphalian Equal Opportunities Act (LGG), unless there are preponderant reasons to give preference to another applicant. Part-time employment is, in principle, possible. Applications from disabled people with appropriate suitability are explicitly welcome. This also applies to people with equal opportunities in accordance with the German social law SGB IX.
Information about the Department of Communications Engineering can be found at http://ei.uni-paderborn.de/nt/ 
Please send your application including letter of motivation, research profile, CV, certificates, list of publications, and contact data for reference letters by mail citing the reference number 3291, not later than 31 March 2018, to:
Prof. Dr. Reinhold Häb-Umbach, Fachgebiet Nachrichtentechnik, Universität Paderborn, Warburger Str. 100, 33098 Paderborn, GERMANY (haeb@nt.uni-paderborn.de)


6-18(2018-02-20) 1 (W/M) researcher positions at IRCAM, Paris, France


 
Position: 1 (W/M) researcher position at IRCAM
Starting: April 1st, 2018
Duration: 12 months

Deadline for application: March 15th, 2018
 
 Position description 201802UMGRES:
 IRCAM is looking for a researcher for the development of music content analysis technologies (such as tempo, chord, structure, auto-tagging) in the framework of a technology transfer with Universal Music Group.
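As a generic illustration of one of the analysis tasks mentioned (tempo estimation), the Python sketch below uses the open-source librosa library; it does not reflect IRCAM's own technologies, and the audio file name is hypothetical.

import librosa

AUDIO_PATH = "example_track.wav"  # hypothetical input file

# Load the audio and estimate a global tempo plus beat positions
y, sr = librosa.load(AUDIO_PATH, sr=22050)
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)

print(f"Estimated tempo: {float(tempo):.1f} BPM")
print(f"First beats (s): {beat_times[:5]}")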
 
Required profile:
• Very high skill in audio signal processing (spectral analysis, audio-feature extraction, parameter estimation); the candidate should preferably hold a PhD in this field
• Very high skill in machine learning (SVM, ConvNet); the candidate should preferably hold a PhD in this field
• High skill in distributed computing
• High skill in Matlab and Python programming, skills in C/C++ programming
• Good knowledge of UNIX environments (GNU-Linux or MacOSX)
• High productivity, methodical work, excellent programming style
 
The hired researcher will also collaborate with the development team and participate in the project activities (evaluation of technologies, meetings, specifications, reports).
 
 Introduction to IRCAM:
 IRCAM is a leading non-profit organization associated to Centre Pompidou, dedicated to music production, R&D and education in sound and music technologies. It hosts composers, researchers and students from many countries cooperating in contemporary music production, scientific and applied research. The main topics addressed in its R&D department include acoustics, audio signal processing, computer music, interaction technologies and musicology. Ircam is located in the centre of Paris near the Centre Pompidou, at 1, Place Igor Stravinsky 75004 Paris. 
 
  
 
Salary: According to background and experience 
 
 
Applications: Please send an application letter with the reference 201802UMGRES together with your resume and any suitable information addressing the above issues preferably by email to: peeters at ircam dot fr with cc to vinet at ircam dot fr. 
 
 
 


6-19(2018-02-22) Postdoctoral Researcher at Saarland University, Germany

Postdoctoral Researcher

(computational linguistics or computer science)



Models of Intercomprehension in Speech and Language

The Language Science and Technology department at Saarland University seeks to fill a postdoctoral position. Applications are invited from individuals with research expertise in any field related to speech science and speech technology. Our research project is concerned with the analysis of cross-lingual mutual intelligibility between Slavic languages. It studies the auditory-perceptual intercomprehension of Slavic languages based on analyses of the acoustic, phonetic and phonological structure of spoken utterances. This line of investigation will be complemented by using adaptation techniques established in speech synthesis and recognition to measure the distance between languages. In addition, similarity will be determined on the level of complete utterances.
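One very simple, generic notion of cross-language distance, given purely as an illustration and not as the project's adaptation-based measure, is the normalized edit distance between phoneme transcriptions of corresponding words. The Python sketch below computes it for a hypothetical, loosely transcribed cognate pair.

def edit_distance(a, b):
    """Levenshtein distance between two phoneme sequences."""
    dp = list(range(len(b) + 1))
    for i, pa in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, pb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (pa != pb))  # substitution
    return dp[-1]

def normalized_distance(a, b):
    """Edit distance scaled to [0, 1] by the longer sequence length."""
    return edit_distance(a, b) / max(len(a), len(b))

# Hypothetical rough transcriptions of 'milk' in two Slavic languages
print(normalized_distance(list("mleko"), list("moloko")))

Real intelligibility modelling would weight segment substitutions phonetically and work on whole utterances, as the project description indicates, but segment-level comparisons of this kind are a common starting point.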

 

The postdoc will join a vibrant community of speech and language researchers at Saarland University whose expertise spans areas such as computational linguistics, psycholinguistics, language and speech technology, speech science, theoretical and corpus linguistics, computer science, and psychology.

 

Requirements: The successful candidate should have a Ph.D./Master's in Computer Science, Computational Linguistics, or a related discipline, with a strong background in speech science and speech technology, in particular TTS and ASR. Strong programming skills are essential. A good command of English is mandatory. Working knowledge of German is desirable but not a prerequisite. Candidates must have completed their Ph.D. by the time of the appointment.

 

The position is a full position (100%) on the German E13 scale and subject to the final approval by the funding agency. Starting dates can be between July and October, 2018. The appointment will be for between one and four years.

 

About the department: The department of Language Science and Technology is one of the leading departments in the speech and language area in Europe. The flagship project at the moment is the CRC on Information Density and Linguistic Encoding. Furthermore, the department is involved in the cluster of excellence Multimodal Computing and Interaction. It also runs a significant number of European and nationally funded projects. In total it has seven faculty and around 50 postdoctoral researchers and PhD students.

 

How to apply: Please send us: (1) a letter of motivation, (2) your CV, (3) your transcripts, (4) a list of publications, and (5) the names and contact information of at least two references, as a single PDF or a link to a PDF if the file size is more than 3 MB.

 

Please apply by April 3rd, 2018.

 

Contacts: If you are interested in the project, please send an email to Bernd Möbius (moebius@coli.uni-saarland.de) and Dietrich Klakow (dietrich.klakow@lsv.uni-saarland.de).


6-20(2018-03-02) POSTDOCTORAL FELLOW POSITION, CNRS and INSERM, Lyon, France

POSTDOCTORAL FELLOW POSITION

 

Applications are invited for a 12-month full-time (with possible 12-month extension) Postdoctoral Position in cognitive neuroscience in Lyon, to collect and analyze fMRI data on language processing. The post-doc is part of an exciting new project, which is a collaboration between Drs Alice Roy and Véronique Boulenger from the Laboratory Dynamics of Language (CNRS), and Dr Claudio Brozzoli from the Lyon Neuroscience Research Centre (INSERM).

 

The project lies in the context of embodied cognition theories and aims at uncovering the functional role of the motor system in second language processing. It will examine, using fMRI, the dynamics of cortical activation in motor regions before and after phonological training in a foreign language.

The project will be conducted in Lyon, a vibrant and stimulating neuroscience environment with a culturally rich city life, ideally located just an hour away from the Alps, two hours from Paris and an hour and a half from Marseille and the Mediterranean sea (by train).

 

Key requirements for the candidates:

The ideal candidate will have a PhD in neuroscience, cognitive sciences or a related field and will have substantial experience in fMRI imaging analyses (e.g. SPM, connectivity analysis, resting state) and good programming skills (MATLAB). A background in speech and language is required.

 

Applications in the form of a cover letter with statement of research interests and a CV with full publication list should be sent by email to alice.roy@cnrs.fr and veronique.boulenger@cnrs.fr, with cc to claudio.brozzoli@inserm.fr.

 

Applicants from outside the European Union are welcome but must qualify for a valid visa. Speaking French is not a requirement (although it is an asset) as long as English is mastered.

Starting date: 2018 - please contact us for further information.

Net salary: ~2000 € / month

Applications will be considered until the position is filled.

 

Please feel free to forward this announcement to colleagues and students who could be interested in this position.

http://www.ddl.ish-lyon.cnrs.fr/equipes/index.asp?Langue=EN&Equipe=7&Page=Presentation&

 

--

Véronique Boulenger

Chargée de Recherche CNRS

Laboratoire Dynamique Du Langage

UMR5596 CNRS/Université de Lyon

04.72.72.79.24

veronique.boulenger@cnrs.fr

 

Back  Top

6-21(2018-03-03) Maitre de conférences, ENSIMAG, Grenoble,France
Ecole de rattachement : ENSIMAG
Site web de l?école : http://ensimag.grenoble-inp.fr/
Profil d?enseignement :
L?Ensimag recrute un maître de conférences en mathématiques appliquées ou en informatique
afin de développer les enseignements d?apprentissage statistique, d?intelligence artificielle, de
visualisation de données, de calcul haute performance ou de « big data ». Le dossier de
candidature devra faire apparaître le caractère ?interdisciplinaire? du candidat, sa capacité à
prendre des responsabilités au sein de la structure, ainsi qu?une liste conséquente de travaux
ou publications en relation avec une ou plusieurs branches de la science des données. Outre la
formation aux sciences des données (synthèse de programmes à partir de données, aide à la
décision), la personne recrutée devra s?investir dans les enseignements du tronc commun
Ensimag (1ère année et environ 75% des filières de la 2ème année) qui constitue le socle de nos
élèves ingénieurs. Elle sera amenée à s'investir et prendre des responsabilités dans des
parcours de l?Ecole tels que le « mastère big data » ou le master « Data Science ». En
partenariat avec des industriels, la personne recrutée pourrait superviser l?organisation de «
challenges » et de « hackatons » afin d?enrichir les contacts de l?Ecole dans le domaine de
l?intelligence artificielle et des « big data ». En collaboration avec les équipes pédagogiques
concernées, elle devra s?impliquer dans le montage d?enseignements par projets et la
formation par le Numérique.
 
RECHERCHE
Laboratoire d?accueil : LIG / LJK
Site web du laboratoire : http://www.liglab.fr/
Contact du laboratoire : Eric Gaussier (eric.gaussier@imag.fr), LIG
Stéphane Labbé (stephane.labbe@imag.fr), LJK
Profil de recherche :
 
Le candidat effectuera ses recherches dans le domaine de l?intelligence artificielle ou de la
science des données, et montrera son ouverture aux différentes approches possibles dans ce
domaine. Les thématiques privilégiées sont l?apprentissage sur données complexes, structurées
ou non structurées, l?apprentissage profond et les réseaux de neurones et en particulier les
problématiques d?optimisation, de causalité, de capacité de généralisation et leur analyse
mathématique. Parmi les applications de l?apprentissage et de l?apprentissage profond, un
intérêt particulier est porté au traitement du signal et de l?image, à l?apprentissage de
représentation, à l?apprentissage avec des données multimédia, des données langagières pour
des problématiques issues du traitement du langage naturel, les thématiques de transparence
des mécanismes d?apprentissage, ainsi que les applications en biologie, santé, sciences
humaines, réseaux sociaux, physique, environnement, etc.
Le recrutement renforcera les liens entre le LIG et le LJK dans les domaines de la science des
données et de l?apprentissage automatique. Les deux laboratoires sont localisés sur le campus
de Saint Martin d?Hères et ont des collaborations actives, en particulier au sein de l?axe du
traitement de données et de connaissance à large échelle (équipes AMA, GETALP, MRIM,
SLIDE), des équipes PERVASIVE, TYREX du LIG et au sein du département Proba-Stat (équipes
DAO, SVH, MISTIS, FIGAL) et de l?équipe THOTH du LJK. Parmi les projets communs entre les
deux laboratoires, on peut également citer les problèmes de prédiction et de classification avec
des données structurées de type fonctionnelles, le transport optimal pour l?apprentissage, les
problèmes de parcimonie et de régularisation pour l?apprentissage multitâches et leur
résolution par des méthodes d?optimisation stochastique. La personne recrutée montrera sa
capacité à jouer un rôle actif dans les projets contractuels académiques (ANR, FUI, PFIA, EU...)
et industriels sur ces thèmes très porteurs.
 
ACTIVITES ADMINISTRATIVES
Spécificités du poste ou contraintes particulières :
Activités administratives liées aux fonctions de maître de conférences : responsabilités d?unité
d?enseignement, responsabilités de filières ou d?année.
Compétences attendues :
Savoir : Enseignement de l?informatique, de l?intelligence artificielle et de la science
des données
Savoir-faire : Pédagogie et responsabilités dans l?Ecole
Savoir-être : Travail en équipe
 
pdf
Intelligence artificielle, Science des données, Big data, Apprentissage
Back  Top

6-22(2018-03-06) Post Doc position at University of Saarland, Germany

Post Doc position (computer science, computational linguistics, physics or similar)
Integrating Models of Vision, Knowledge and Language
 
Statistical models of natural language have so far only considered the preceding words as context. We would now like to generalize this to also include knowledge from images or databases. To this end, suitable neural network architectures will be explored and their properties analysed (a minimal sketch of one possible architecture is given below). Alternative machine-learning-based approaches will also be considered.
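Below is a minimal sketch, under toy assumptions, of one way such an architecture could look: a next-word predictor whose input is the preceding words plus a precomputed image feature vector. The vocabulary size, context length, 2048-dimensional image feature and layer sizes are illustrative assumptions, not part of the project description.

# Illustrative sketch only: a language model conditioned on external knowledge,
# predicting the next word from the preceding words fused with an image feature vector.
import numpy as np
from tensorflow.keras.layers import Input, Embedding, LSTM, Dense, Concatenate
from tensorflow.keras.models import Model

VOCAB, CTX_LEN, IMG_DIM = 10000, 20, 2048   # illustrative sizes

words = Input(shape=(CTX_LEN,), name='preceding_words')   # indices of the preceding words
image = Input(shape=(IMG_DIM,), name='image_features')    # e.g. a CNN pooling-layer vector

h_text = LSTM(256)(Embedding(VOCAB, 128)(words))          # summary of the textual context
h_img = Dense(256, activation='relu')(image)              # projection of the visual knowledge
fused = Concatenate()([h_text, h_img])                    # fuse textual and visual context
next_word = Dense(VOCAB, activation='softmax')(fused)     # distribution over the next word

model = Model([words, image], next_word)
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')

# Dummy forward pass showing the expected shapes.
probs = model.predict([np.zeros((1, CTX_LEN), dtype='int32'),
                       np.zeros((1, IMG_DIM), dtype='float32')])
print(probs.shape)  # (1, VOCAB)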
 
Requirements: The successful candidate should have a Master's degree in Computer Science, Computational Linguistics, Physics or a related discipline, with a strong background in mathematics and programming. A good command of English is mandatory as English is the working language of the department. 
 
The position is full time on the German E13 scale and subject to the final approval by the funding agency. Starting dates can be between July and October, 2018. The appointment will be for up to four years. 
 
About the department: The department of Language Science and Technology is one of the leading departments in the speech and language area in Europe.  The flagship project at the moment is the CRC on Information Density and Linguistic Encoding. Furthermore, the department is involved in the cluster of excellence Multimodal Computing and Interaction. It also runs a significant number of European and nationally funded projects. In total it has seven faculty and around 50 postdoctoral researchers and PhD students.
 
How to apply:  Please send us  a letter of motivation,  your CV,  your transcripts,  a list of publications,  and the names and contact information of at least two references, as a single PDF or a link to a PDF if the file size is more than 3 MB.
 
Please apply by April 10th, 2018.
 
Contact: If you are interested in the project, please send an email to Dietrich Klakow (dietrich.klakow@lsv.uni-saarland.de)

Back  Top

6-23(2018-03-14) PhD Position in Experimental Mechanics of Materials/Structures (vocal-fold 3D structure), Gipsa Lab, Grenoble

PhD Position in Experimental Mechanics of Materials/Structures
“From the vocal-fold 3D structure and micro-mechanics to the design of biomimetic materials”
 
Location : 3SR Lab, CoMHet team, Grenoble, France lucie.bailly@3sr-grenoble.fr, laurent.orgeas@3sr-grenoble.fr, sabine.rollandduroscoat@3sr-grenoble.fr
 
Collaboration : GIPSA-lab, VSLD team, Grenoble, France Nathalie.Henrich@gipsa-lab.fr
 
Project summary
The vocal folds are soft multi-layered laryngeal tissues with remarkable vibro-mechanical performance. Composed of networks of collagen and elastin microfibrils, the upper layers play a major role in vocal-fold vibration. However, the impact of these tissues’ histological features on their mechanical behavior is still poorly known, mainly because of the challenge of characterizing them experimentally at the scale of their fibrous networks.
 
Therefore, this PhD project aims to gain an in-depth understanding of the link between the micromechanics of vocal-fold tissues and their unique macroscale vibratory performance. The strategy will be:
1. To go further in the investigation of the vocal-fold 3D architecture, micromechanics and behaviour upon finite deformation. This step will be based on experimental biomechanical campaigns and unprecedented synchrotron X-ray in situ microtomography;
2. To use these data to design and process fibrous biomaterials with tailored structural and biomechanical properties, mimicking the native tissue;
3. To characterize the vibro-mechanical properties of these biomaterials at different scales (macro/micro) and frequencies (low/high).
Location and practical aspects
The successful applicant will be hosted by the laboratory Soils, Solids, Structures, Risks (3SR, UMR5521 - Grenoble, France - www.3sr-grenoble.fr/) in the “CoMHet” team. A part of his/her work will also be conducted in the Images, Speech, Signal and Automation Laboratory (GIPSA-lab, UMR5216 - Grenoble, France - www.gipsalab.grenoble-inp.fr/). This project will benefit from a collaboration existing between researchers in mechanical engineering, voice production and clinicians from Grenoble University Hospital (LADAF).
The PhD fellowship offer is available from September 2018 (possible adjustments of this starting date if need be) for a period of 3 years (financial support acquired from ANR MicroVoice project). 
Applications: Candidates with academic backgrounds in solid mechanics, materials science and engineering are expected. Specific skills in dynamics of composites, vibromechanics and experimental mechanics will be appreciated. Additional knowledge in acoustics and/or biomechanics of soft tissues will be an asset.
Interested candidates should send their CV, a cover letter and official transcripts of the last two years before April 30th, 2018 to Lucie BAILLY, lucie.bailly@3sr-grenoble.fr, (+33) (0)4 76 82 70 85.

Back  Top

6-24(2018-03-15) Internship at ELDA, Paris, France

We are looking for an intern for a project aimed at updating an inventory of language
resources for the regional languages of France, and at negotiating rights so that these
resources can be shared with the language technology community.

The main tasks will consist of:

-    Updating the existing inventory of language resources
-    Technical and legal analysis of the current sharing conditions of these resources
(analysis of the resources' distribution formats and identification of usage rights,
in cooperation with an in-house legal expert)
-    Negotiation with providers, definition of the reuse conditions of the language
resources, drawing up of distribution agreements
-    Description and integration of the available resources into the ELRA catalogue
-    Writing of a final report

Profile:
-    Master 2 level in natural language processing or a related field
-    Duration: 6 months
-    Ability to work both independently and as part of a team
-    Strong writing and analytical skills
-    Internship agreement (convention de stage) required

Applications will be considered until the position is filled. The position is based in
Paris.

Salary: depending on qualifications and experience.

Applications (cover letter and curriculum vitae) should be sent by email to:
ELDA
9, rue des Cordelières
75013 Paris
FRANCE
Email : job@elda.org

ELDA, the operational unit of the ELRA association, is in charge of promoting the
development of language resources in all usable electronic forms, in particular spoken
and written corpora, lexica and terminology databases. Since its creation in 1995, ELDA
has established itself as a unique centre in Europe for the distribution of language
resources, able to meet the diverse needs of technology developers. Its activities are
now extending to new types of language resources (multimodal/multimedia data). Some
language resources are produced in the course of projects (co-)funded by ELDA; these
are then compiled into a catalogue of language resources. ELDA is involved in a number
of European and national projects. ELDA also works on legal issues related to language
resources, regularly carries out market studies on user needs, and works on improving
resource validation procedures.

For more information about ELRA/ELDA, see: http://www.elra.info or
www.elda.org

Back  Top

6-25(2018-03-19) Faculty position (Associate professor) at Telecom ParisTech, Paris

Telecom ParisTech has one new permanent (indefinite tenure) faculty position (Associate Professor) in machine learning. Applicants from machine learning for speech processing, natural language processing or affective computing are welcome.

More information on the social computing topic is available here: https://www.tsi.telecom-paristech.fr/en/research/1885-2/social-computing-topic/


********************************************************

Faculty position (Associate professor) at Telecom ParisTech in

Machine-Learning.

 

 

Important Dates

-    May 25th, 2018: closing date

-    Mid June: hearings of preselected candidates

 

Telecom ParisTech's [1] machine learning, statistics and signal processing group (a.k.a S²A group) [2], within the laboratoire de traitement et communication de l'information (LTCI) [5], is inviting applications for a permanent (indefinite tenure) faculty position at the *Associate Professor* level (Maitre de Conferences) in *Machine learning*.

 

Main missions

 

The recruit will be expected to:

 

Research activities

-    Develop groundbreaking research in the field of theoretical or applied machine learning, targeting applications that are well aligned with the topics of the S²A group [3] and the Images, Data & Signals department [4], which include (but are not restricted to) time series analysis (audio, ...), reinforcement learning, natural language processing, social signal processing, predictive maintenance, biomedical or physiological signal analysis, recommendation, finance, health, etc.

-    Develop both academic and industrial collaborations on the same topics, including collaborative activities with other Telecom ParisTech research departments and teams, and research contracts with industrial players

-    Set up research grants and take part in national and international collaborative research projects

   

Teaching activities

-    Participate in teaching activities at Telecom ParisTech and its partner academic institutions (as part of joint Master programs), especially in machine learning and Data science, including life-long training programs (e.g. the local Data Scientist certificate)

 

Impact

-    Publish high quality research work in leading journals and conferences

-    Be an active member of the research community (serving in scientific committees and boards, organizing seminars, workshops, special sessions...)

 

 

Candidate profile

 

As a minimum requirement, the successful candidate will have:

 

-    A PhD degree

-    A track record of research and publication in one or more of the following areas: machine learning, applied mathematics, signal processing

-    Experience in teaching

-    Good command of English

 

The ideal candidate will also (optionally) have:

-    Experience in temporal data analysis problems (sequence prediction, multivariate time series, probabilistic graphical models, recurrent neural networks...)

 

NOTE:

The candidate does *not* need to speak French to apply, just to be willing to learn the language (teaching will be mostly given in English)

   

Other skills expected include:

-    Capacity to work in a team and develop good relationships with colleagues and peers

-    Good writing and pedagogical skills

 

More about the position

-    Place of work: Paris until 2019, then Saclay (Paris outskirts)

-    For more information about being an Associate Professor at Telecom ParisTech, check [6] (in French)

 

 How to apply

Applications are to be sent by e-mail to: recrutement@telecom-paristech.fr

 

The application should include:

-    A complete and detailed curriculum vitae

-    A letter of motivation

-    A document detailing past activities of the candidate in teaching and research: the two types of activities will be described with the same level of detail and rigor.

-    The texts of the main publications

-    The names and addresses of two referees

-    A short teaching project and a research project (maximum 3 pages)

 

Contacts :

Slim Essid (Coordinator of the ADASP team)

Florence d'Alché-Buc (Professor, Machine Learning)

Stéphan Clémençon (Head of the S²A group)

Gaël Richard (Head of the IDS department)

 

 

[1] http://www.tsi.telecom-paristech.fr

[2] http://www.tsi.telecom-paristech.fr/ssa/

[3] http://www.tsi.telecom-paristech.fr/aao/en/ 

 

Back  Top

6-26(2018-03-24) PhD student in a research project investigating strategies for human-robot interaction, Bielefeld, Germany

The Social Cognitive Systems group (headed by Prof. Dr. Stefan Kopp; Cluster of Excellence Cognitive Interaction Technology, Bielefeld University, Germany) is currently looking for a PhD student in a research project investigating strategies for human-robot interaction, with a focus on the generation of multimodal spoken dialogue behaviour.

 

Applicants should have a master's degree in computer science or (computational) linguistics with a focus on machine learning, statistical methods in natural language processing, and/or dialogue modelling, should have strong communication skills, and should be motivated to work in an interdisciplinary team of computer scientists, psychologists, engineers, and designers.

 

The position is fully paid (TV-L 13) with funding for three years. The official job advertisement (in German) can be found here: https://scs.techfak.uni-bielefeld.de/scswp/wordpress/wp-content/uploads/2018/03/wiss18072.pdf The deadline for applications to receive full consideration is 2018-04-06.

 

If you have any questions or want to know more about the research project, our research group, or living and working in Bielefeld, don't hesitate to contact Stefan Kopp <skopp@techfak.uni-bielefeld.de>.

 

--

Hendrik Buschmeier

Social Cognitive Systems Group, CITEC, Bielefeld University

https://purl.org/net/hbuschme

Back  Top

6-27(2018-03-26) PhD grant in Machine learning, Lannion, France
The Expression team at IRISA offers a PhD position in computer science, co-funded by the DGA, on the topic 'Machine learning models for multimodal detection of anomalous behaviors'.
 
The description of the topic is available at this location:
 
 
Candidate profile: Candidates must hold a research Master's degree in computer science. They must also have good programming skills (C/C++/Python/...) as well as knowledge of machine learning and, if possible, of signal processing. The DGA requires that candidates hold the nationality of a European member state. An excellent level of English is required.
Application deadline: April 10, 2018
Location: Lannion
Back  Top

6-28(2018-03-15) PhD grant at LJK and LIG, Grenoble, France
CDP TITLE: Performance Laboratory
 
SUBJECT TITLE: Computational Video Editing for Stage Performances
 
SCIENTIFIC DEPARTMENT (LABORATORY’S NAME): LJK+LIG
 
DOCTORAL SCHOOL’S: MSTII (Mathématiques appliquées et informatique)
 
SUPPORTER’S NAME: Rémi Ronfard & Benjamin Lecouteux
 
The PERFORMANCE LABORATORY cross-fertilises UGA’s performing arts, geography-urban studies
and computer science communities to produce innovative performance as research. This new
interdisciplinary community of 41 academics will allow the development of cutting edge art research, digital
documentation, performance literacy tools and innovative forms of material and immaterial heritage. This
will push the very boundaries of the scientific disciplines themselves, both methodologically and
epistemologically, and in turn, create a new pluridisciplinary ecosystem at CUGA.
 
SUBJECT DESCRIPTION:
Context: This PhD thesis is proposed as part of an ongoing collaboration between computer scientists and
performing arts researchers at Univ. Grenoble Alpes and INRIA to use video in teaching and researching
the performing arts. In a previous project, the IMAGINE team at LJK and INRIA developed methods for
automatic generation of cinematic rushes from ultra high definition video recordings of stage performances
[1]. Here, we would like to propose techniques for making documentary movies from the generated rushes,
based on an analysis of the script of the performance and a formalization of the rules of film editing. Ideally,
the proposed techniques should be completely non-invasive (not requiring sensors on actors or on stage)
and intuitive enough to be used by performing arts students, professors and researchers, without any
expertise in video production.
Description: The goal of the PhD thesis will be to propose novel interaction techniques to students,
professors and researchers in the performing arts for making movies from stage performances recorded on
stage. On the one hand, we will propose novel algorithms for editing cinematographic rushes together into
movie clips automatically, based on computational models of film editing « idioms » and machine analysis
of the actors' speech and motion. On the other hand, we will propose novel user interfaces for easily
choosing between available idioms as in [2] and creating new idioms for the specific purpose of teaching
and researching mise en scene and acting techniques.
During his/her thesis, the PhD student will create an extensive database of stage performance recordings,
as part of a collaboration with the performing arts department at Univ. Grenoble Alpes and associated
theatre companies. The raw recordings and the generated movies will be used as supporting material for
teaching mise-en-scène and acting techniques, and for researching multiple aspects of expressive human
motion, verbal and non-verbal communication, and dramaturgic techniques, as part of the new cross-disciplinary
research project « Performance Lab ».
 
References:
[1] Vineet Gandhi, Rémi Ronfard, Michael Gleicher. Multi-Clip Video Editing from a Single Viewpoint.
CVMP 2014 - European Conference on Visual Media Production, Nov 2014.
[2] Mackenzie Leake, Abe Davis, Anh Truong, and Maneesh Agrawala. Computational video editing for
dialogue-driven scenes. ACM Trans. Graph. 36, 4, July 2017.
 
ELIGIBILITY CRITERIA
Applicants:
- must hold a Master's degree (or be about to earn one) or have a university degree equivalent to a
European Master's (5-year duration),
Applicants will have to send an application letter in English and attach:
- Their last diploma
- Their CV
- A short presentation of their scientific project (2 to 3 pages max)
- Letters of recommendation are welcome.
 
SELECTION PROCESS
Application deadline: May 15th 2018 at 17:00 (CET)
Applications will be evaluated through a three-step process:
1. Eligibility check of applications on May 17th 2018
2. 1st round of selection: the applications will be evaluated by a Review Board and results will be announced on May 25th.
3. 2nd round of selection: shortlisted candidates will be invited for an interview session in Grenoble on May 31st 2018 (if necessary).
4. Final decision will be given June 30.
TYPE of CONTRACT: temporary-3 years of doctoral contract
JOB STATUS: Full time
HOURS PER WEEK: 35
OFFER STARTING DATE: October 1 2018
APPLICATION DEADLINE: May 15th 2018
 
Salary: between 1768.55 € and 2100 € (gross) per month (depending on complementary activity or not)
Back  Top

6-29(2018-03-16) Research Linguist at ObEN, Inc, Pasadena, California, USA

RESEARCH LINGUIST
 
Come join us and build Personal Artificial Intelligence (PAI) -- intelligent 3D avatars that look, sound, and behave like the individual user!
 ObEN is an artificial intelligence company developing a decentralized AI platform for Personal AI (PAI). Founded in 2014, ObEN is a K11, Tencent, Softbank Ventures Korea and HTC Vive X portfolio company.
 
As a Research Linguist, you will collaborate with other scientists who are experts in speech engineering, natural language processing, and computer vision. You will be working on a variety of tasks to improve technologies for speech synthesis, speech recognition, visual speech, and natural language processing. 
 
The tasks include:
● Design material and procedures to collect spoken and written language data;
● Design schemas and label/tag sets to annotate recordings and text with phonetic, prosodic, semantic, and syntactic features;
● Design methods and protocols to ensure the quality of linguistic data and annotations;
● Design perceptual or linguistic tests to evaluate the performance of speech and language systems;
● Contribute to the formalization of speech and language models by offering linguistic knowledge, identifying issues and providing solutions.
 
Basic qualifications:
● Masters or higher degree in Linguistics or a closely-related field
● Specialization in Phonetics or Phonology
● Native or near-native proficiency in Japanese or Korean
● Ability to use programming scripts
 
Preferred qualifications:
● Knowledge of scripting languages, e.g., Python
● Background in Psychology/Psycholinguistics
● Willingness to accept reprioritization as necessary
 
 
Contact: pierre@oben.com

Back  Top

6-30(2018-03-16) SPEECH RESEARCH SCIENTIST (ASR) at ObEN, Inc, Pasadena, California,USA

SPEECH RESEARCH SCIENTIST (ASR)
 
Come join us and build Personal Artificial Intelligence (PAI) -- intelligent 3D avatars that look, sound, and behave like the individual user!
 ObEN is an artificial intelligence company developing a decentralized AI platform for Personal AI (PAI). Founded in 2014, ObEN is a K11, Tencent, Softbank Ventures Korea and HTC Vive X portfolio company.
 
As an ASR Research Scientist, you’ll be working on developing tools to automate speech data acquisition and selection from diverse sources of data for the training of ObEN’s speech technology components.
 
Responsibilities:
 
● Develop and extend ObEN’s proprietary ASR systems for different languages (English, Chinese, Korean, Japanese), in view of improving the robustness against environmental and channel distortion;
● Develop long (>1h) speech-text alignment systems;
● Develop lyrics-singing voice alignment systems;
● Develop tools and measures for data selection (confidence scores, acoustic measures);
● Develop tools for metadata extraction from speech and text (e.g. emotion, speakerID, etc).
 
Requirements:
● PhD with strong research experience in ASR demonstrated by publications in top Speech Journals and Conferences (ICASSP, Interspeech, ASRU, etc.);
● Experience with robust ASR, long speech-text alignment, lightly supervised approaches and confidence measures computation;
● Fluent in Python and C++, excellent knowledge of Kaldi;
● Strong machine learning background and familiarity with standard statistical modeling techniques applied to speech;
● Good knowledge of deep learning packages (Tensorflow, Theano, Keras, etc).
 
 
Contact: pierre@oben.com

Back  Top

6-31(2018-03-16) SPEECH RESEARCH SCIENTIST (Prosody Modeling)at ObEN Inc.,Pasadena, California, USA

SPEECH RESEARCH SCIENTIST (Prosody Modeling)
 
Come join us and build Personal Artificial Intelligence (PAI) -- intelligent 3D avatars that look, sound, and behave like the individual user!
 ObEN is an artificial intelligence company developing a decentralized AI platform for Personal AI (PAI). Founded in 2014, ObEN is a K11, Tencent, Softbank Ventures Korea and HTC Vive X portfolio company.
 
As a Speech Research Scientist focusing on Prosody Modeling, you will be working on developing new prosody models for different languages (Chinese, English, Japanese, Korean) to improve the naturalness and the similarity of the synthesized voice and to allow a better control of its expressivity. 
 
Responsibilities:
● Develop new prosody models for different languages, adaptable using a small amount of data;
● Develop generic prosodic models for different expressivity which can be applied to any voice;
● Develop sentiment analysis algorithms to control expressivity from text input.
 
Requirements:
● PhD with strong experience in Prosody Modeling for Speech Synthesis demonstrated by publications in top Speech Journals and Conferences (Speech Prosody, ICASSP, Interspeech, etc);
● Strong implementation skills and general knowledge in ML;
● Fluent in Python and C++, and good knowledge of deep learning packages;
● Familiarity with linguistic phonetics;
● Knowledge of basic digital signal processing techniques for audio.
 
 
Contact: pierre@oben.com

Back  Top

6-32(2018-03-16) SPEECH RESEARCH SCIENTIST (Singing Voice Synthesis) at ObEN Inc., Pasadena, California, USA

SPEECH RESEARCH SCIENTIST (Singing Voice Synthesis)
 
Come join us and build Personal Artificial Intelligence (PAI) -- intelligent 3D avatars that look, sound, and behave like the individual user!
 ObEN is an artificial intelligence company developing a decentralized AI platform for Personal AI (PAI). Founded in 2014, ObEN is a K11, Tencent, Softbank Ventures Korea and HTC Vive X portfolio company.
 
As a Speech Research Scientist focusing on singing voice generation, you’ll be working on improving the overall quality and control of ObEN’s virtual singing technology.
 
Responsibilities:
● Develop and improve ObEN’s virtual singing voice technology based on a novel voice model with improved glottal source modeling;
● Explore new approaches for singing voice generation based on deep generative models;
● Develop a singing voice generation approach from musical annotation.
 
Requirements:
● PhD with strong experience in speech synthesis, preferably singing voice synthesis, demonstrated by publications in top Speech journals and conferences (ICASSP, Interspeech, etc);
● Good experience in deep generative models and sequential modelling;
● Strong implementation skills and knowledge in ML;
● Fluent in Python and C++, and good knowledge of deep learning packages;
● Familiarity with linguistic phonetics;
● Knowledge of basic digital signal processing techniques for audio.
 
 
Contact: pierre@oben.com 
 

Back  Top

6-33(2018-03-16) SPEECH RESEARCH SCIENTIST (Speech Synthesis) at ObEN Inc., Pasadena, California, USA

SPEECH RESEARCH SCIENTIST (Speech Synthesis)
 
Come join us and build Personal Artificial Intelligence (PAI) -- intelligent 3D avatars that look, sound, and behave like the individual user!
 ObEN is an artificial intelligence company developing a decentralized AI platform for Personal AI (PAI). Founded in 2014, ObEN is a K11, Tencent, Softbank Ventures Korea and HTC Vive X portfolio company.
 
As a Speech Research Scientist focusing on Speech Synthesis, you’ll be working on improving ObEN’s speech synthesis technology. This will include the improvement of our current voice model and the development of new speech generation approaches based on deep generative models.
 
Responsibilities:
● Develop and extend ObEN’s glottal source model, in view of improving the quality, flexibility and control (e.g. voice quality, expressivity) of ObEN’s speech and singing voice synthesis system;
● Develop new speech generation approaches based on deep generative models (e.g. wavenet) with a reduced amount of data and better control.
 
Requirements:
● PhD with strong experience in Speech Synthesis demonstrated by publications in top Speech Journals and Conferences (ICASSP, Interspeech, etc);
● Expertise in signal processing, in particular in the design of voice models (glottal source model, ...) allowing fine control of the characteristics of the synthesized voice (speech and singing voice);
● Experience in deep generative models of raw audio (wavenet) and Generative Adversarial Networks (WGAN);
● Fluent in Python and C++, and good knowledge of deep learning packages (TensorFlow, Theano, Keras, etc);
● Familiarity with linguistic phonetics;
● Knowledge of basic digital signal processing techniques for audio.
 
 
Contact: pierre@oben.com

Back  Top

6-34(2018-03-16) SPEECH RESEARCH SCIENTIST (TTS) at ObEN Inc., Pasadena, California, USA

SPEECH RESEARCH SCIENTIST (TTS)
 
Come join us and build Personal Artificial Intelligence (PAI) -- intelligent 3D avatars that look, sound, and behave like the individual user!
 ObEN is an artificial intelligence company developing a decentralized AI platform for Personal AI (PAI). Founded in 2014, ObEN is a K11, Tencent, Softbank Ventures Korea and HTC Vive X portfolio company.
 
As a Speech Research Scientist focusing on Text-to-Speech, you will be working on developing cutting-edge deep learning algorithms for voice personalization. This will include the development of structured acoustic models for synthesis allowing the control of factors such as voice timbre, voice quality, language, accent, expressiveness and speaking style and the adaptation/conversion towards a target voice using a reduced amount of data.
 
Responsibilities:
 
● Develop and extend ObEN’s proprietary TTS system, in view of improving the quality and naturalness of the synthesized voice as well as the similarity to the target voice, while reducing the amount of data needed for speaker adaptation;
● Develop deep generative models of the raw speech waveform;
● Develop cross-lingual approaches (e.g. phonetic posteriorgrams).
 
Requirements:
● PhD with strong research experience in adaptation of DNN-based TTS systems demonstrated by publications in top Speech journals and conferences (ICASSP, Interspeech, etc);
● Strong machine learning background and familiarity with standard statistical modeling techniques applied to speech;
● Research experience in deep generative models of raw audio (wavenet) and Generative Adversarial Networks (WGAN);
● Fluent in Python and C++, and expert knowledge of deep learning packages (TensorFlow, Theano, Keras, etc);
● Familiarity with linguistic phonetics;
● Knowledge of basic digital signal processing techniques for audio.
 Contact: pierre@oben.com

Back  Top

6-35(2018-03-17) 2 PhD grants and 2 postdoc positions (2-year), at Aix-Marseille/Avignon , France

 

2 PhD grants and 2 postdoc positions (2-year)

at Aix-Marseille/Avignon

on Language, Communication and the Brain

The Center of Excellence on Brain and Language (BLRI, www.blri.fr/) and the Institute of Language, Communication and the Brain (ILCB, http://www.ilcb.fr/ ) award :

  • 2 PhD grants (3-year) on any topic that falls within the area of language, communication, brain and modelling.
  • 2 postdoc positions (2-year) on any topic that falls within the area of language, communication, brain and modelling.

The BLRI-ILCB is located in Aix-en-Provence, Avignon and Marseille and regroups several research centers in linguistics, psychology, cognitive neuroscience, medicine, computer science, and mathematics. 

Interested candidates need to find one or more PhD or postdoc supervisors amongst the members of the BLRI-ILCB. Together with the supervisor(s), they would then need to write a 3-year PhD project or a 2-year postdoc project. Priority is given to interdisciplinary co-supervision and to projects that involve two different laboratories of the institute.

. PhD grants: Monthly salary: 1 685 € (1 368 € net) for a period of 3 years

. Postdoc: Monthly salary: ~2000 € net (depending on experience)

. Deadline: June 17, 2018

HOW TO APPLY

Candidates should first contact potential supervisor(s) among the members of the ILCB/BLRI. A list of potential projects and supervisors that will be given priority for this call can be found here. However, you can also apply with any subject, under the supervision of any ILCB/BLRI member (http://www.blri.fr/members.html).

When the research project is finalized and approved by the supervisor(s), the application must be sent to nadera.bureau@blri.fr.

Back  Top

6-36(2018-03-22) Research scientist at the University of Trento, Italy
 
At the University of Trento (Italy) we are looking for a highly motivated researcher to join
our research team and work on Natural Language Understanding and Dialog Modeling and Systems.

The Signals and Interactive Systems Lab at University of Trento attracts researchers from
computational linguistics, computer science, electrical engineering to design and train the
most advanced interactive and conversational systems.

You will join the research team that has been training intelligent machines and evaluating
AI-based systems for more than two decades, collaborating with leading research labs and
successful startups in the world.

You can check a sample of the projects in the area of Natural Language Understanding, Conversational
Systems and Personal Agents ( and more ) at:

http://sisl.disi.unitn.it/demo/ 

The candidates should have a strong background and past achievement record in
at least one of the following areas:

- Natural Language Understanding
- Conversational Modeling and Systems
- Machine Learning


For more info on research and projects, visit the lab website at http://sisl.disi.unitn.it/

The official language (research and graduate teaching) of the department is English.

FELLOWSHIP
The research fellowship will depend on experience and will be in the range of 19,367 - 33,000 Euros per year.
The position is for one year, renewable.

For more information about cost of living, campus,
please visit the graduate school website at http://ict.unitn.it/

DEADLINES

Immediate openings with start date as early as May 2018.
Open until filled.

REQUIREMENTS

- PhD degree in Computer Science, Computational Linguistics, Machine Learning or
similar or affine disciplines.
- Strong academic record (publications in top conferences and journals)
- Strong programming skills
- Excellent command of oral and written English
- Excellent understanding of experimental design methodology and statistics
- Excellent understanding of natural language processing
- Excellent understanding of machine learning methods
- Experience working on research projects
- Excellent team-work skills
- Supervision of students

HOW TO APPLY

Interested applicants should send their
1) CV
2) At least three reference letters sent to:

Email: sisl-jobs@disi.unitn.it


For more info:

Signals and Interactive Systems Lab: http://sisl.disi.unitn.it/
PhD School                         : http://ict.unitn.it/
Department                         : http://disi.unitn.it/


Information Engineering and Computer Science Department (DISI)

DISI has a strong focus on cross-disciplinarity with professors from different
faculties of the University (Physical Science, Electrical Engineering, Economics,
Social Science, Cognitive Science, Computer Science) with international
background. DISI aims at exploiting the complementary experiences present in the
various research areas in order to develop innovative methods, technologies and
applications.

University of Trento

The University of Trento is consistently ranked as premiere Italian university institution.
See http://www.unitn.it/en/node/1636/mid/2573

University of Trento is an equal opportunity employer.
Back  Top

6-37(2018-04-09) ATER positions in natural language and speech processing, Sorbonne Université, Paris, France

Temporary teaching and research associate (ATER) positions in natural language and speech processing are available at the Faculty of Humanities of Sorbonne Université. The link to apply is http://concours.univ-paris4.fr/PostesAter?entiteBean=posteCandidatureCourant

The conditions for applying are available at http://lettres.sorbonne-universite.fr/ater.

Best regards,

Claude Montacié
claude.montacie@sorbonne-universite.fr

Back  Top

6-38(2018-04-11) A three-year doctoral position at the University Sorbonne Nouvelle, Paris, France

Dear colleagues,
Please find below the description of a three-year doctoral position at the University Sorbonne Nouvelle to be filled in the last term of 2018.

The Laboratory of Phonetics and Phonology (http://lpp.in2p3.fr/), Paris, France, offers a funded PhD position for a period of three years on the acoustic-phonetic markers of inter- and intra-speaker variability, with particular attention to the standardisation of procedures.

We would be most grateful if you could also distribute this information among other persons who may be interested by this offer.


Cédric Gendrot et Cécile Fougeron

 

Description of the offer:

 

Doctoral contract offered by the Laboratoire de Phonétique et Phonologie: 'Phonetic and acoustic markers of inter- and intra-individual variability'

 

The Laboratoire de Phonétique et Phonologie offers a 3-year doctoral contract funded by the ANR, starting at the beginning of the 2018 academic year.

The proposed doctoral topic aims to analyse the phonetic and acoustic markers of inter- and intra-speaker variability. Particular attention will be paid to the standardisation of the proposed analysis methods, so that they can be transferred to related application domains, including automatic speech processing.

 

The work will take into account voice/speech characteristics that are closely tied to the context of voice comparison. Since speech variation is multifactorial, it appears essential to establish standards for objective measurements, for which recent methodologies in experimental phonetics can provide guarantees.

Particular interest will be given to acoustic markers that reflect individual physiological properties, as well as to articulatory habits, which convey social identity.

  

The doctoral student will carry out their research at the LPP (Laboratoire de Phonétique et de Phonologie), a joint CNRS/Université Paris 3 Sorbonne Paris Cité research unit. See the laboratory's work on this theme at http://lpp.in2p3.fr

The selected candidate will be supervised by Cédric Gendrot and Cécile Fougeron, respectively lecturer-researcher at Université Sorbonne Nouvelle and CNRS Research Director, and will be enrolled in doctoral school ED268 of Université Sorbonne Nouvelle.

The doctoral student will benefit from the resources of the laboratory, of doctoral school ED268 and of the interdisciplinary research environment of the Laboratoire d'Excellence EFL. They will be able to attend weekly phonetics and phonology research seminars at the LPP and in other research groups, lectures given by invited professors of international standing, training courses, conferences and summer schools.

 

• Conditions

- a good command of French
- successful completion of a first personal research project
- no nationality requirement
- very good knowledge of acoustic-phonetic data processing
- knowledge of computer science and statistical analysis would be a plus

 

• Documents to include in the application

1. a CV
2. a cover letter
3. the Master 2 thesis in phonetics
4. the names of two referees (with their email addresses)

 

Application deadline: June 30, 2018

 

 

Complete applications should be sent by email no later than June 30, 2018 to Cédric Gendrot (cgendrot@univ-paris3.fr) and Cécile Fougeron (cecile.fougeron@univ-paris3.fr)

 

• Pre-selection based on the application file, followed by interviews of shortlisted candidates

Shortlisted candidates will be interviewed between July 2 and 6, 2018, on site or by videoconference.

Contact for further information:

Cédric Gendrot : cgendrot@univ-paris3.fr

Cécile Fougeron : cecile.fougeron@univ-paris3.fr

 

 

Back  Top

6-39(2018-04-12) Post-doc in forensic voice comparison, LNE, Trappes, France

18-month POST-DOC - Voice comparison in forensics: definition of a methodology and a reference framework for the certification of laboratories

Location: Trappes (78). Laboratoire national de métrologie et d'essais (LNE)
REF: ML/VOX/DE

CONTEXT:
The ANR VoxCrim project (2017-2021) aims to put the implementation of forensic voice comparison on an objective scientific footing. It has two main objectives: a) set up an ISO 17025-type accreditation methodology for police laboratories; b) establish standards for objective measurements. The project will make it easier for police services to handle voice comparison cases and will strengthen the admissibility of such evidence in court.
The post-doctoral topic is part of the 'Accreditation, certification, standardisation and metrology' sub-project of VoxCrim.
This sub-project builds on what is already available from the Association Française de Normalisation (AFNOR) and the Comité Français d'Accréditation (COFRAC).
The work consists, first, in assessing the existing standards and the adaptations needed for the accreditation of laboratories performing voice comparison, and in developing the corresponding metrology protocols. By the end of the project, the sub-project aims at the complete definition of a practical accreditation and certification solution for voice comparison.

MISSIONS:
The work is organised into three tasks:
-        Report on the state of the art. This task consists in surveying existing standards and directives to identify those that must be followed, adapted, or used as inspiration as far as possible. The work will have a European and international dimension (e.g. the NIST-OSAC work) and will rely mainly on the ISO 17025, 17043 and 13528 standards to set up the ecosystem needed to validate voice comparison methods. Since these standards mostly concern physical measurements, the post-doc will also study the ISO 15189 standard, which sets out requirements for laboratories where samples are taken from humans.
-        Specification of intra- and inter-laboratory metrology protocols adapted to the context of voice comparison, and more specifically to forensics.
-        The post-doc will check that the identified protocols are consistent with the sets of voice comparison operating conditions developed by the other members of the project.
In addition to the support provided by the Information Processing Systems Evaluation and Mathematics-Statistics teams, the post-doc will receive training:
-        At the start of the contract, a one-day training course on inter-laboratory comparison and accreditation methods, given by the LNE to the members of the VoxCrim consortium.
-        Short practical stays at the SDPTS (Sous-Direction de la Police Scientifique et Technique, Ecully) and/or the IRCGN (Institut de Recherche Criminalistique de la Gendarmerie Nationale) in order to understand the issues related to voice comparison in forensics.
-        Participation in the VoxCrim study days organised by the consortium members at the SDPTS.
Publications (and presentations, where applicable) in international conferences and journals are expected from the post-doc.

DURATION:
18 months. Preferred starting date: September 2018.

PROFILE:
You hold a PhD in computer science or language sciences, with a specialisation in automatic speech processing.
You have knowledge of evaluation methodology and voice biometrics.
Knowledge of standardisation would be a real asset.

To apply, please send your CV to recrut@lne.fr, quoting the reference ML/VOX/DE.

Back  Top

6-40(2018-04-14)2 PhD positions, IRIT Toulouse France

Two PhD positions are still available at IRIT Toulouse France starting
ideally in Sept. 2018.

Position 1: Deep learning approaches to assess head and neck cancer
voice intelligibility

Position 2: Clinical relevance of the intelligibility measures

These positions are in the framework of the TAPAS European Project.

For official information and applications, see
https://www.tapas-etn-eu.org/positions

You may obtain further information from Julie Mauclair (phone: +33 5 61
55 60 55, julie.mauclair@irit.fr) and Thomas Pellegrini (phone: +33 5 61
55 68 86, thomas.pellegrini@irit.fr)

Back  Top

6-41(2018-04-16)Post doc position at INRIA Nancy France
Post-Doctoral Position (12 months)
 
Natural language processing: automatic speech recognition system using deep neural networks without out-of-vocabulary words
 
_______________________________________

- Location: INRIA Nancy Grand Est research center, France

 

- Research theme: PERCEPTION, COGNITION, INTERACTION

 

- Project-team: Multispeech

 

- Scientific Context:

 

More and more audio and video content appears on the Internet every day. About 300 hours of multimedia are uploaded per minute. In these multimedia sources, audio data represents a very important part. If these documents are not transcribed, automatic content retrieval is difficult or impossible. The classical approach for spoken content retrieval from audio documents is automatic speech recognition followed by text retrieval.

 

An automatic speech recognition (ASR) system uses a lexicon containing the most frequent words of the language, and only the words of the lexicon can be recognized by the system. New Proper Names (PNs) appear constantly, requiring dynamic updates of the lexicons used by the ASR. These PNs evolve over time and no vocabulary will ever contain all existing PNs. When a person searches for a document, proper names are used in the query. If these PNs have not been recognized, the document cannot be found. These missing PNs can be very important for the understanding of the document.

 

In this study, we will focus on the problem of proper names in automatic recognition systems. The problem is how to model relevant proper names for the audio document we want to transcribe.

 

- Missions:

 

We assume that an audio document to transcribe contains missing proper names, i.e. proper names that are pronounced in the audio document but are not in the lexicon of the automatic speech recognition system; these proper names cannot be recognized (out-of-vocabulary proper names, OOV PNs). The purpose of this work is to design a methodology for finding and modeling a list of relevant OOV PNs corresponding to an audio document.

 

Assuming that we have an approximate transcription of the audio document and huge text corpus extracted from internet, several methodologies could be studied:

  • From the approximate OOV pronunciation in the transcription, generate the possible writings of the word (phoneme to character conversion) and search this word in the text corpus.

  • A deep neural network can be designed to predict OOV proper names and their pronunciations with the training objective to maximize the retrieval of relevant OOV proper names.

 

The proposed approaches will be validated using the ASR system developed in our team (a toy sketch of the second methodology is given below).
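A toy sketch, under illustrative assumptions and with random data, in the spirit of the neural bag-of-words retrieval model of [Sheikh2016]: a small network maps a bag-of-words vector of the approximate transcription to retrieval probabilities over a fixed list of candidate OOV proper names. None of the names or sizes below come from the project description.

# Illustrative sketch only: bag-of-words document vector -> per-candidate-PN retrieval probability.
import numpy as np
from tensorflow.keras.layers import Input, Dense
from tensorflow.keras.models import Model

VOCAB_SIZE = 5000   # in-vocabulary word list (assumed size)
NUM_OOV_PN = 300    # candidate OOV proper-name list (assumed size)

doc = Input(shape=(VOCAB_SIZE,), name='bag_of_words')        # counts or tf-idf of the first-pass transcription
hidden = Dense(256, activation='relu')(doc)                   # document embedding
pn_probs = Dense(NUM_OOV_PN, activation='sigmoid')(hidden)    # one retrieval probability per candidate PN

model = Model(doc, pn_probs)
model.compile(optimizer='adam', loss='binary_crossentropy')

# Toy training data: each document is labelled with the candidate PNs it actually contains.
X = np.random.rand(32, VOCAB_SIZE).astype('float32')
Y = (np.random.rand(32, NUM_OOV_PN) > 0.95).astype('float32')
model.fit(X, Y, epochs=1, verbose=0)

# Candidates above a threshold would be added to the ASR lexicon before a second decoding pass.
top = np.argsort(-model.predict(X[:1])[0])[:10]
print('top candidate PN indices:', top)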

 

Keywords: deep neural networks, automatic speech recognition, lexicon, out-of-vocabulary words.

 

- Bibliography

[Mikolov2013] Mikolov, T., Chen, K., Corrado, G. and Dean, J. 'Efficient estimation of word representations in vector space', Workshop at ICLR, 2013.

[Deng2013] Deng, L., Li, J., Huang, J.-T., Yao, K., Yu, D., Seide, F., Seltzer, M., Zweig, G., He, X., Williams, J., Gong, Y. and Acero, A. 'Recent advances in deep learning for speech research at Microsoft', Proceedings of ICASSP, 2013.

[Sheikh2016] Sheikh, I., Illina, I., Fohr, D., Linarès, G. 'Improved Neural Bag-of-Words Model to Retrieve Out-of-Vocabulary Words in Speech Recognition'. Interspeech, 2016.

[Li2017] Li, J., Ye, G., Zhao, R., Droppo, J., Gong, Y. 'Acoustic-to-Word Model without OOV', ASRU, 2017.

 

 

- Skills and profile: PhD in computer science, background in statistics, natural language processing, experience with deep learning tools (keras, kaldi, etc.) and programming skills (Perl, Python).

- Additional information:

 

Supervision and contact: Irina Illina, LORIA/INRIA (illina@loria.fr), Dominique Fohr INRIA/LORIA (dominique.fohr@loria.fr) https://members.loria.fr/IIllina/, https://members.loria.fr/DFohr/

 

Additional links : Ecole Doctorale IAEM Lorraine

 

Deadline to apply: May 20th

Selection results: end of June

 

Duration: 12 months.

Starting date: between Nov. 1st 2018 and Jan. 1st 2019
Salary: about 2,115 euros net, medical insurance included

 

The candidates must have defended their PhD later than Sept. 1st 2016 and before the end of 2018. 

The candidates are required to provide the following documents in a single pdf or ZIP file: 

  • CV including a description of your research activities (2 pages max) and a short description of what you consider to be your best contributions and why (1 page max and 3 contributions max); the contributions could be theoretical or  practical. Web links to the contributions should be provided. Include also a brief description of your scientific and career projects, and your scientific positioning regarding the proposed subject.

  • The report(s) from your PhD external reviewer(s), if applicable.

  • If you haven't defended yet, the list of expected members of your PhD committee (if known) and the expected date of defence.

In addition, at least one recommendation letter from the PhD advisor should be sent directly by their author(s) to the prospective postdoc advisor.

 

Help and benefits:

 

  • Possibility of free French courses

  • Help for finding housing

  • Help for the resident card procedure and for husband/wife visa

Back  Top

6-42(2018-04-16) PhD grant, INRIA Nancy France
 
Natural language processing: adding new words to a speech recognition system using Deep Neural Networks
 
 
- Location: INRIA/LORIA Nancy Grand Est research center France
- Project-team: Multispeech
- Scientific Context:

Voice is seen as the next big field for computer interaction. The research company Gartner reckons that by 2018, 30% of all interactions with devices will be voice-based: people can speak up to four times faster than they can type, and the technology behind voice interaction is improving all the time.

As of October 2017, Amazon Echo is present in about 4% of American households. Voice assistants are proliferating in smartphones too: Apple's Siri handles over 2 billion commands a week, and 20% of Google searches on Android-powered handsets in America are done by voice input.

Proper nouns (PNs) play a particular role: they are often important for understanding a message and can vary enormously. For example, a voice assistant should know the names of all your friends; a search engine should know the names of all famous people and places, names of museums, etc.

An automatic speech recognition system uses a lexicon containing the most frequent words of the language, and only the words of the lexicon can be recognized by the system. It is impossible to add all possible proper names because there are millions of proper names and new ones appear every day. A competitive solution is to dynamically add new PNs into the ASR system. The idea is to add only relevant proper names: for instance, if we want to transcribe a video document about football results, we should add the names of famous football players and not politicians.

In this study, we will focus on the problem of proper names in automatic recognition systems. The problem is to find relevant proper names for the audio document we want to transcribe. To select the relevant proper names, we propose to use an artificial neural network.

- Missions:

We assume that in an audio document to transcribe we have missing proper names, i.e. proper names that are pronounced in the audio document but that are not in the lexicon of the automatic speech recognition system; these proper names cannot be recognized (out-of-vocabulary proper names, OOV PNs).

The goal of this PhD thesis is to find a list of relevant OOV PNs that correspond to an audio document and to integrate them into the speech recognition system. We will use a deep neural network (DNN) to find relevant OOV PNs. The input of the DNN will be the approximate transcription of the audio document and the output will be the list of relevant OOV PNs with their probabilities. The retrieved proper names will be added to the lexicon and a new recognition pass over the audio document will be performed.

During the thesis, the student will investigate methodologies based on deep neural networks [Deng2013]. The candidate will study different structures of DNN and different representations of documents [Mikolov2013]. The student will validate the proposed approaches using the radio broadcast transcription system developed in our team.
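As a purely illustrative starting point (not the method to be developed in the thesis), the sketch below shows a bag-of-words DNN in Keras that maps an approximate transcription to relevance probabilities over a fixed list of candidate proper names; the vocabulary size, layer sizes and toy data are assumptions introduced only for this example.

# Illustrative sketch: a bag-of-words DNN scoring candidate OOV proper names
# given an approximate transcription. Sizes and data are toy assumptions.
import numpy as np
from tensorflow.keras import layers, models

VOCAB_SIZE = 20000        # words of the transcription vocabulary
NUM_CANDIDATE_PNS = 500   # candidate OOV proper names to rank

model = models.Sequential([
    layers.Input(shape=(VOCAB_SIZE,)),                       # bag-of-words vector of the transcription
    layers.Dense(256, activation='relu'),
    layers.Dense(128, activation='relu'),
    layers.Dense(NUM_CANDIDATE_PNS, activation='sigmoid'),   # one relevance probability per proper name
])
model.compile(optimizer='adam', loss='binary_crossentropy')

# Toy training data standing in for (transcription, relevant-PN) pairs.
x = np.random.randint(0, 2, size=(32, VOCAB_SIZE)).astype('float32')
y = np.random.randint(0, 2, size=(32, NUM_CANDIDATE_PNS)).astype('float32')
model.fit(x, y, epochs=1, verbose=0)

# At test time, the highest-scoring proper names would be added to the ASR lexicon
# before a second recognition pass.
scores = model.predict(x[:1], verbose=0)[0]
print(np.argsort(-scores)[:10])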

- Bibliography:

 

[Mikolov2013] Mikolov, T., Chen, K., Corrado, G. and Dean, J. "Efficient estimation of word representations in vector space", Workshop at ICLR, 2013.

 

[Deng2013] Deng, L., Li, J., Huang, J.-T., Yao, K., Yu, D., Seide, F., Seltzer, M., Zweig, G., He, X., Williams, J., Gong, Y. and Acero, A. "Recent advances in deep learning for speech research at Microsoft", Proceedings of ICASSP, 2013.

 

[Sheikh2016] Sheikh, I., Illina, I., Fohr, D., Linarès, G. "Improved Neural Bag-of-Words Model to Retrieve Out-of-Vocabulary Words in Speech Recognition". Interspeech, 2016.

- Skills and profile: Master in computer science, background in statistics and natural language processing, experience with deep learning tools (Keras, Kaldi, etc.) and computer programming skills (Perl, Python).

- Additional information:

 

Supervision and contact: Irina Illina, LORIA/INRIA (illina@loria.fr), Dominique Fohr INRIA/LORIA (dominique.fohr@loria.fr) https://members.loria.fr/IIllina/, https://members.loria.fr/DFohr/

Additional links: Ecole Doctorale IAEM Lorraine

 

Duration: 3 years

Starting date: between Oct. 1st 2018 and Jan. 1st 2019

Deadline to apply : May 1st 2018

 

The candidates are required to provide the following documents in a single pdf or ZIP file: 

  • CV

  • A cover/motivation letter describing their interest in the topic 

  • Degree certificates and transcripts for Bachelor and Master (or the last 5 years)

  • Master thesis (or equivalent) if it is already completed, or a description of the work in progress, otherwise

  • The publications (or web links) of the candidate, if any (it is not expected that they have any)

In addition, one recommendation letter from the person who supervises(d) the Master thesis (or research project or internship) should be sent directly by his/her author to the prospective PhD advisor.

Back  Top

6-43(2018-04-17) PhD at LORIA Nancy France

Impact LUE Open Language and Knowledge for Citizens (OLKi)
Application for a PhD grant 2018 co-supervised by the Crem and the Loria
"Online hate speech against migrants"

 

Deadline to apply : May 1st 2018

 

According to the 2017 International Migration Report, the number of international migrants worldwide has continued to grow rapidly in recent years, reaching 258 million in 2017, up from 220 million in 2010 and 173 million in 2000. In 2017, 64 per cent of all international migrants worldwide (165 million people) lived in high-income countries; 78 million of them were residing in Europe. Since 2000, Germany and France have figured among the countries hosting the largest numbers of international migrants. A key reason for the difficulty of EU leaders to take a decisive and coherent approach to the refugee crisis has been the high level of public anxiety about immigration and asylum across Europe. Indeed, across the EU, attitudes towards asylum and immigration have hardened in recent years because of (Berri et al., 2015): (i) the increase in the number and visibility of migrants in recent years, (ii) the economic crisis and austerity policies enacted since the 2008 Global Financial Crisis, and (iii) the role of the mass media in influencing public and elite political attitudes towards asylum and migration. Refugees and migrants tend to be framed negatively as a problem, potentially nourishing hostility towards them.

Indeed, the BRICkS (Building Respect on the Internet by Combating Hate Speech) EU project (http://www.bricks-project.eu/wp/about-the-project/) has revealed a significant increase in the use of hate speech towards immigrants and minorities, who are often blamed for current economic and social problems. The participatory web and social media seem to accelerate this tendency, accentuated by the rapid online spread of fake news, which often corroborates online violence towards migrants. Based on existing research, Carla Schieb and Mike Preuss (2016) highlight that hate speech deepens prejudice and stereotypes in a society (Citron & Norton, 2011). It also has a detrimental effect on the mental health and emotional well-being of targeted groups, especially on targeted individuals (Festl & Quandt, 2013), and is a source of harm in general for those under attack (Waldron, 2012), when culminating in violent acts incited by hateful speech. Such violent hate crimes may erupt in the aftermath of certain key events, e.g. anti-Muslim hate crimes in response to the 9/11 terrorist attacks (King & Sutton, 2013).

Hate speech and fake news are not, of course, just problems of our times. Hate speech has always been part of antisocial behaviour such as bullying or stalking (Delgado & Stefancic, 2014); "trapped", emotional, unverified and/or biased contents have always existed (Dauphin, 2002; Froissart, 2002, 2004; Lebre, 2014) and need to be understood on an anthropological level as reflections of people's fears, anxieties or fantasies. They reveal what Marc Angenot calls a certain "state of society" (Angenot, 1978; 1989; 2006). Indeed, according to this author, the analysis of situated specific discourses sheds light on some of the topoi (common premises and patterns) that characterize public doxa. This "gnoseological" perspective reveals the ways in which visions of the "world" can be systematically schematized in linguistic materials at a certain moment.

Within this context, the PhD project jointly proposed by the Crem and the Loria aims to analyse hate speech towards migrants in social media, and more particularly on Twitter. It seeks to provide answers to the following questions:

  • What are the representations of migrants as they emerge in hate speech on Twitter?

  • What themes are they associated with?

  • What can the latter tell us about the "state" of our society, in the sense previously given to this term by Marc Angenot?

Secondary questions will also be addressed so as to refine the main results:

  • What is the origin of these messages (individual accounts, political party accounts, bots, etc.)?

  • What is the circulation of these messages (reactions, retweets, interactions, etc.)?

  • Can we measure the emotional dimension of these messages? Based on which indicators?

  • Can a scale be established to measure the intensity of hate in speech?
More and more audio/video/text content appears on the Internet each day; about 300 hours of multimedia are uploaded per minute. In these multimedia sources, manual content retrieval is difficult or impossible. The classical approach for spoken content retrieval from multimedia documents is automatic text retrieval, and automatic text classification is one of the technologies widely used for this purpose. In text classification, text documents are usually represented in a so-called vector space and then assigned to predefined classes through supervised machine learning. Each document is represented as a numerical vector, which is computed from the words of the document. How to numerically represent the terms in an appropriate way is a basic problem in text classification tasks and directly affects the classification accuracy. Sometimes the classes cannot be defined in advance; in this case, unsupervised machine learning is used and the challenge consists in finding underlying structures in unlabeled data. We will use these methodologies to perform one of the important tasks of text classification: automatic hate speech detection.
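To make the supervised setup above concrete, here is a minimal sketch using scikit-learn (an assumption made only for this illustration; the thesis will rely on the neural approaches discussed below): documents are turned into numerical vectors and a classifier assigns them to predefined classes. The toy texts and labels are invented for the example.

# Illustrative sketch of supervised text classification: vectorise documents,
# then assign them to predefined classes. Texts and labels are toy examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ['they are welcome here', 'send them all back', 'nice weather today', 'they ruin everything']
labels = [0, 1, 0, 1]   # 0 = non-hateful, 1 = hateful (toy annotation)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(['they should all go back']))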

Developments in neural networks (Mikolov et al., 2013a) led to a renewed interest in the field of distributional semantics, and more specifically in learning word embeddings (representations of words in a continuous space). Computational efficiency was one big factor that popularized word embeddings. Word embeddings capture syntactic as well as semantic properties of words (Mikolov et al., 2013b). As a result, they have outperformed several other word vector representations on different tasks (Baroni et al., 2014).
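As a small illustration of how such embeddings are learned, the sketch below trains a word2vec-style model with gensim (assuming gensim >= 4.0 and a toy corpus; a real system would train on large web or Twitter data).

# Illustrative word2vec training on a toy corpus (gensim >= 4.0 assumed).
from gensim.models import Word2Vec

sentences = [
    ['migrants', 'arrive', 'in', 'europe'],
    ['refugees', 'cross', 'the', 'border'],
    ['migrants', 'and', 'refugees', 'need', 'help'],
]
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, epochs=50)

# Words appearing in similar contexts end up close in the embedding space.
print(model.wv.most_similar('migrants', topn=3))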

Our methodology for hate speech classification will build on recent approaches to text classification with neural networks and word embeddings. In this context, fully connected feed-forward networks (Iyyer et al., 2015; Nam et al., 2014), Convolutional Neural Networks (CNN) (Kim, 2014; Johnson and Zhang, 2015) and also Recurrent/Recursive Neural Networks (RNN) (Dong et al., 2014) have been applied. On the one hand, the approaches based on CNN and RNN capture rich compositional information and have outperformed state-of-the-art results in text classification; on the other hand, they are computationally intensive and require careful hyperparameter selection and/or regularization (Dai and Le, 2015).
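For illustration, a minimal Kim (2014)-style CNN classifier over word embeddings could look as follows. This is a sketch assuming Keras; the vocabulary size, sequence length and filter sizes are arbitrary placeholders, not the architecture to be developed in the thesis.

# Illustrative CNN text classifier over word embeddings (Keras assumed).
from tensorflow.keras import layers, models

VOCAB_SIZE, MAX_LEN, EMB_DIM = 10000, 50, 100

model = models.Sequential([
    layers.Input(shape=(MAX_LEN,)),
    layers.Embedding(VOCAB_SIZE, EMB_DIM),      # word embeddings (possibly pre-trained)
    layers.Conv1D(128, 5, activation='relu'),   # convolution over windows of words
    layers.GlobalMaxPooling1D(),                # keep the strongest feature per filter
    layers.Dense(64, activation='relu'),
    layers.Dense(1, activation='sigmoid'),      # probability that the message is hateful
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()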

This thesis aims at proposing concepts, analyses and software components (hate speech domain-specific analysis and related software tools concerning migrants in social media) to bridge the gap between conceptual requirements and multi-source information from social media. Automatic hate speech detection software will be used in experiments modelling various hate speech phenomena and assessing their domain relevance with both partners. The language of the analysed messages will primarily be French, although links with other languages (including messages written in English) may appear throughout the analysis.

This PhD project complies with the Impact OLKi (Open Language and Knowledge for Citizens) framework because:

  • It is centred on language.

  • It aims to implement new methods to study and extract knowledge from linguistic data (indicators, scales of measurement).

  • It opens perspectives to produce technical solutions (applications, etc.) for citizens and digital platforms, to better control the potential negative use of language data.

Scientific challenges:

  • to study and extract knowledge from linguistic data concerning hate speech towards migrants in social media;

  • to better understand hate speech as a social phenomenon, based on the data extracted and analysed;

  • to propose and assess new methods based on deep learning for the automatic detection of documents containing hate speech. This will make it possible to set up an online hate speech management protocol.

Keywords: hate speech, migrants, social media, natural language processing.
Doctoral school: Computer Science (IAEM)
Principal supervisor: Irina Illina, Assistant Professor in Computer Science, irina.illina@loria.fr
Co-supervisors (Crem and Loria):
Angeliki Monnier, Professor in Information-Communication, angeliki.monnier@univ-lorraine.fr
Dominique Fohr, CNRS research scientist, dominique.fohr@loria.fr

References
Angenot M (1978) Fonctions narratives et maximes idéologiques. Orbis Litterarum 33: 95-100.
Angenot M (1989) 1889 : un état du discours social. Montréal : Préambule.
Angenot M (2006) Théorie du discours social. Notions de topographie des discours et de coupures cognitives, COnTEXTES. https://contextes.revues.org/51.
Baroni, M., Dinu, G., and Kruszewski, G. (2014). "Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors". In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, Volume 1, pages 238-247.
Berri M, Garcia-Blanco I, Moore K (2015). Press Coverage of the Refugee and Migrant Crisis in the EU: A Content Analysis of Five European Countries. Report prepared for the United Nations High Commission for Refugees, Cardiff School of Journalism, Media and Cultural Studies.
Chouliaraki L, Georgiou M and Zaborowski R (2017). The European "migration crisis" and the media: A cross-European press content analysis. The London School of Economics and Political Science, London, UK.
Citron, D. K., Norton, H. L. (2011). "Intermediaries and hate speech: Fostering digital citizenship for our information age". Boston University Law Review, 91, 1435.
Dai, A. M. and Le, Q. V. (2015). "Semi-supervised sequence learning". In Cortes, C., Lawrence, N. D., Lee, D. D., Sugiyama, M., and Garnett, R., editors, Advances in Neural Information Processing Systems 28, pages 3061-3069. Curran Associates, Inc.
Dauphin F (2002). Rumeurs électroniques : synergie entre technologie et archaïsme. Sociétés 76: 71-87.
Delgado R., Stefancic J. (2014). "Hate speech in cyberspace". Wake Forest Law Review, 49.
Dong, L., Wei, F., Tan, C., Tang, D., Zhou, M., and Xu, K. (2014). "Adaptive recursive neural network for target-dependent Twitter sentiment classification". In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL, Baltimore, MD, USA, Volume 2, pages 49-54.
Festl R., Quandt T. (2013). Social relations and cyberbullying: The influence of individual and structural attributes on victimization and perpetration via the internet. Human Communication Research, 39(1), 101-126.
Froissart P (2002). Les images rumorales, une nouvelle imagerie populaire sur Internet. Bry-Sur-Marne : INA.
Froissart P (2004). Des images rumorales en captivité : émergence d'une nouvelle catégorie de rumeur sur les sites de référence sur Internet. Protée 32(3): 47-55.
Johnson, R. and Zhang, T. (2015). "Effective use of word order for text categorization with convolutional neural networks". In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 103-112.
Iyyer, M., Manjunatha, V., Boyd-Graber, J., and Daumé, H. (2015). "Deep unordered composition rivals syntactic methods for text classification". In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, Volume 1, pages 1681-1691.
Kim, Y. (2014). "Convolutional neural networks for sentence classification". In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1746-1751.
King R. D., Sutton G. M. (2013). High times for hate crimes: Explaining the temporal clustering of hate-motivated offending. Criminology, 51(4), 871-894.
Lebre J (2014). Des idées partout : à propos du partage des hoaxes entre droite et extrême droite. Lignes 45: 153-162.
Mikolov, T., Yih, W.-t., and Zweig, G. (2013a). "Linguistic regularities in continuous space word representations". In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 746-751.
Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. (2013b). "Distributed representations of words and phrases and their compositionality". In Advances in Neural Information Processing Systems 26, pages 3111-3119. Curran Associates, Inc.
Nam, J., Kim, J., Loza Mencía, E., Gurevych, I., and Fürnkranz, J. (2014). "Large-scale multi-label text classification - revisiting neural networks". In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD-14), Part 2, Volume 8725, pages 437-452.
Schieb C, Preuss M (2016). Governing Hate Speech by Means of Counter Speech on Facebook. 66th ICA Annual Conference, Fukuoka, Japan.
United Nations (2018). International Migration Report 2017: Highlights. New York, Department of Economic and Social Affairs.
Waldron J. (2012). The Harm in Hate Speech. Harvard University Press.

Back  Top

6-44(2018-04-17) PhD grant at Loria, Nancy France

Thesis title: Expressive speech synthesis based on deep learning
 
Location: INRIA Nancy Grand Est research center / LORIA Laboratory, Nancy, France
Research theme: Perception, Cognition, Interaction
Project-team: MULTISPEECH (https://team.inria.fr/multispeech/)
Scientific Context:
Over the last decades, text-to-speech synthesis (TTS) has reached good quality and intelligibility, and is now commonly used in information delivery services, for example in call center automation, in navigation systems, and in voice assistants. In the past, the main goal when developing TTS systems was to achieve high intelligibility. The speech style was then typically a “reading style”, which resulted from the style of the speech data used to develop TTS systems (reading of a large set of sentences). Although a reading style is acceptable for occasional interactions, TTS systems should benefit from more variability and expressivity in the generated synthetic speech, for example for lengthy interactions between machines and humans, or for entertainment applications. This is the goal of recent or emerging research on expressive speech synthesis. Contrary to neutral speech, which is typically read speech not conveying any particular emotion, expressive speech can be defined as speech carrying an emotion, or spoken as in spontaneous speech, or also as speech with emphasis placed on some words.
Missions (objectives, approach, etc.):
Deep learning approaches lead to good speech synthesis quality; however, the main scientific and technological barrier remains the necessity of having a speech corpus corresponding to the speaker and the target style conditions, here expressive speech. This thesis aims at investigating approaches to overcome this barrier. More precisely, the objective is to propose and investigate approaches allowing expressive speech synthesis for a given speaker voice, using both the neutral speech data of that speaker, or the corresponding neutral speech model, and expressive speech data from other speakers. This will avoid the lengthy and costly recording of specific ad hoc expressive speech corpora (e.g., emotional speech data from the target voice speaker).
Let us recall that three main steps are involved in parametric speech synthesis: the generation of sequences of basic units (phonemes, pauses, etc.) from the source text; the generation of prosody parameters (durations of sounds, pitch values, etc.); and finally the generation of acoustic parameters, which leads to the synthetic speech signal. All these levels are involved in expressive speech synthesis: alteration of pronunciations and presence of pauses, modification of prosody correlates, and modification of the spectral characteristics. The thesis will essentially focus on the last two points, i.e., a correct prediction of prosody and spectral characteristics to produce expressive speech through deep learning-based approaches. Some aspects to be investigated include the combined use of only the neutral speech data of the target voice speaker and expressive speech of other speakers in the training process, or in an adaptation process, as well as data augmentation processes. The baseline experiments will rely on neutral speech corpora and expressive speech corpora previously collected for speech synthesis in the Multispeech team. Further experiments will consider using other expressive speech data, possibly extracted from audiobooks.
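To make the prosody-prediction step concrete, here is a minimal sketch (assuming Keras; feature dimensions and data are invented for the example) of a DNN mapping per-phoneme linguistic features to prosody targets such as duration and pitch. The architectures actually studied in the thesis may differ substantially.

# Illustrative prosody-prediction DNN: per-phoneme linguistic features -> prosody targets.
import numpy as np
from tensorflow.keras import layers, models

N_LINGUISTIC_FEATURES = 40   # e.g. phoneme identity, position, stress, emotion label
N_PROSODY_TARGETS = 2        # e.g. phoneme duration and mean pitch

model = models.Sequential([
    layers.Input(shape=(N_LINGUISTIC_FEATURES,)),
    layers.Dense(128, activation='relu'),
    layers.Dense(128, activation='relu'),
    layers.Dense(N_PROSODY_TARGETS),   # regression outputs
])
model.compile(optimizer='adam', loss='mse')

# Toy data standing in for aligned (linguistic features, prosody) training pairs.
x = np.random.rand(64, N_LINGUISTIC_FEATURES).astype('float32')
y = np.random.rand(64, N_PROSODY_TARGETS).astype('float32')
model.fit(x, y, epochs=1, verbose=0)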
Skills and profile:
  • Master in automatic language processing or in computer science
  • Background in statistics and in deep learning
  • Experience with deep learning tools
  • Good computer skills (preferably in Python)
  • Experience in speech synthesis is a plus
 
Bibliography:
[Sch01] M. Schröder. Emotional speech synthesis: A review. Proc. EUROSPEECH, 2001.
[Sch09] M. Schröder. Expressive speech synthesis: Past, present, and possible futures. Affective Information Processing, pp. 111–126, 2009.
[ICHY03] A. Iida, N. Campbell, F. Higuchi and M. Yasumura. A corpus-based speech synthesis system with emotion. Speech Communication, vol. 40, n. 1, pp. 161–187, 2003.
[PBE+06] J.F. Pitrelli, R. Bakis, E.M. Eide, R. Fernandez, W. Hamza and M.A. Picheny. The IBM expressive text-to-speech synthesis system for American English. IEEE Transactions on Audio, Speech, and Language Processing, vol. 14, n. 4, pp. 1099–1108, 2006.
[JZSC05] D. Jiang, W. Zhang, L. Shen and L. Cai. Prosody analysis and modeling for emotional speech synthesis. Proc. ICASSP, 2005.
[WSV+15] Z. Wu, P. Swietojanski, C. Veaux, S. Renals, S. King. A study of speaker adaptation for DNN-based speech synthesis. Proc. INTERSPEECH, pp. 879–883, 2015.
Additional information:
Supervision and contact: Denis Jouvet (denis.jouvet@loria.fr; https://members.loria.fr/DJouvet/), Vincent Colotte (Vincent.colotte@loria.fr; https://members.loria.fr/VColotte/)
Additional link: Ecole Doctorale IAEM Lorraine (http://iaem.univ-lorraine.fr/)
Duration: 3 years
Starting date: autumn 2018
Deadline to apply: May 1st, 2018
The candidates are required to provide the following documents in a single pdf or ZIP file:
  • CV
  • A cover/motivation letter describing their interest in the topic
  • Degree certificates and transcripts for Bachelor and Master (or the last 5 years)
  • Master thesis (or equivalent) if it is already completed, or a description of the work in progress, otherwise
  • The publications (or web links) of the candidate, if any (it is not expected that they have any)
In addition, one recommendation letter from the person who supervises(d) the Master thesis (or research project or internship) should be sent directly by his/her author to the prospective PhD advisor.

Back  Top

6-45(2018-04-19) PhD at LeMans University, France

Title of the PhD thesis:

 

Automatic speech processing in meetings

using microphone array

 

Key words: reverberant environments – arrays & beamforming – signal processing – deep learning – transcription and speaker recognition

 

 

Supervision : Silvio Montrésor (LAUM), Anthony Larcher (LIUM), Jean-Hugh Thomas (LAUM)

 

Funding: LMAC (Scientific bets of Le Mans Acoustique)

 

Beginning : September 2018

 

Contact : jean-hugh.thomas@univ-lemans.fr

 

Aim of the PhD thesis

The subject is supported by two laboratories of Le Mans Université: the acoustics lab (LAUM) and the computer science lab (LIUM). The aim is to enhance automatic speech processing in meetings (transcription and speaker recognition) by using a recording device and audio signal processing based on a microphone array.

 

 

Subject of the PhD thesis

It consists in implementing a hands-free system able to localise the speakers in a room, to separate the signals emitted by these speakers and to enhance the speech signal and its processing.

           

The thesis addresses the following issues:

 

-       Define an array geometry adapted to distant sound recording with few microphones.

 

-       Propose processing methods able to take advantage of the acoustic data provided by the array and to select the parts of the audio signals (reflection orders) that are most relevant for enhancing the performance of the LIUM automatic speech recognition system (a basic beamforming sketch is given after this list). The processing should take into account the confined environment (meeting room). It will also use source separation algorithms to identify the different speakers during the meeting.

 

-       Propose new developments of the usual feature extraction methods so that the features fed to the neural network are more relevant.

 

-       Propose a learning strategy for the neural network to enhance the transcription performance.
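As referred to above, a very basic way to exploit the array geometry is delay-and-sum beamforming. The sketch below (NumPy assumed; the 4-microphone linear array, sampling rate and steering angle are illustrative) aligns and averages the channels for a chosen direction. It is only a baseline illustration, not the processing to be designed in the thesis.

# Illustrative delay-and-sum beamformer for a linear microphone array.
import numpy as np

def delay_and_sum(signals, mic_positions, angle_deg, fs, c=343.0):
    # signals: (n_mics, n_samples); mic_positions: positions along the array axis (metres)
    angle = np.deg2rad(angle_deg)
    delays = mic_positions * np.sin(angle) / c          # time delay of arrival per microphone
    delay_samples = np.round(delays * fs).astype(int)
    out = np.zeros(signals.shape[1])
    for m in range(signals.shape[0]):
        out += np.roll(signals[m], -delay_samples[m])   # integer-sample alignment
    return out / signals.shape[0]

# Toy usage: 4 microphones spaced 5 cm apart, 16 kHz audio, steering towards 30 degrees.
fs = 16000
mics = np.arange(4) * 0.05
x = np.random.randn(4, fs)            # stand-in for one second of multichannel audio
y = delay_and_sum(x, mics, 30.0, fs)
print(y.shape)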

 

Some references

[1] J. H. L. Hansen, T. Hasan, Speaker recognition by machines and humans, IEEE Signal Processing Magazine, 74, 2015.

 

[2] L. Deng, G. Hinton, B. Kingsbury, New types of deep neural network learning for speech recognition and related applications: An overview, 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (pp. 8599-8603).

 

[3] G. Hinton, L. Deng, D. Yu, G. E. Dahl, A.-r. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T. N. Sainath, B. Kingsbury, Deep neural networks for acoustic modelling in speech recognition, IEEE Signal Processing Magazine, 82, 2012.

[4] P. Bell, M. J. F. Gales, T. Hain, J. Kilgour, P. Lanchantin, X. Liu, A. McParland, S. Renals, O. Saz, M. Wester, et al. The MGB challenge: Evaluating multi-genre broadcast media recognition. Proc. of ASRU, Arizona, USA, 2015.

 

[5] T. B. Spalt, Background noise reduction in wind tunnels using adaptive noise cancellation and cepstral echo removal techniques for microphone array applications, Master of Science in Mechanical Engineering, Hampton, Virginia, USA, 2010.

 

[6] D. Blacodon, J. Bulté, Reverberation cancellation in a closed test section of a wind tunnel using a multi-microphone cepstral method, Journal of Sound and Vibration 333, 2669-2687 (2014).

 

[7] Q.-G. Liu, B. Champagne, P. Kabal, A microphone array processing technique for speech enhancement in a reverberant space, Speech Communication 18 (1996) 317-334.

 

[8] S. Doclo, Multi-microphone noise reduction and de-reverberation techniques for speech applications, S. Doclo, Thesis, Leuven (Belgium), 2003.

 

[9] Y. Liu, N. Nower, S. Morita, M. Unoki, Speech enhancement of instantaneous amplitude and phase for applications in noisy reverberant environments, Speech Communication 84 (2016) 1-14.

 

[10] Feng, X., Zhang, Y., & Glass, J. (2014, May). Speech feature denoising and dereverberation via deep autoencoders for noisy reverberant speech recognition. In 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 1759-1763). IEEE.

[11] Kinoshita, K., Delcroix, M., Yoshioka, T., Nakatani, T., Sehr, A., Kellermann, W., & Maas, R. (2013, October). The REVERB challenge: A common evaluation framework for dereverberation and recognition of reverberant speech. In 2013 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (pp. 1-4). IEEE.

 

[12] Xiong X., Watanabe S., Erdogan H., Lu L., Hershey J., Seltzer M. L., Chen G., Zhang Y., Mandel M., Yu D., Deep Beamforming Networks for Multi-Channel Speech Recognition, Proceedings of ICASSP 2016, pp 5745-5749.

Back  Top

6-46(2018-04-15) PhD Project Australia-France


PhD Project – Call for Applications Situated Learning for Collaboration across Language Barriers

People working in development are often deployed to remote locations where they work alongside locals who speak an unwritten minority language. Outsiders and locals share knowhow and pick up phrases in each other's languages. They are performing a type of situated learning of language and culture. This situation is found across the world, in developing countries, border zones, and in indigenous communities. This project will develop computational tools to help people work together across language barriers. The research will be evaluated in terms of the quality of the social interaction, the mutual acquisition of language and culture, the effectiveness of cross-lingual collaboration, and the quantity of translated speech data collected. The ultimate goal is to contribute to the grand task of documenting the world's languages. The project will involve working between France and Australia, and will include fieldwork with a remote indigenous community. We're looking for outstanding and highly motivated candidates to work on a PhD on this subject. Competencies in two or more of the following areas are mandatory:

• machine learning for natural language processing;

• speech processing for interactive systems;

• participatory design;

• mobile software development;

• documenting and describing unwritten languages.

The project will build on previous work in the following areas: mobile platforms for collecting spoken language data [6, 7]; respeaking as a technique for improving the value of recordings made ‘in the wild’ and an alternative to traditional transcription practices [12, 13]; machine learning of structure in phrase-aligned bilingual speech recordings [2, 3, 4, 8, 9, 10, 11]; participatory design of mobile technologies for working with minority languages [5]; managing multilingual databases of text, speech and images [1]. Some recent indicative PhD theses include: Computer Supported Collaborative Language Documentation (Florian Hanke, 2017); Automatic Understanding of Unwritten Languages (Oliver Adams, 2018); Collecter, Transcrire, Analyser : quand la Machine Assiste le Linguiste dans son Travail de Terrain (Elodie Gauthier, 2018); Enriching Endangered Language Resources using Translations (Antonios Anastasopoulos, in prep); Digital Tool Deployment for Language Documentation (Mat Bettinson, in prep); Bayesian and Neural Modeling for Multi Level and Crosslingual Alignment (Pierre Godard, in prep).
Details of the position. Funding includes remission of university fees, a stipend of approximately €17,500 per year, and a travel allowance. The position starts in Fall 2018 (i.e. from September) and lasts for three years. The research will be supervised by Steven Bird (Charles Darwin University, Australia) and Laurent Besacier (Univ. Grenoble Alpes, France). Acceptance will be subject to approval by both host institutions (Grenoble and Darwin). Given the cross-cultural nature of the project, the successful candidate will have demonstrated substantial experience of cross-cultural living.


Apply. To apply, please contact laurent.besacier@univ-grenoble-alpes.fr and steven.bird@cdu.edu.au, including a cover letter, curriculum vitae, academic transcripts and a reference letter by your MSc thesis advisor.


Institutions. The University of Grenoble offers an excellent research environment with ample compute hardware to solve hard speech and natural language processing problems, as well as remarkable surroundings to explore over the weekends. Charles Darwin University is a research-intensive university attracting students from over 50 countries. CDU is situated in Australia's tropical north, in the midst of one of the world's hot-spots for linguistic diversity and language endangerment. Darwin is a youthful, multicultural, cosmopolitan city in a territory that is steeped in Aboriginal tradition and culture and which enjoys a close interaction with the peoples of Southeast Asia.


References
[1] Steven Abney and Steven Bird. The Human Language Project: building a universal corpus of the world's languages. In Proceedings of the 48th Meeting of the Association for Computational Linguistics, pages 88-97. ACL, 2010.
[2] Oliver Adams, Graham Neubig, Trevor Cohn, and Steven Bird. Learning a translation model from word lattices. In Interspeech 2016, pages 2518-22, 2016.
[3] Antonios Anastasopoulos, Sameer Bansal, David Chiang, Sharon Goldwater, and Adam Lopez. Spoken term discovery for language documentation using translations. In Proceedings of the Workshop on Speech-Centric NLP, pages 53-58, 2017.
[4] Antonios Anastasopoulos and David Chiang. A case study on using speech-to-translation alignments for language documentation. In Proc. Workshop on Use of Computational Methods in Study of Endangered Languages, pages 170-178, 2017.
[5] Steven Bird. Designing mobile applications for endangered languages. In Kenneth Rehg and Lyle Campbell, editors, Oxford Handbook of Endangered Languages. Oxford University Press, 2018.
[6] Steven Bird, Florian R. Hanke, Oliver Adams, and Haejoong Lee. Aikuma: A mobile app for collaborative language documentation. In Proceedings of the Workshop on the Use of Computational Methods in the Study of Endangered Languages. ACL, 2014.
[7] David Blachon, Elodie Gauthier, Laurent Besacier, Guy-Noël Kouarata, Martine Adda-Decker, and Annie Rialland. Parallel speech collection for under-resourced language studies using the Lig-Aikuma mobile device app. In Proceedings of the Fifth Workshop on Spoken Language Technologies for Under-resourced Languages, volume 81, pages 61-66, 2016.
[8] V. H. Do, N. F. Chen, B. P. Lim, and M. A. Hasegawa-Johnson. Multitask learning for phone recognition of underresourced languages using mismatched transcription. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 26:501-514, 2018.
[9] Ewan Dunbar, Xuan Nga Cao, Juan Benjumea, Julien Karadayi, Mathieu Bernard, Laurent Besacier, Xavier Anguera, and Emmanuel Dupoux. The zero resource speech challenge 2017. In Automatic Speech Recognition and Understanding (ASRU), 2017 IEEE Workshop on. IEEE.
[10] Long Duong, Antonios Anastasopoulos, David Chiang, Steven Bird, and Trevor Cohn. An attentional model for speech translation without transcription. In Proceedings of the 15th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 949-959, 2016.
[11] Pierre Godard, Gilles Adda, Martine Adda-Decker, Alexandre Allauzen, Laurent Besacier, Helene Bonneau-Maynard, Guy-Noël Kouarata, Kevin Löser, Annie Rialland, and François Yvon. Preliminary experiments on unsupervised word discovery in Mboshi. In Interspeech 2016, 2016.
[12] Mark Liberman, Jiahong Yuan, Andreas Stolcke, Wen Wang, and Vikramjit Mitra. Using multiple versions of speech input in phone recognition. In ICASSP, pages 7591-95. IEEE, 2013.
[13] Anthony C. Woodbury. Defining documentary linguistics. Language Documentation and Description, 1:35-51, 2003.

Back  Top

6-47(2018-04-19) Joint PhD, Rennes/Dublin

Funded joint PhD between Univ Rennes and DIT, Dublin. The subject is about  « Deep neural natural language style transfer ».

Back  Top

6-48(2018-05-14) Assistant linguist (French)

Assistant Linguist [French]

 

Job Title:

Assistant Linguist [French]

Linguistic Field(s):

Phonetics, Phonology, Morphology, Semantics, Syntax, Lexicography, NLP

Location:

Paris, France

Job description:

The role of the Assistant Linguist is to annotate and review linguistic data in French. The Assistant Linguist will also contribute to a number of other tasks to improve natural language processing. The tasks include:

  • Providing phonetic/phonemic transcription of lexicon entries

  • Analyzing acoustic data to evaluate speech synthesis

  • Annotating and reviewing linguistic data

  • Labeling text for disambiguation, expansion, and text normalization

  • Annotating lexicon entries according to guidelines

  • Evaluating current system outputs

  • Deriving NLP data for new and on-going projects

  • Be able to work independently with confidence and little oversight

Minimum Requirements:

  • Native speaker of French and fluent in English

  • Extensive knowledge of phonetic/phonemic transcriptions

  • Familiarity with TTS tools and techniques

  • Experience in annotation work

  • Knowledge of phonetics, phonology, semantics, syntax, morphology or lexicography

  • Excellent oral and written communication skills

  • Attention to detail and good organizational skills

Desired Skills:

  • Degree in Linguistics or Computational Linguistics or Speech processing

  • Ability to quickly grasp technical concepts; learn in-house tools

  • Keen interest in technology and computer-literate

  • Listening Skills

  • Fast and Accurate Keyboard Typing Skills

  • Familiarity with Transcription Software

  • Editing, Grammar Check and Proofing Skills

  • Research Skills

 

CV + motivation letter in English: maroussia.houimli@adeccooutsourcing.fr

Back  Top

6-49(2018-05-20) Postdoc position in social robotics, Uppsala University, Sweden

** Postdoc position in social robotics**

Uppsala Social Robotics Lab

Department of Information Technology

Uppsala University, Sweden

 

Uppsala University is an international research university focused on the development of science and education. Our most important assets are all the individuals who with their curiosity and their dedication make Uppsala University one of Sweden's most exciting work places. Uppsala University has 45,000 students, 6,800 employees and a turnover of SEK 6,300 million. The Department of Information Technology (http://www.it.uu.se/first?lang=en) is, with approximately 275 employees, including 110 senior faculty and 120 PhD students, and more than 4000 students enrolled annually, one of Uppsala University's largest departments.

 

The Uppsala Social Robotics Lab (http://hri.research.it.uu.se/) led by Dr. Ginevra Castellano aims to design and develop robots that learn to interact socially with humans and bring benefits to the society we live in, for example in application areas such as education and assistive technology.

 

We are receiving expressions of interest for an upcoming two-year postdoctoral researcher position in social robotics, specifically on the topic of social learning for co-adaptive social human-robot interactions.

 

The postdoctoral researcher will have the opportunity to work in one or more projects on personalised and co-adaptive human-robot interaction, funded by the Swedish Research Council and the Swedish Foundation for Strategic Research, in collaboration with KTH Stockholm and the University of Gothenburg.

 

The researcher will be part of the Uppsala Social Robotics Lab at the Division of Visual Information and Interaction of the Department of Information Technology.

The Uppsala Social Robotics Lab’s focus is on natural interaction with social artefacts such as robots and embodied virtual agents. This domain concerns bringing together multidisciplinary expertise to address new challenges in the area of social robotics, including mutual human-robot co-adaptation, multimodal multiparty natural interaction with social robots, multimodal human affect and social behavior recognition, multimodal expression generation, robot learning from users, behavior personalization, effects of embodiment (physical robot versus embodied virtual agent) and other fundamental aspects of human-robot interaction (HRI). State of the art robots are used, including the Pepper, Nao and Furhat robotic platforms. The Lab is involved in a number of different national and EU-funded projects in collaborations with international partners.

 

How to send expressions of interest:

To express their interest, candidates should submit a CV, a 1-page research statement and a cover letter (indicating the name of referees and the earliest possible start date) to Ginevra Castellano (ginevra.castellano@it.uu.se) by the 31st of May.

 

Requirements:

Qualifications: The candidates must have a PhD degree in human-robot interaction or related areas relevant to the postdoc topic. Good programming skills and the ability to conduct user studies are required. The position is highly interdisciplinary and requires an understanding of and/or interest in psychology and social sciences. Experience in machine learning for human-robot interaction is appreciated.

Back  Top

6-50( 2018-05-20) Post Doctoral Position (12 months), INRIA, Nancy, France
 
Post Doctoral Position (12 months)

Natural language processing: automatic speech recognition system using deep neural networks without out-of-vocabulary words

_______________________________________

- Location: INRIA Nancy Grand Est research center, France

 

- Research theme: PERCEPTION, COGNITION, INTERACTION

 

- Project-team: Multispeech

 

Deadline to apply: June 6th


- Scientific Context:

 

More and more audio/video content appears on the Internet each day. About 300 hours of multimedia are uploaded per minute. In these multimedia sources, audio data represents a very important part. If these documents are not transcribed, automatic content retrieval is difficult or impossible. The classical approach for spoken content retrieval from audio documents is automatic speech recognition followed by text retrieval.

 

An automatic speech recognition system (ASR) uses a lexicon containing the most frequent words of the language, and only the words of the lexicon can be recognized by the system. New proper names (PNs) appear constantly, requiring dynamic updates of the lexicons used by the ASR. These PNs evolve over time and no vocabulary will ever contain all existing PNs. When a person searches for a document, proper names are used in the query. If these PNs have not been recognized, the document cannot be found. These missing PNs can be very important for the understanding of the document.

 

In this study, we will focus on the problem of proper names in automatic recognition systems. The problem is how to model relevant proper names for the audio document we want to transcribe.

 

- Missions:

 

We assume that in an audio document to transcribe we have missing proper names, i.e. proper names that are pronounced in the audio document but that are not in the lexicon of the automatic speech recognition system; these proper names cannot be recognized (out-of-vocabulary proper names, OOV PNs). The purpose of this work is to design a methodology for finding and modelling a list of relevant OOV PNs that correspond to an audio document.

 

Assuming that we have an approximate transcription of the audio document and a huge text corpus extracted from the Internet, several methodologies could be studied:

  • From the approximate OOV pronunciation in the transcription, generate the possible writings of the word (phoneme-to-character conversion) and search for this word in the text corpus.

  • A deep neural network can be designed to predict OOV proper names and their pronunciations with the training objective to maximize the retrieval of relevant OOV proper names.

 

The proposed approaches will be validated using the ASR developed in our team.
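As a simple illustration of ranking candidate proper names against an approximate transcription (much simpler than a trained DNN, and only loosely related in spirit to the neural bag-of-words retrieval of [Sheikh2016]), candidate names could be scored by the similarity between their embedding and an averaged embedding of the transcription. In the sketch below, the embeddings, vocabulary and names are toy stand-ins, not project data.

# Illustrative ranking of candidate OOV proper names against an approximate transcription.
import numpy as np

rng = np.random.default_rng(0)
EMB_DIM = 50
vocab = ['match', 'goal', 'league', 'election', 'parliament']
candidate_pns = ['Mbappe', 'Ronaldo', 'Macron']

# Stand-in embeddings; a real system would use embeddings trained on large corpora.
emb = {w: rng.standard_normal(EMB_DIM) for w in vocab + candidate_pns}

def avg_embedding(words):
    return np.mean([emb[w] for w in words if w in emb], axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

transcription = ['match', 'goal', 'league']   # approximate ASR output
doc_vec = avg_embedding(transcription)
ranking = sorted(candidate_pns, key=lambda pn: cosine(emb[pn], doc_vec), reverse=True)
print(ranking)   # the top-ranked names would be added to the ASR lexicon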

 

Keywords: deep neural networks, automatic speech recognition, lexicon, out-of-vocabulary words.

 

- Bibliography

[Mikolov2013] Mikolov, T., Chen, K., Corrado, G. and Dean, J. "Efficient estimation of word representations in vector space", Workshop at ICLR, 2013.

[Deng2013] Deng, L., Li, J., Huang, J.-T., Yao, K., Yu, D., Seide, F., Seltzer, M., Zweig, G., He, X., Williams, J., Gong, Y. and Acero, A. "Recent advances in deep learning for speech research at Microsoft", Proceedings of ICASSP, 2013.

[Sheikh2016] Sheikh, I., Illina, I., Fohr, D., Linarès, G. "Improved Neural Bag-of-Words Model to Retrieve Out-of-Vocabulary Words in Speech Recognition". Interspeech, 2016.

[Li2017] J. Li, G. Ye, R. Zhao, J. Droppo, Y. Gong, "Acoustic-to-Word Model without OOV", ASRU, 2017.

 

 

- Skills and profile: PhD in computer science, background in statistics and natural language processing, experience with deep learning tools (Keras, Kaldi, etc.) and computer programming skills (Perl, Python).

- Additional information:

 

Supervision and contact: Irina Illina, LORIA/INRIA (illina@loria.fr), Dominique Fohr INRIA/LORIA (dominique.fohr@loria.fr) https://members.loria.fr/IIllina/, https://members.loria.fr/DFohr/

 

Additional links : Ecole Doctorale IAEM Lorraine

 

Deadline to apply: June 6th

Selection results: end of June

 

Duration: 12 months.

Starting date: between Nov. 1st 2018 and Jan. 1st 2019
Salary: about 2,115 euros net, medical insurance included

 

The candidates must have defended their PhD later than Sept. 1st 2016 and before the end of 2018. 

The candidates are required to provide the following documents in a single pdf or ZIP file: 

  • CV including a description of your research activities (2 pages max) and a short description of what you consider to be your best contributions and why (1 page max and 3 contributions max); the contributions could be theoretical or  practical. Web links to the contributions should be provided. Include also a brief description of your scientific and career projects, and your scientific positioning regarding the proposed subject.

  • The report(s) from your PhD external reviewer(s), if applicable.

  • If you haven't defended yet, the list of expected members of your PhD committee (if known) and the expected date of defence.

In addition, at least one recommendation letter from the PhD advisor should be sent directly by their author(s) to the prospective postdoc advisor.

 

Help and benefits:

 

  • Possibility of free French courses

  • Help for finding housing

  • Help for the resident card procedure and for husband/wife visa

Back  Top

6-51(2018-05-23) Postdoc and PhD positions at Saarland University, Germany

Postdoc and PhD positions at Saarland University
http://www.sfb1102.uni-saarland.de/

The CRC Information Density and Linguistic Encoding (SFB 1102) at Saarland University
invites applications for a range of PhD and post-doctoral positions available for its
second funding phase (7/2018-6/2022).

The CRC includes 16 research projects drawing upon computational linguistics,
psycholinguistics, sociolinguistics, diachronic linguistics, phonetics, discourse
linguistics, contrastive linguistics and translatology. We are seeking to recruit 7
Postdocs and 15 PhD students.

For the phonetics community, projects C1 and C4 will be most relevant, but you may want
to have a look at the other projects too.

Details on the projects and positions as well as instructions for applications are
available at

http://www.sfb1102.uni-saarland.de/?page_id=57

Application deadline: June 20, 2018
Starting date: flexible

Back  Top

6-52(2018-05-24) Permanent Web Developer position at ELDA, Paris, France

The European Language resources Distribution Agency (ELDA), a company specialised in Human Language Technologies within an international context, is currently seeking to fill an immediate vacancy for a permanent Web Developer position.

Under the supervision of the technical department manager, the responsibilities of the Web Developer consist in designing and developing web applications and software tools for linguistic data management.
Some of these software developments are carried out within the framework of European research and development projects and are published as free software.
Depending on the profile, the Web Developer could also participate in the maintenance and upgrading of the current linguistic data processing toolchains, while being hands-on whenever required by the language resource production and management team.

Profile:

  • Bachelor of Science (BAC + 3 / BAC + 4) in Computer Science or a related field
  • Proficiency in Python (at least 3 years of experience)
  • Hands-on experience in Django
  • Hands-on knowledge of a distributed version control system (Git preferred)
  • Knowledge of SQL and of RDBMS (PostgreSQL preferred)
  • Basic knowledge of JavaScript and CSS
  • Basic knowledge of Linux shell scripting
  • Practice of free software
  • Experience in natural language processing is a strong plus
  • Proficiency in French and English, with writing and documentation skills in both languages
  • Curious, dynamic and communicative, flexible to work on different tasks in parallel
  • Ability to work independently and as part of a multidisciplinary team
  • Citizenship (or residency papers) of a European Union country


Applications will be considered until the position is filled. The position is based in Paris.

Salary: Commensurate with qualifications and experience.
Benefits: complementary medical insurance; meal vouchers.

Applicants should email a cover letter addressing the points listed above together with a curriculum vitae to:

ELDA
9, rue des Cordelières
75013 Paris
FRANCE
Mail : job@elda.org

ELDA is acting as the distribution agency of the European Language Resources Association (ELRA). ELRA was established in February 1995, with the support of the European Commission, to promote the development and exploitation of Language Resources (LRs). Language Resources include all data necessary for language engineering, such as monolingual and multilingual lexica, text corpora, speech databases and terminology. The role of this non-profit membership Association is to promote the production of LRs, to collect and to validate them and, foremost, make them available to users. The association also gathers information on market needs and trends.

For further information about ELDA/ELRA, visit:
http://www.elra.info

Back  Top

6-53(2018-05-26) Junior PhD in R&D (Computer Science): speech synthesis / artificial intelligence

JUNIOR PhD, R&D IN COMPUTER SCIENCE: Speech synthesis / Artificial Intelligence

RD2 Conseil is a recruitment agency specialised in finding young PhD holders for the R&D needs of innovative SMEs and private companies wishing to acquire cutting-edge scientific expertise and real human resources for innovation.

We are currently recruiting a junior PhD (M/F, first permanent contract) in Computer Science specialised in speech processing, in particular through the use of Artificial Intelligence techniques.

Our client is a startup, based in Paris and founded in 2014, which develops and manufactures an innovative technological product in the field of children's toys. The company has developed a technological, interactive object that awakens children's imagination without imposing images on them: a 'story factory' that lets the child select the parameters of the story to be told (main character, scene, key objects). The product is already sold by major retailers (Fnac, Nature et Découverte, Oxybulle…).

Our client now aims to strengthen its Research & Development activities through technological developments focused on: speech synthesis and speech recognition on the one hand, to allow better personalisation of the stories; and the development of an artificial intelligence on the other hand, through the automatic linking of themes and ideas. In this context, the company is seeking to recruit a junior PhD (first permanent contract required) in Computer Science (M/F) with strong skills in speech processing and artificial intelligence.

The candidate will work on speech synthesis and on voice-based human-machine interaction. The company first wishes to integrate the child's name directly into the stories told by the object. A text-to-speech tool is therefore needed whose quality is sufficient for (i) the child's name to be pronounced correctly, (ii) with a voice very similar to that of the narrator, and (iii) with a range of intonations matching the context of the story. The startup's founders have also identified the need to offer natural interactions with the object so that it can be used completely autonomously, which leads the company to study the possibility of controlling it directly by voice.

We are looking for a very autonomous, versatile candidate, able to make proposals, to be creative, and to take initiative and responsibility. Finally, our client will pay great attention to the candidate's sociability: in a small team with a particularly friendly atmosphere, it is essential that the candidate be sociable, dynamic, pleasant, and enjoy teamwork and interacting with others.

Location: Paris. Expected salary: depending on profile. If you think you are this person, hold a PhD and have never been employed on a permanent contract before (a strict requirement to meet the CIR criteria), please send your CV and cover letter by email to jesuisunjeunedocteur@rd2conseil.com, quoting reference LNI.

Back  Top

6-54(2018-05-29) PhD Project France-Australia

PhD Project – Call for Applications Situated Learning for Collaboration across Language Barriers

People working in development are often deployed to remote locations where they work alongside locals who speak an unwritten minority language. Outsiders and locals share knowhow and pick up phrases in each other's languages. They are performing a type of situated learning of language and culture. This situation is found across the world, in developing countries, border zones, and in indigenous communities. This project will develop computational tools to help people work together across language barriers. The research will be evaluated in terms of the quality of the social interaction, the mutual acquisition of language and culture, the effectiveness of cross-lingual collaboration, and the quantity of translated speech data collected. The ultimate goal is to contribute to the grand task of documenting the world's languages. The project will involve working between France and Australia, and will include fieldwork with a remote indigenous community. We're looking for outstanding and highly motivated candidates to work on a PhD on this subject. Competencies in two or more of the following areas are mandatory:

• machine learning for natural language processing;

• speech processing for interactive systems;

• participatory design;

• mobile software development;

• documenting and describing unwritten languages.

The project will build on previous work in the following areas: mobile platforms for collecting spoken language data [6, 7]; respeaking as a technique for improving the value of recordings made ‘in the wild’ and an alternative to traditional transcription practices [12, 13]; machine learning of structure in phrase-aligned bilingual speech recordings [2, 3, 4, 8, 9, 10, 11]; participatory design of mobile technologies for working with minority languages [5]; managing multilingual databases of text, speech and images [1]. Some recent indicative PhD theses include: Computer Supported Collaborative Language Documentation (Florian Hanke, 2017); Automatic Understanding of Unwritten Languages (Oliver Adams, 2018); Collecter, Transcrire, Analyser : quand la Machine Assiste le Linguiste dans son Travail de Terrain (Elodie Gauthier, 2018); Enriching Endangered Language Resources using Translations (Antonios Anastasopoulos, in prep); Digital Tool Deployment for Language Documentation (Mat Bettinson, in prep); Bayesian and Neural Modeling for Multi Level and Crosslingual Alignment (Pierre Godard, in prep).
Details of the position. Funding includes remission of university fees, a stipend of approximately €17,500 per year, and a travel allowance. The position starts in Fall 2018 (i.e. from September) and lasts for three years. The research will be supervised by Steven Bird (Charles Darwin University, Australia) and Laurent Besacier (Univ. Grenoble Alpes, France). Acceptance will be subject to approval by both host institutions (Grenoble and Darwin). Given the cross-cultural nature of the project, the successful candidate will have demonstrated substantial experience of cross-cultural living.


Apply. To apply, please contact laurent.besacier@univ-grenoble-alpes.fr and steven.bird@cdu.edu.au, including a cover letter, curriculum vitae, academic transcripts and a reference letter by your MSc thesis advisor.


Institutions. The University of Grenoble offers an excellent research environment with ample compute hardware to solve hard speech and natural language processing problems, as well as remarkable surroundings to explore over the weekends. Charles Darwin University is a research-intensive university attracting students from over 50 countries. CDU is situated in Australia’s tropical north, in the midst of one of the world’s hot-spots for linguistic diversity and language endangerment. Darwin is a youthful, multicultural, cosmopolitan city in a territory that is steeped in Aboriginal tradition and culture and which enjoys a close interaction with the peoples of Southeast Asia.


References
[1] Steven Abney and Steven Bird. The Human Language Project: building a universal corpus of the world’s languages. In Proceedings of the 48th Meeting of the Association for Computational Linguistics, pages 88–97. ACL, 2010.

[2] Oliver Adams, Graham Neubig, Trevor Cohn, and Steven Bird. Learning a translation model from word lattices. In Interspeech 2016, pages 2518–22, 2016.

[3] Antonios Anastasopoulos, Sameer Bansal, David Chiang, Sharon Goldwater, and Adam Lopez. Spoken term discovery for language documentation using translations. In Proceedings of the Workshop on Speech-Centric NLP, pages 53–58, 2017.

[4] Antonios Anastasopoulos and David Chiang. A case study on using speech-to-translation alignments for language documentation. In Proc. Workshop on Use of Computational Methods in Study of Endangered Languages, pages 170–178, 2017.

[5] Steven Bird. Designing mobile applications for endangered languages. In Kenneth Rehg and Lyle Campbell, editors, Oxford Handbook of Endangered Languages. Oxford University Press, 2018.

[6] Steven Bird, Florian R. Hanke, Oliver Adams, and Haejoong Lee. Aikuma: A mobile app for collaborative language documentation. In Proceedings of the Workshop on the Use of Computational Methods in the Study of Endangered Languages. ACL, 2014.

[7] David Blachon, Elodie Gauthier, Laurent Besacier, Guy-Noël Kouarata, Martine Adda-Decker, and Annie Rialland. Parallel speech collection for under-resourced language studies using the Lig-Aikuma mobile device app. In Proceedings of the Fifth Workshop on Spoken Language Technologies for Under-resourced Languages, volume 81, pages 61–66, 2016.

[8] V. H. Do, N. F. Chen, B. P. Lim, and M. A. Hasegawa-Johnson. Multitask learning for phone recognition of underresourced languages using mismatched transcription. IEEE/ACM Transactions on Audio, Speech, and Language Processing, 26:501–514, 2018.

[9] Ewan Dunbar, Xuan Nga Cao, Juan Benjumea, Julien Karadayi, Mathieu Bernard, Laurent Besacier, Xavier Anguera, and Emmanuel Dupoux. The zero resource speech challenge 2017. In 2017 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU). IEEE, 2017.

[10] Long Duong, Antonios Anastasopoulos, David Chiang, Steven Bird, and Trevor Cohn. An attentional model for speech translation without transcription. In Proceedings of the 15th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 949–959, 2016.

[11] Pierre Godard, Gilles Adda, Martine Adda-Decker, Alexandre Allauzen, Laurent Besacier, Hélène Bonneau-Maynard, Guy-Noël Kouarata, Kevin Löser, Annie Rialland, and François Yvon. Preliminary experiments on unsupervised word discovery in Mboshi. In Interspeech 2016, 2016.

[12] Mark Liberman, Jiahong Yuan, Andreas Stolcke, Wen Wang, and Vikramjit Mitra. Using multiple versions of speech input in phone recognition. In ICASSP, pages 7591–95. IEEE, 2013.

[13] Anthony C. Woodbury. Defining documentary linguistics. Language Documentation and Description, 1:35–51, 2003.

Back  Top

6-55(2018-06-02) Research Engineer in Computer Science, Statistics and Scientific Computing, IN2P3
The Laboratoire de Phonétique et Phonologie (http://lpp.in2p3.fr) is opening a permanent position for a research engineer in computer science, statistics and scientific computing.
This is an external CNRS competition; details are available at this address: 
  • Application period: 4 June to 3 July 2018
  • Competition no. 42
Please feel free to email angelique.amelot@univ-paris3.fr if you would like more information about this position.

 
Back  Top

6-56(2018-06-04) Two funded PhD, University of Glasgow, Great Britain
The University of Glasgow welcomes applications for two funded PhDs in the area of Human-Robot Interaction.
 
Application deadline: 31 July (for both positions)
 
Position 1: Human-Robot Interaction for Oilfield Drilling Applications
Eligibility: UK/EU students only
Start date: 1 October 2018
 
This project will investigate how human-robot collaborative tasks can be carried out, concentrating on the communication aspects: how the robot communicates its intentions to the human, and how the human can query and interact with the robot’s plan. The research will be driven by oilfield drilling applications, which involve control of complex equipment in a dynamic environment with an increasing level of automation. Close coordination between the human crew and the automation system is often required, as is building trust between the human and the machine, so that the crew understands why the machine acts the way it does and is confident that it has taken all available information into account. The project is an EPSRC iCASE award with Schlumberger Gould Research, and it is expected that the student will spend some time working with the company in Cambridge.
 
The student should have excellent experience, enthusiasm and skills in the areas of natural language or multimodal interaction and/or automated planning and reasoning. Applicants must hold a good Bachelor’s or Master’s degree in a relevant discipline.
 
 
Position 2: Natural Language Generation for Social Robotics
Eligibility: UK/EU students, or international students who can cover remaining fees from other sources
Start date: 1 January 2019 (or earlier)
 
In this PhD project, the student will investigate how advanced techniques drawn from natural language generation (NLG) can be combined with practical social robotics applications. The success of the integration will be evaluated through a combination of subjective user evaluations of the social robots as well as technical evaluations of the flexibility and robustness of the underlying systems. In addition to the scientific results of the PhD, an additional goal is to produce a reusable, open-source component for NLG in the context of social robotics, to allow other researchers in this area to benefit from the results of the research.
 
The PhD student should have excellent experience, enthusiasm and skills in the areas of natural language processing, computational linguistics, multimodal interaction, and/or human-robot interaction. Applicants must hold a good Bachelor’s or Master’s degree in a relevant discipline.
 
 
For more information about both of these positions, please contact Dr Mary Ellen Foster MaryEllen.Foster@glasgow.ac.uk
 
Back  Top

6-57(2018-06-05) Post doc position in Speech Processing , Aalto University, Finland

Aalto University (School of Electrical Engineering, Department of Signal Processing and Acoustics) invites applications for
 
Post doc position in Speech Processing
 
The Department of Signal Processing and Acoustics is part of the School of Electrical Engineering. The department consists of four main research areas. The speech communication technology research group (led by Prof. Paavo Alku) works on interdisciplinary topics aiming at describing, explaining and reproducing communication by speech. The main topics of our research are: voice source analysis and parameterization, statistical parametric speech synthesis, speech quality and intelligibility improvement, robust feature extraction in speech and speaker recognition, and occupational voice care.
We are currently looking for a postdoc to join our research team and work on the team’s research themes. We are particularly interested in candidates with research interests in paralinguistic speech processing, especially speech-based biomarking of human health, voice conversion or speech synthesis.
Postdoc: 3 years. Starting date: Autumn 2018 (flexible) 
In Helsinki you will join an innovative, international computational data analysis and ICT community. Among European cities, Helsinki is special in being clean, safe, Scandinavian and close to nature; in short, it offers a high standard of living. English is spoken everywhere. See, e.g., http://www.visitfinland.com/
Requirements: The position requires a doctoral degree in speech and language technology, computer science, signal processing or another relevant area, the skills to do excellent research in a group, and outstanding research experience in any of the research themes mentioned above. The candidate is expected to perform high-quality research and assist in supervising PhD students.
 
How to apply: If you are interested in this opportunity, apply by submitting the following documents in English and in electronic form (PDF format only) by July 31, 2018. Send your application, CV, a transcript of academic records and references directly by email to Professor Paavo Alku. Please use the subject line “Aalto post-doc recruitment, 2018”.
 
Additional information: Paavo Alku, paavo.alku@aalto.fi

Back  Top

6-58(2018-06-08) Analytic Linguistic Project Manager (French) ,Paris, France

Analytic Linguistic Project Manager (French)


Job title: Analytic Linguistic Project Manager

Linguistic Field(s): Morphology, Semantics, Syntax, Lexicography, NLP, Phonetics, Phonology

Location: Paris, France

Hours: 9:00 – 17:00

Remuneration: 3,790 per month (× 12 months)

Job description:  The role of the Analytic Linguistic Project Manager is to consult with Natural Language Understanding Researchers on creating guidelines and setting standards for a variety of NLP projects as well as to manage the work of a team of junior linguists to achieve high quality data output.    This includes: 

● Reviewing and annotating linguistic data 

● Developing phonetic/phonemic transcription rules 

● Analyzing acoustic data to evaluate speech synthesis 

● Deriving NLP data for new and on-going projects 

● Training, managing, and overseeing the work of a team of junior linguists 

● Creating guidelines for semantic, syntactic and morphological projects 

● Consulting with researchers and engineers on the development of linguistic databases 

● Identifying and assigning required tasks for a project 

● Tracking and reporting the team's progress 

● Monitoring and controlling quality of the data annotated by the team 

● Providing linguistic/operational guidance and support to the team   

Job requirements: 

● Native speaker French and fluent in English 

● Master's degree or higher in Linguistics or Computational Linguistics with experience in semantics, syntax, morphology, lexicography, phonetics, or phonology 

● Ability to quickly grasp technical concepts; should have an interest in natural language processing 

● Excellent oral and written communication skills 

● Good organizational skills 

● Previous project management and people management experience 

● Knowledge of a programming language or previous experience working in a Linux environment    

CV + Motivation letter in English: Maroussia.houimli@adeccooutsourcing.fr

Back  Top

6-59(2018-06-11) Speech Processing Engineer, Voxygen, Pleumeur-Bodou, France


 
Do you want to ride the wave of the Voice First revolution? Join the Voxygen adventure!
 
Speech Processing Engineer position
 
 
VOXYGEN, a technology SME that develops a text-to-speech engine recognized for its expressiveness, is strengthening its team to meet the needs of a rapidly expanding market in the customer relations, transport and robotics sectors, among others.
 
We are opening a Speech Processing Engineer position to reinforce the Operations / Customer Success team.
 
Main duties:
- Definition and maintenance of the voice-building tools, internal documentation
- Definition of the acoustic parameters of the synthesis system in a multilingual context
- Maintenance of existing voices
- Contribution to R&D work on the control of expressiveness in speech synthesis
- Business qualification, interface with the sales team, pre-sales support
- Management of customer voice-building projects
Desired profile:
- Engineering degree in speech processing
- Strong programming skills: Python, C, C++
- Knowledge of machine learning
- Knowledge of, or interest in, linguistics
- Good level of written and spoken professional English
Personal qualities:
- Ability to adapt within a multidisciplinary team
- Autonomous, well organized in a multi-task position
- Dynamic
- Good interpersonal skills; you enjoy teamwork and customer relations
Workplace: Côte de Granit Rose, an exceptional living environment for lovers of the sea and nature! Permanent contract (CDI) based in Pleumeur-Bodou (Lannion, 22), starting ASAP (possibility of moving to Rennes after training in Pleumeur-Bodou).
Salary: according to experience.
Please send your application (CV + cover letter) to jobs@voxygen.fr

Back  Top

6-60(2018-06-17) Postdoc / engineer, Computer Science Research Laboratory (LaBRI) and SANPSY (Sleep - Addiction - Neuropsychiatry), Bordeaux

Position : Postdoc / engineer - 12 months, Bordeaux
       Starting date : 01/10/2018
      
------------

Profile : Speech processing, machine learning, artificial intelligence

------------

Location : primary : Computer Science Research Laboratory (LaBRI)
       secondary : SANPSY (Sleep - Addiction - Neuropsychiatry)

------------

Supervisors : Jean-Luc Rouas - LaBRI : jean-luc.rouas@labri.fr - main contact
          Jean-Philippe Domenger - LaBRI
          Pierre Philip - SANPSY

------------

Project : 'IS-OSA' (Innovative digital solution for personalised treatment of sleep apnea syndrome) funded by the Nouvelle Aquitaine Region

------------

Project summary :

Sleep deprivation has a strong impact on physical and mental health, leading to multiple consequences: increased heart failure rates, cognitive and behavioral problems, and more. In addition to clinical interviews, the fatigue state can be measured using several cues: eye movements, EEG data, and behavioral expression data (e.g. body movements). Thanks to recent advances in speech processing, it is now feasible to characterise fatigue states using speech-related cues alone. This technique has the major advantage of not requiring any specific or invasive apparatus and can thus be used in diverse environments, outside the clinical context.

The project aims to follow patients suffering from sleep apnea syndrome using the data collected during interviews with a virtual doctor. These data will complement other data sources such as measurements from CPAP devices.
The aim of this work is to focus on vocal cues characterising excessive daytime sleepiness, in order to determine the vocal biomarkers of these disorders that could be integrated into the clinical measurements carried out during interviews with the virtual doctors developed at SANPSY.

-------------

Work plan:

- Carry out audio and video recordings of patients on-site at the hospital in Bordeaux
- Define the vocal parameters that describe the disorders induced by excessive daytime sleepiness, in close collaboration with the SANPSY lab
- Study these parameters and use them in an automatic classification framework for excessive daytime sleepiness, with the sleepiness measurements provided by the clinical staff as ground truth (a minimal illustrative sketch follows this list)
- Implement the classification system in the virtual doctor framework developed at SANPSY and carry out clinical trials to validate the approach
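
As a purely illustrative sketch of the kind of classification framework mentioned above (this is not project code; the feature set, the (wav_path, label) data layout, the binary label coding and the use of librosa and scikit-learn are all assumptions made here for the example), one could extract simple per-recording acoustic statistics and train a standard classifier against clinician-provided sleepiness labels:

# Illustrative sketch only: classify excessive daytime sleepiness from simple
# acoustic features, with clinician sleepiness labels as ground truth.
# Assumes a list of (wav_path, label) pairs; librosa and scikit-learn are
# example tools chosen for the sketch, not tools required by the project.
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def acoustic_features(wav_path, sr=16000):
    """Mean/std of MFCCs plus basic f0 statistics for one recording."""
    y, _ = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    f0 = librosa.yin(y, fmin=50, fmax=400, sr=sr)
    f0 = f0[np.isfinite(f0)]
    f0_stats = [f0.mean(), f0.std()] if f0.size else [0.0, 0.0]
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1), f0_stats])

def train_sleepiness_classifier(items):
    """items: iterable of (wav_path, label), with label 0 = alert, 1 = sleepy."""
    X = np.vstack([acoustic_features(path) for path, _ in items])
    y = np.array([label for _, label in items])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
    return clf.fit(X, y)

In the actual project, such hand-crafted features would presumably be compared with, or replaced by, the deep-learning approaches listed in the required skills, and the labels would come from the clinical sleepiness measurements.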

-------------

Required skills:

- Training (PhD or Master's internship) in automatic speech processing; contributions to the analysis of, or feature extraction from, speech signals are expected
- Machine learning and Artificial Intelligence: good knowledge of standard techniques (such as GMM/HMM/LDA) and knowledge of, or a keen interest in, deep learning methods
- Good programming skills in Python, C/C++
- Interest in clinical research / collaboration with clinical staff (flexible hours)
- Good command of professional English

-------------

Salary: according to diploma and experience (examples: Master + 3 years of experience = 2,137 € gross/month; PhD + 3 years = 2,511 € gross/month)

-------------

How to apply: Send your CV + cover letter + referees' names + reports (internships, thesis, ...) or publications by email to jean-luc.rouas@labri.fr

Back  Top

6-61(2018-06-25) Associate Linguist [French], Paris France

Job Title:

Associate Linguist [French]

Linguistic Field(s):

Phonetics, Phonology, Morphology, Semantics, Syntax, Lexicography, NLP

Location:

Paris 8, France

Contract:

Short-term contract, 1 year, renewable

Job description:

The role of the Associate Linguist is to annotate and review linguistic data in French.  The Associate Linguist will also contribute to a number of other tasks to improve natural language processing. The tasks include:

  • Providing phonetic/phonemic transcription of lexicon entries
  • Analyzing acoustic data to evaluate speech synthesis
  • Annotating and reviewing linguistic data
  • Labeling text for disambiguation, expansion, and text normalization
  • Annotating lexicon entries according to guidelines
  • Evaluating current system outputs
  • Deriving NLP data for new and on-going projects
  • Be able to work independently with confidence and little oversight

Minimum Requirements:

  • Native speaker of French and fluent in English
  • Extensive knowledge of phonetic/phonemic transcriptions
  • Familiarity with TTS tools and techniques
  • Experience in annotation work
  • Knowledge of phonetics, phonology, semantics, syntax, morphology or lexicography
  • Excellent oral and written communication skills
  • Attention to detail and good organizational skills

Desired Skills:

  • Degree in Linguistics or Computational Linguistics or Speech processing
  • Ability to quickly grasp technical concepts; learn in-house tools
  • Keen interest in technology and computer-literate
  • Listening Skills
  • Fast and Accurate Keyboard Typing Skills
  • Familiarity with Transcription Software
  • Editing, Grammar Check and Proofing Skills

  • Research Skills

Salary: 2,730 €

CV + motivation letter in English: maroussia.houimli@adeccooutsourcing.fr

 

Best regards,

Maroussia HOUIMLI

Recruitment Manager

Accueil en entreprise & Evénementiel et Marketing-Vente

  

T 06.24.61.08.43

E maroussia.houimli@adeccooutsourcing.fr

 

Back  Top

6-62(2018-07-08) 2-year post-doc in ASR for low-resource languages, Delft University, The Netherlands

Job opening: 2-year post-doc in ASR for low-resource languages

We are looking for a highly motivated post-doctoral researcher in the area of automatic
speech recognition (ASR) for low-resource languages, as part of the newly started
'Human-inspired automatic speech recognition' lab of Dr. Odette Scharenborg at Delft
University of Technology, The Netherlands.

This project concerns building ASR systems for low-resource languages using linguistic
knowledge. The project aims to investigate different learning and training strategies
(e.g., semi- vs. unsupervised learning, multi-task learning) and architectures of deep
neural networks (DNNs) for the task of low-resource ASR. An important focus of the
project is on the role of linguistic information and multi-linguality in building ASR
systems for low-resource languages. A second important aspect of the project is opening
the DNN 'black box' by investigating the speech representations in the hidden layers of
the DNNs using visualization techniques, and subsequently using this information to
improve the ASR systems.
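
As a toy illustration of that second aspect (the architecture, input dimensions, phone-label coding and the use of PyTorch, scikit-learn and matplotlib below are assumptions made for the example, not specifics of the project), one can register a forward hook on a hidden layer, collect its activations for a batch of feature frames, and project them to two dimensions for visual inspection:

# Toy sketch only: visualize hidden-layer speech representations of a small
# feed-forward acoustic model by projecting activations to 2-D with PCA.
# Input dimensions and phone labels are placeholder values, not project data.
import torch
import torch.nn as nn
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

model = nn.Sequential(
    nn.Linear(40, 256), nn.ReLU(),   # 40-dim filterbank frames (assumed)
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 48),              # 48 phone classes (assumed)
)

activations = {}

def save_activation(name):
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Capture the output of the second hidden layer (after its ReLU).
model[3].register_forward_hook(save_activation("hidden2"))

frames = torch.randn(1000, 40)           # stand-in for real feature frames
labels = torch.randint(0, 48, (1000,))   # stand-in for frame-level phone labels
with torch.no_grad():
    model(frames)

# 2-D PCA projection of the 256-dim hidden representations, coloured by label.
proj = PCA(n_components=2).fit_transform(activations["hidden2"].numpy())
plt.scatter(proj[:, 0], proj[:, 1], c=labels.numpy(), s=4, cmap="tab20")
plt.title("Hidden-layer activations (PCA)")
plt.savefig("hidden2_pca.png")

With real data, the presence or absence of clustering by phone or articulatory class in such projections is one way to reason about what a given layer has learned before deciding how to adapt the training strategy.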

We are looking for a highly motivated individual with a strong background in:
-        Deep neural networks
-        Automatic speech recognition

Who preferably has knowledge of one or more of the following topics:
-        Visualization of DNNs
-        Different DNN architectures and training techniques
-        Semi-/unsupervised learning
-        Speech acoustics

Who has/is:
-        A PhD in Computer Science, Electrical Engineering, Computational Linguistics,
Artificial Intelligence or a related field
-        A strong analytical mind
-        Excellent verbal and written communicative skills in English
-        At least 2 published papers in high-impact journals or conferences as first author
-        A strong team-worker

We offer a 2-year post-doctoral position in the Multimedia Computing Group, Department of
Intelligent Systems, Faculty of Electrical Engineering, Mathematics, and Computer
Science, Delft University of Technology, the Netherlands.

For inquiries, please contact Dr. Odette Scharenborg (o.e.scharenborg@tudelft.nl).
Applications should be sent to o.e.scharenborg@tudelft.nl before August 13, 2018, and
should include:
-        CV
-        Motivation letter
-        List of publications
-        Names and addresses of three referees.

The estimated starting date is October 1, 2018 or as soon as possible after that.
Interviews will likely be held in the week of August 20-24, 2018.

Dr. Odette Scharenborg
Associate Professor and Delft Technology Fellow
Multimedia Computing Group, Faculty of Electrical Engineering, Mathematics, and Computer
Science
Delft University of Technology
The Netherlands

Back  Top

6-63(2018-07-09) Two academic positions at NTNU,Norwegian University of Science and Technology, Trondheim, Norway

Professor/Associate Professor in Statistical Machine Learning for Speech Technology at NTNU, Norwegian University of Science and Technology, Trondheim, Norway

Faculty position targeting machine learning for pattern recognition, with particular emphasis on applications in speech and language technology at the Signal Processing Group, NTNU. Application deadline is Aug. 31, 2018. See https://www.jobbnorge.no/en/available-jobs/job/154954/professor-associate-professor-in-statistical-machine-learning-for-speech-technology-ie-138-2018 for further information.

 

 

Professor/Associate Professor in Statistical Machine Learning for Signal Processing at NTNU, Norwegian University of Science and Technology, Trondheim, Norway

Faculty position targeting machine learning for analysis, classification, prediction and data mining of (large amounts of) sensor data, typically measurements in time and/or space at the Signal Processing Group, NTNU. Application deadline is Aug. 31, 2018. See https://www.jobbnorge.no/en/available-jobs/job/154952/professor-associate-professor-in-statistical-machine-learning-for-signal-processing-ie-137-2018  for further information.

 

Back  Top


