ISCA - International Speech
Communication Association



ISCApad #230

Thursday, August 10, 2017 by Chris Wellekens

6 Jobs
6-1(2017-03-14) PhD and postdocs positions at INRIA/Nancy France

Our team has several openings for PhD students and postdocs in the following
deep-learning-based fields:
- speech enhancement
- speech recognition
- environmental sound analysis

For details and to apply, see:
https://team.inria.fr/multispeech/category/job-offers/

Application deadline:
- April 15 for postdoctoral positions
- April 30 for PhD positions

--
Emmanuel Vincent
Multispeech Project-Team
Inria Nancy - Grand Est
615 rue du Jardin Botanique, 54600 Villers-lès-Nancy, France
Phone: +33 3 8359 3083 - Fax: +33 3 8327 8319
Web: http://members.loria.fr/evincent/
Top

6-2(2017-03-18) Fully-funded PhD Positions in Automatic Emotion Recognition at SUNY, Albany, NY, USA


 Fully-funded PhD Positions in Automatic Emotion Recognition at SUNY
 
Application deadline: 22 March 2017 (**see below for more information**)    

We have several PhD research assistantship positions available at the State University of New York, Albany. We are seeking highly creative and motivated applicants with a keen interest in doing research in human-centered technology, affective computing, and automatic emotion recognition using machine learning and multimodal signal processing techniques.  

Requirements:
- A bachelor's degree in a relevant field (Electrical and Computer Engineering, Computer Science, Statistics, or related)
- Solid background in computer programming
- Proficiency in spoken and written English
- (Preferred) Knowledge of the following technologies: MATLAB, Python, Java, Perl, C++, Unity
- (Preferred) Previous coursework and/or practical experience in machine learning
- (Preferred) Solid background in mathematics and/or statistics
- Interest in one of the following areas:
  - Human-Centered and Affective Computing, Computational Human Behavior Analysis
  - Machine Learning, Statistics, Applied Mathematics
  - Speech Processing, Computer Vision

We expect:
- Keen interest in top-level conference and journal publications
- Self-organized team worker with good communication skills

We offer:
- You will work at one of the leading U.S. universities and have the opportunity to work towards your PhD in a group of excellent scientists
- Tuition, stipend, and fringe benefits
- You will receive financial support to attend and present at top-level international conferences
- Visas will be fully funded for international students

To apply, please send an email to Prof. Yelin Kim (yelinkim@albany.edu) including a CV and a research statement (max. 2 pages) by March 22, 2017. We have rolling admissions policies, so please apply as early as possible. Please give your email the subject “SUNY PhD Research Assistantship in Automatic Emotion Recognition”.

Please feel free to forward and share this with potentially interested candidates or with people who might know suitable candidates.

Top

6-3(2017-03-20) Ph D position at IRISA Rennes, France

The Expression team of IRISA is recruiting a PhD candidate in computer science on the subject 'Universal speech synthesis through embeddings of massive heterogeneous data'. This work focuses on the following domains:

- Text-to-speech

- Deep learning

- High-dimensional indexing.

 

Details are given here: http://www.irisa.fr/en/offres-theses/universal-speech-synthesis-through-embeddings-massive-heterogeneous-data .

 

Application deadline: Monday, 3 April 2017.

 

Application process:

- CV

- Transcript of M.Sc. marks/grades

- to be sent to gwenole.lecorve@irisa.fr, damien.lolive@irisa.fr, laurent.amsaleg@irisa.fr.

 

Top

6-4(2017-03-25) PhD position in spoken dialogue systems, LIA, Avignon, France

***** PhD offer in spoken dialogue systems *****
at LIA/CERI, Univ. Avignon, supervised by Prof. F. Lefèvre and B. Jabaian

Improving vocal interaction with the digital world and designing new
human-machine dialogue services are key challenges for a full transition
to a digital society. Among artificial intelligence research activities
on vocal interaction, several important questions remain insufficiently
explored and can be the subject of various studies. LIA works on many
aspects of vocal interaction and, through this thesis, aims to deepen
research into one of the following major topics:

** Argumentative dialogue **
To make artificial systems capable of learning from data, two strong
assumptions are generally made: (1) stationarity of the system: the
machine's environment is assumed not to change over time; (2) independence
between data collection and the learning process: this implies that the
user does not change their behaviour over time, whereas users actually
tend to adapt their behaviour according to the machine's reactions.
Clearly, such behaviour does not help an artificial learning system find
the equilibrium that best satisfies the user's expectations.

Current vocal interfaces, based on partially observable Markov decision
processes, must evolve towards a new generation of interactive systems,
capable of learning dynamically from long-term interactions, while taking
into account that human behaviour is variable, humans being adaptive
systems themselves. Indeed, humans also learn from their interactions
with a system and change their behaviour over time. Such a system will be
able to converse with a human and argue to defend its choices.

** The authoritative dialogue agent **
Artificial intelligence is generally viewed through its submission to
human desires and wishes; there are, however, situations where
artificially endowing the machine with an authoritative dimension can be
relevant (mainly games and serious games, but also control simulation...).
Concrete mechanisms for developing an authoritative agent (with the aim
of imposing its point of view on the user) will be studied and implemented
in practice to allow their full evaluation.

** Virtual reality for simulating dialogue agents **
Another line of research concerns the possibilities offered by virtual
reality for training spoken dialogue agents. The initial objective is to
provide a unified framework for developing situated dialogue systems
under realistic usage conditions, through virtual reality simulations of
the target environments, thus removing the need to recreate them. In the
longer term, the approach will also make it possible to develop dialogue
systems for virtual reality applications themselves. The work therefore
requires skills in both virtual reality and natural language processing.

Candidates must hold a master's degree in computer science with a
component in machine learning methods and/or language engineering. The
PhD grant will be awarded through a competition within Doctoral School
536 of the University of Avignon, including an interview of the selected
candidate by the thesis supervisors.

To apply, please send an email before 30 April 2017 to Fabrice Lefèvre
(fabrice.lefevre@univ-avignon.fr) and Bassam Jabaian
(bassam.jabaian@univ-avignon.fr) including: your CV, a cover letter
indicating which of the above research directions you are interested in,
any letters of recommendation, and your academic transcripts.

Top

6-5(2017-03-28) Research Scientist, Spoken and Multimodal Dialog Systems, ETS, S.Francisco, CA, USA

Open Rank Research Scientist, Spoken and Multimodal Dialog Systems

ETS (Educational Testing Service) is a global not for profit organization whose mission is to advance quality and equity in education. With more than 3,400 global employees, we develop, administer and score more than 50 million tests annually in more than 180 countries.

Our San Francisco Research and Development division is seeking a Research Scientist for our Dialog, Multimodal, and Speech (DIAMONDS) research center. The center’s main focus is on foundational research as well as on the development of new capabilities to automatically score spoken, interactive, and multimodal test responses in conversational settings across a wide range of ETS test programs, learning applications, and other educational areas. This is an excellent opportunity to be part of a world-renowned research and development team and have a significant impact on existing and next generation spoken and multimodal dialog systems and their application to assessment and other areas in education.

Primary responsibilities include:

  • Developing and collaborating on interdisciplinary projects that aim to transfer techniques to a new context or scientific field. Successful candidates are self-motivated and self-driven, and have a strong interest in emerging conversational technology that can contribute to education in assessment and instructional settings.

  • Providing scientific and technical skills to conceptualize, design, obtain support for, conduct, and manage new research projects, grants, or parts of existing projects.

  • Generating or contributing to new or modified methods that support research on and development of spoken and multimodal dialog systems and related technologies relevant in assessment and instructional settings.

  • Designing and conducting scientific studies and functioning as an expert in the major facets of the projects: responding as a subject matter expert in presenting the results of acquired knowledge and experience.

  • Developing or assisting in developing proposals for external and internal research grants and obtaining financial support for new or continuing research activities. Preparing initial and final proposal and project budgets.

  • Participating in dissemination activities through the publications of research papers in peer-reviewed journals and in the ETS Research Report series, the issuing of progress and technical reports, the presentation of seminars at major conferences and at ETS, or the use of other appropriate communication vehicles, including patents, books and chapters, that impact practice in the field or at ETS.

Depending on experience, this position is open to entry-level candidates as well as mid-level and senior-level professionals.

REQUIREMENTS FOR A JUNIOR LEVEL POSITION

  • A Doctorate in computer science, linguistics, cognitive psychology or a related field is required. One year of research experience is required; experience in education is desirable. Experience can be gained through doctoral studies. Candidates should be very skilled in programming and be able to work effectively as a research team member.

 

 

 

REQUIREMENTS FOR A MID-LEVEL POSITION

  • A Doctorate in computer science, linguistics, cognitive psychology, or a related field is required. Research experience in education is desirable. Candidates should be very skilled in programming and be able to work effectively as a research team member. Three years of progressively independent substantive research in the area of computer science, linguistics, cognitive psychology, or education are required.

REQUIREMENTS FOR A SENIOR-LEVEL POSITION

  • A Doctorate in computer science, linguistics, cognitive psychology, or a related field is required. Research experience in education is desirable. Candidates should be very skilled in programming and be able to work effectively as a research team member. Eight years of progressively independent substantive research in the area of computer science, linguistics, cognitive psychology, or education are required.

We offer a competitive salary, comprehensive benefits and excellent opportunities for professional and personal growth. For a full list of position responsibilities and to apply please visit the following link: http://ets.pereless.com/careers/index.cfm?fuseaction=83080.viewjobdetail&CID=83080&JID=235623&BUID=2538

ETS is an Equal Opportunity Employer

Top

6-6(2017-04-10) 3 Funded PhD Research Studentships at CSTR, Edinburgh, Scotland, UK

Three Funded PhD Research Studentships at the Centre for Speech Technology Research,
University of Edinburgh.

Please see http://www.cstr.ed.ac.uk/opportunities for full details, eligibility
requirements, application procedure and deadlines.

1. Embedding enhancement information in the speech signal

Speech becomes harder to understand in the presence of noise and other distortions, such
as telephone channels. This is especially true for people with a hearing impairment. It
is difficult to enhance the intelligibility of a received speech+noise mixture, or of
distorted speech, even with the relatively sophisticated enhancement algorithms that
modern hearing aids are capable of running. A clever way around this problem might be for
the sender to add extra information to the original speech signal, before noise or
distortion is added. The receiver (e.g., a hearing aid) would use this to assist speech
enhancement.

Funding: Marie Sklodowska-Curie fellowship


2. Broadcast Quality End-to-end Speech Synthesis

Advances in neural networks made jointly in the fields of automatic speech recognition
and speech synthesis, amongst others, have led to a new understanding of their
capabilities as generative models. Neural networks can now directly generate synthetic
speech waveforms, without the limited quality of a vocoder. We have made separate
advances, using neural networks to discover representations of spoken and written
language that have applications in lightly-supervised text processing for almost any
language, and for adaptation of speaker identity and style. The project will combine
these techniques into a single end-to-end model for speech synthesis. This will require
new techniques to learn from both text and speech data, which may have other
applications, such as automatic speech recognition.

Funding: EPSRC Industrial CASE award (in collaboration with the BBC)


3. Automatic Extraction of Rich Metadata from Broadcast Speech (in collaboration with the
BBC)

The research studentship will be concerned with automatically learning to extract rich
metadata information from broadcast television recordings, using speech recognition and
natural language processing techniques.  We will build on recent advances in
convolutional and recurrent neural networks, using architectures which learn
representations jointly, considering both acoustic and textual data. The project will
build on our current work in the rich transcription of broadcast speech using neural
network based speech recognition systems, along with neural network approaches to machine
reading and summarisation.  In particular, we are interested in developing approaches to
transcribing broadcast speech in a way appropriate to the particular context.  This may
include compression or distillation of the content (perhaps to fit in with the
constraints of subtitling), transforming conversational speech into a form that is
easier to read as text, or transcribing broadcast speech in a way appropriate for a
particular reading age.

Funding: EPSRC Industrial CASE award (in collaboration with the BBC)



Top

6-7(2017-04-20) Postdoc for project LaDyCa, Sorbonne, Paris

Applicants must have a PhD in linguistics as well as publications in their field of specialization. Independent research experience in one or several of the core areas of the LaDyCa project (i.e. language dynamics, linguistic typology, sociolinguistics, geolinguistics, dialectology & dialectometry) is expected. Experience in working with scholars of diverse backgrounds, e.g. linguists, sociologists, anthropologists, historians and, to some extent, mathematicians or statisticians, would be greatly appreciated. The project will be funded by the IDEX ('Initiative d'Excellence') consortium of Sorbonne Universités, France, in partnership with Ilia State University, Tbilisi, Georgia. Apart from an efficient and fluent command of English and/or French for collegial relations with an international team of scholars, applicants should have a good command of Georgian (written & oral skills); efficient reading skills in Russian would be an asset too. A good command of database software, and previous training or experience in computational linguistics, would also be appreciated. A strong ability in entering data and in designing linguistic databases would be an asset.
Applications should include a statement of interest (letter of motivation), giving accurate details on the applicant's skills corresponding to the aim of the LaDyCa project, and
how (s)he plans to process data with computing tools and gather information on the ecological, historical and social context of linguistic diversity in the Caucasus. (S)he will also
provide a CV including a list of publications, a copy of the PhD certificate, and the names and e-mail addresses of two referees. Applications should be sent as a single PDF file to the
e-mail addresses below, entitled 'Application_LaDyCa_PostDoc':

Prof. Jean Léo Léonard < leonardjeanleo@gmail.com >
Prof. Claude Montacié < Claude.Montacie@paris-sorbonne.fr >
Deadline: applications must be submitted by the 2nd of May 2017.

The position is available from July 2017 to June 2018. The duration of employment is intended to be one year.
Net salary: around 2100 euros per month.

http://www.stih.paris-sorbonne.fr/?p=1203

Top

6-8(2017-04-22) ATER (teaching and research assistant) position at Paris-Sorbonne, France

An ATER (temporary teaching and research assistant) position in natural language and speech processing is available at
Université Paris-Sorbonne. The link to apply is
http://concours.univ-paris4.fr/PostesAter?entiteBean=posteCandidatureCourant&modif=839.

The application requirements are available at www.paris-sorbonne.fr/ater

Top

6-9(2017-04-23) Associate research scientist-Speech at ETS, Princeton, New Jersey, USA

http://ets.pereless.com/careers/index.cfm?fuseaction=83080.viewjobdetail&CID=83080&JID=243925&type=&cfcend

 

Associate Research Scientist - Speech

 
Date Updated: April 25, 2017
Location: Princeton, NJ
Job Type: Full-Time/Regular
Travel: Not Specified
Position ID: 243925
Job Level: Entry Level (less than 2 years)
Years of Experience: Less Than 1 Year
Level of Education: Doctoral Degree
Starting Date: ASAP
 
 

Job Description

 

ETS is the world’s premier educational measurement institution and a leader in educational research. As an innovator in developing achievement and occupational tests for clients in business, education, and government, we are determined to advance educational excellence for the communities we serve.

 

ETS's Research & Development division has an opening for a research scientist in the NLP, Speech & DIAMONDS (Dialog, Multimodal, and Speech) research group. The projects in this research group focus on the application of NLP & Speech processing algorithms in automated scoring capabilities for assessment tasks involving constructed responses (such as essays and spoken responses) as well as on the application of spoken and multimodal dialog systems to assessment tasks. This is an excellent opportunity to be part of a world-renowned research and development team and have a significant impact on existing and next generation NLP & Speech systems and their application to assessment.

 

BASIC FUNCTIONS AND RESPONSIBILITIES

 

Take responsibility for conceptualizing, proposing, obtaining funding for, and directing small projects in the areas of speech processing, speech recognition, and automated speech scoring and/or assisting in moderate-to-major speech research projects. 

 

Projects may include (1) research projects; (2) development projects that use scientific principles to create (a) tools to improve the efficiency or quality of the practice of test development or statistical analysis, (b) innovative item types, or (c) the scoring of responses to open-ended items; and (3) development projects that use scientific principles to create new products or product prototypes. Small research and development projects typically have minimal budgets, few or no staff other than the project director, a timeline of a year or less, and a single deliverable that is relatively narrow in scope. Major projects have substantial budgets, involve the coordination of many individuals internal and possibly external to ETS, may run across years, and may produce multiple deliverables. Moderate projects fall in between these two types.

 

Assist in generating or contributing to new knowledge or capability in the field of speech processing, speech recognition, and spoken language technology, and in applying that new knowledge and capability to existing and/or new ETS products and services. New knowledge may take the form of new or modified educational or psychological theories; new research methodology; new development methodology; new statistical, analytic or interpretative procedures; new test designs and item types; new approaches to scoring examinee responses; and new approaches to reporting. New capabilities include developing software to instantiate new and existing knowledge.

 

Document and disseminate the results of research and/or development projects through publication and presentation. Publication includes peer-review journals, peer-review conference proceedings, patents, books and book chapters, and other print media. Presentation may be at international, national, or regional conferences, client meetings, and ETS seminars.

 

  • Participate in setting substantive research and development goals and priorities for a group or initiative within a vice presidential area.

  • Actively seek input from peers on the quality of one’s work. Participate as a reviewer of others’ work.

  • Actively seek mentoring from more senior scientific and other R&D staff, developing a continuing mentoring relationship. 

  • Develop proposals and budgets for small projects and/or assist in development for moderate-to-major ones.

  • Assist more senior scientific staff in consulting on testing program, R&D management, or other ETS management concerns.

  • Manage small projects, and/or assist in the management of moderate-to-major ones, by accomplishing directed tasks according to schedule and within budget.

  • Develop external professional relationships and work to cultivate a scientist’s identity. 

  • Become a member and regular presenter at the annual meetings of one or more organizations substantively related to the work of ETS. 

 

Experience and Skills

EDUCATION

A Ph.D. in Computer Science, Electrical Engineering, Natural Language Processing, Computational Linguistics, or a similar area with major education in speech technology, and particularly in speech recognition is required.

EXPERIENCE

  • Evidence of independent substantive research experience in spoken language technology and/or development experience for deploying speech technology capabilities is required.

  • One year of independent substantive research experience in spoken language technology and/or development experience for deploying speech technology capabilities is required.

  • Experience may be gained through doctoral studies.

  • Practical expertise with automatic speech recognition systems, experience with machine learning toolkits (e.g., Weka, scikit-learn), and fluency in at least one major programming language (e.g. Java, Python) is required.

  • Practical experience with deep learning paradigms and/or deep-learning-based speech recognition systems (e.g., Kaldi) is highly desirable.

Our strength and success are directly linked to the quality and skills of our diverse associates.  A background and/or knowledge of accessibility and accommodations for individuals with disabilities, whether through your own experiences or those of someone close to you, is highly desirable.

ETS is an Equal Opportunity/Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, or other characteristic protected by law.

 
Top

6-10(2017-05-02) PhD at IRISA, Rennes, France

The Expression team at IRISA is opening a PhD position in computer science on the topic 'Characterisation of language registers through sequential pattern mining', within the ANR TREMoLo project.

Fields: natural language processing and data mining.

Details of the offer: http://www.irisa.fr/fr/offres-theses/caracterisation-registres-langue-extraction-motifs-sequentiels

Application deadline: Friday, 2 June.

Application file (*: required items):

- detailed CV*

- cover letter*

- transcripts of marks (with ranking if possible)*

- contacts for references*

- research internship report(s) (if applicable).

Send to: del.battistelli@gmail.com, nicolas.bechet@irisa.fr, gwenole.lecorve@irisa.fr.

Top

6-11(2017-05-05) Post-doctoral positions in Multimodal Behavior Analysis: Speech, Vision and Healthcare, CMU, Pittsburgh, PA, USA

Post-doctoral positions in Multimodal Behavior Analysis: Speech, Vision and Healthcare

              Carnegie Mellon University, School of Computer Science

 

Multiple post-doctoral positions are available in the School of Computer Science at Carnegie Mellon University. We are seeking creative and energetic applicants for two-year postdoctoral positions. The positions include a competitive salary with full benefits and travel resources.

 

Candidates must have a strong research track record for one or more of the following topics: (1) speech and paralinguistic processing for affect, emotion and human behavior analysis, (2) automatic recognition of facial expressions, gestures and human visual activities, (3) multimodal machine learning algorithms for text, audio and video, (4) technologies to help clinicians with mental health diagnoses and treatments.

 

Required

  • PhD in computer science or related field (at the time of hire)
  • International applicants welcome! No US citizenship requirement.

 

Desired

  • Publications in top machine learning, speech processing and/or computer vision conferences and journals.
  • Research involving clinical patients with mental health disorders (e.g., depression, schizophrenia, suicidal ideation)
  • Experience mentoring graduate and undergraduate students

 

Job details

  • Preferred start date: September 1st, 2017 (negotiable)
  • Candidate will work under the supervision of Dr. Louis-Philippe Morency, CMU MultiComp Lab’s director
  • Competitive salary with full benefits and travel resources.

 

How to apply

  • Email applications should be sent to morency@cs.cmu.edu with the title “Postdoc application”, preferably before June 12th, 2017. The email should include:
    • a brief cover letter (with expected date of availability),
    • a CV including list of publications,
    • contact information of two references,
    • links to three representative publications
Top

6-12(2017-05-10) Permanent position (CDI): PhD-level engineer in computer science or language sciences, LNE, Trappes, France

 

 

PhD-level engineer in computer science or language sciences

Permanent contract (CDI), Trappes

Reference: AP/TAI/DE

The company: WWW.LNE.FR

A leader in the world of measurement and reference standards, with a strong reputation in France and internationally, LNE supports industrial innovation and positions itself as a key player for a more competitive economy and a safer society. At the crossroads of science and industry since its creation in 1901, LNE offers its expertise to all economic actors involved in product quality and safety.

As the lead institute for French metrology, our research is at the heart of our public service mission and is a fundamental factor in supporting the competitiveness of companies.

We are committed to meeting the requirements of industry and academia for ever more accurate measurements, performed under increasingly extreme conditions or on innovative topics such as autonomous vehicles, nanotechnologies and additive manufacturing.

LNE in a few figures: 700 employees.

5 business lines (measurement, testing, certification, training and R&D).

8 fields of activity (metrology, health, construction, environment, energy, transport, security and defence, consumer goods).

55,000 m2 of laboratories (including 10,000 m2 in Paris and 45,000 m2 in Trappes).

7 locations (2 sites in Île-de-France, 2 regional offices in Poitiers and Nîmes, 1 branch in Saint-Étienne, 2 subsidiaries in Washington and Hong Kong).

9,000 customers.

Missions:

The successful candidate will join a team of 4 PhD-level engineers who supervise various interns and PhD students. This team has historically specialised in the evaluation of multimedia information processing systems (speech transcription, speaker recognition, dialogue, translation, etc.). It is now opening up to new challenges: the evaluation of artificial intelligence systems in general (robotics, smart grids, defence, autonomous vehicles, etc.).

The candidate will be entrusted with the following missions:

  • Development of R&D in the evaluation of speech and language processing systems

    • Definition of new metrics

    • Corpus analysis

    • Publication of scientific results

    • Set-up of perceptual evaluation protocols

    • Contribution to setting up and running European and national research projects.

  • Running evaluation campaigns

    • Support to participating teams in using LNE tools

    • Formal checking of data

    • Scoring of systems

    • Organisation of scientific and industrial meetings

    • Writing of evaluation reports

  • Supervision of interns and post-docs

Profile:

You hold a PhD in computer science or language sciences and have skills in natural language processing or corpus linguistics. You are also proficient in programming (R or S, C++, Python).

You have good writing and interpersonal skills. You communicate well orally and enjoy working collaboratively with your team and with customers.

Your level of English allows professional communication.

Travel in the Paris region one day per week, and worldwide once a year.

To apply: send your CV and cover letter to recrut@lne.fr, reference AP/TAI/DE

Top

6-13(2017-05-10) Open Rank Research Scientist, Spoken and Multimodal Dialog Systems, ETS, San Francisco, CA, USA

Open Rank Research Scientist, Spoken and Multimodal Dialog Systems

ETS (Educational Testing Service) is a global not for profit organization whose mission is to advance quality and equity in education. With more than 3,400 global employees, we develop, administer and score more than 50 million tests annually in more than 180 countries.

Our San Francisco Research and Development division is seeking a Research Scientist for our Dialog, Multimodal, and Speech (DIAMONDS) research center. The center’s main focus is on foundational research as well as on the development of new capabilities to automatically score spoken, interactive, and multimodal test responses in conversational settings across a wide range of ETS test programs, learning applications, and other educational areas. This is an excellent opportunity to be part of a world-renowned research and development team and have a significant impact on existing and next generation spoken and multimodal dialog systems and their application to assessment and other areas in education.

Primary responsibilities include:

  • Developing and collaborating on interdisciplinary projects that aim to transfer techniques to a new context or scientific field. Successful candidates are self-motivated and self-driven, and have a strong interest in emerging conversational technology that can contribute to education in assessment and instructional settings.

  • Providing scientific and technical skills to conceptualize, design, obtain support for, conduct, and manage new research projects, grants, or parts of existing projects.

  • Generating or contributing to new or modified methods that support research on and development of spoken and multimodal dialog systems and related technologies relevant in assessment and instructional settings.

  • Designing and conducting scientific studies and functioning as an expert in the major facets of the projects: responding as a subject matter expert in presenting the results of acquired knowledge and experience.

  • Developing or assisting in developing proposals for external and internal research grants and obtaining financial support for new or continuing research activities. Preparing initial and final proposal and project budgets.

  • Participating in dissemination activities through the publications of research papers in peer-reviewed journals and in the ETS Research Report series, the issuing of progress and technical reports, the presentation of seminars at major conferences and at ETS, or the use of other appropriate communication vehicles, including patents, books and chapters, that impact practice in the field or at ETS.

Depending on experience, this position is open to entry-level candidates as well as mid-level and senior-level professionals.

REQUIREMENTS FOR A JUNIOR LEVEL POSITION

  • A Doctorate in computer science, linguistics, cognitive psychology or a related field is required. One year of research experience is required; experience in education is desirable. Experience can be gained through doctoral studies. Candidates should be very skilled in programming and be able to work effectively as a research team member.

 

 

 

REQUIREMENTS FOR A MID-LEVEL POSITION

  • A Doctorate in computer science, linguistics, cognitive psychology, or a related field is required. Research experience in education is desirable. Candidates should be very skilled in programming and be able to work effectively as a research team member. Three years of progressively independent substantive research in the area of computer science, linguistics, cognitive psychology, or education are required.

REQUIREMENTS FOR A SENIOR-LEVEL POSITION

  • A Doctorate in computer science, linguistics, cognitive psychology, or a related field is required. Research experience in education is desirable. Candidates should be very skilled in programming and be able to work effectively as a research team member. Eight years of progressively independent substantive research in the area of computer science, linguistics, cognitive psychology, or education are required.

We offer a competitive salary, comprehensive benefits and excellent opportunities for professional and personal growth. For a full list of position responsibilities and to apply please visit the following link: http://ets.pereless.com/careers/index.cfm?fuseaction=83080.viewjobdetail&CID=83080&JID=235623&BUID=2538

ETS is an Equal Opportunity Employer

Top

6-14(2017-05-10) Research Scientist, Disney Research, Pittsburgh, PA, USA

Position: Research Scientist

Focus Area: Autonomous Agents for Multimodal Character Interaction

Disney Research


Disney Research Pittsburgh is seeking applicants for a Research
Scientist position, at either the junior or senior level, in
Autonomous Agents. The research emphasis is on architecture to support
the integration of natural language with character-based reasoning and
behavior.


As part of The Walt Disney Company, Disney Research builds upon a rich
legacy of innovation and technology leadership in the entertainment
industry that continues to this day. Disney Research was launched in
2008 offering the best attributes of academia and industry with the
goal of driving value across the company through technological
innovation. Our research covers a broad range of exciting and
challenging applications that are experienced daily by millions of
people around the world.


Our staff interacts directly with all core business areas of The Walt
Disney Company including Theme Parks and Imagineering, Consumer
Products, our Live Action and Animation Studios, and Media Networks.
We publish our research and are actively engaged with the global
research community. Our researchers collaborate closely with
co-located academic institutions.


We are seeking applicants in the following areas:

·         Agent architectures for language-based character interaction.

·         AI and machine learning methods for autonomous,
semantically-rich character behavior

Duties:

·         Drive value for Disney through groundbreaking research and innovation

·         Lead a research group with post-doctoral researchers,
interns, and external collaborators

·         Publish results and patent inventions in multimodal interaction

·         Participate in conferences, workshops and academic-industrial events

·         Develop a strong network of business partners within the company

Required Qualifications:

·         Ph.D. in Computer Science or equivalent

·         Proven track record of developing autonomous, integrated
agents with real-time NL components.

·         Experience with both symbolic and statistical machine
learning methods as applied to modeling semantics, action, or behavior

·         Possess strong technical presentation skills and able to
clearly communicate with technical and non-technical audiences

Desired Qualifications:

·         Experience in interaction design for entertainment

·         Background in NLP (e.g., relationship extraction, word sense
disambiguation, narrative generation) desirable



To apply:

Please email careers@disneyresearch.com. Please use DRP-RS-NLP-2017 in
your subject line. If you're interested in the position or for any
further information, please contact Jill Lehman
(jill.lehman@disneyresearch.com).

Top

6-15(2017-05-23) Lead Speech Recognition Engineer, Cambridge, UK

Lead Speech Recognition Engineer

Location: Cambridge, UK

Contact: careers@speechmatics.com

Background

Speechmatics is a leader in automatic speech recognition (ASR). Using proprietary technology, we have built one of the most accurate ASR systems in the world, with a vision to power a voice-enabled economy. We are already working at a time when the global economy is actively adopting all types of speech-related technologies. In developing our technology we combine our years of experience, the latest developments in the field and our own focus on cutting-edge research to produce a world-class service.

In the office, we pride ourselves on a relaxed but productive environment whilst we stay in touch with the progress of others by attending both academic and commercial conferences and have fun together with regular outings (in the past we have been punting, go-karting, attended a cooking workshop and played bubble football...).

We are expanding rapidly and are seeking more people in the coming months to help us keep pushing the boundaries of speech recognition. This is an opportunity to join a high growth team and form a major part of its future direction.

The Opportunity

We are looking for a talented speech scientist to help us build the best speech technology for anybody, anywhere, in any language. You will be a part of a team that is working on our core ASR capabilities to improve our speed and accuracy and develop novel features so that we can support all languages. Your work will feed into ‘Auto-Auto’, our ground-breaking framework to support the building of ASR models, and hence the delivery of every language pack published by the company. You will be responsible for keeping our system the most accurate and useful commercial speech recognition available.

Because you will be joining a small team, you will need to be a team player who thrives in a fast paced environment, with a focus on rapidly moving research developments into products. Bringing skills to the team is as important as a can-do attitude. We strongly encourage versatility and knowledge transfer within the team, so we can share efficiently what needs to be done to meet our commitments to the rest of the company.

Key Responsibilities

  • Ensuring that our speech recognition meets or exceeds that published by others

  • Improving our core modelling (acoustic, pronunciation, language)

  • Leading the extension of our ML framework so that we can build any language



Experience

Essential

  • MSc, PhD or equivalent experience in the academic aspects of speech recognition

  • Several years practical experience in speech recognition, covering all aspects (acoustic, pronunciation and language modelling as well as decoders/search)

  • Experience working with standard speech and ML toolkits, e.g. Kaldi, KenLM, TensorFlow, etc.

  • Solid programming skills with Python and / or C/C++

  • Experience using Unix/Linux for big data

Desirable

  • PhD degree

  • Experience of team leadership and line management

  • Experience of working in an Agile framework

  • Expertise in modern speech recognition, including WFSTs, lattice processing, neural net (RNN / DNN / LSTM), acoustic and language models, Viterbi decoding

  • Comprehensive knowledge of machine learning and statistical modelling

  • Experience in deep machine learning and related toolkits, e.g. Theano, Torch, etc.

  • Deep expertise in Python and/or C++ software development

  • Experience working effectively with software engineering teams or as a Software Engineer

Salary

We offer a competitive salary, bonus scheme, pension contribution matching (up to 5%) and a generous EMI share option scheme. We also have several additional benefits including holiday purchase, massages, fully stocked beer fridge, Cyclescheme, fruit boxes and many more.

The overall package will depend on your motivations and level of experience.

Top

6-16(2017-05-23) Software Development Engineer, Cambridge, UK

Software Development Engineer

Location: Cambridge, UK

Contact: careers@speechmatics.com

Background

Speechmatics is a leader in automatic speech recognition (ASR). Using proprietary technology, we have built one of the most accurate ASR systems in the world, with a vision to power a voice-enabled economy. We are already working at a time when the global economy is actively adopting all types of speech-related technologies. In developing our technology we combine our years of experience, the latest developments in the field and our own focus on cutting-edge research to produce a world-class service.

In the office, we pride ourselves on a relaxed but productive environment whilst we stay in touch with the field by attending both academic and commercial conferences and have fun together with regular team events (in the past we have been punting, go-karting, attended a cooking workshop and played bubble football...).

We are expanding rapidly and are seeking more people in the coming months to help us keep pushing the boundaries of speech recognition. This is an opportunity to join a high growth team and form a major part of its future direction.

The Opportunity

You will be joining the ‘Languages’ team within Speechmatics, focussing on two key goals. We maintain and develop Auto-Auto, our ground-breaking framework to support the building of languages for use in ASR. And we use it to build new language models.

We are looking for an experienced Software Development Engineer to join us. As a member of the team, you will be working on the development, maintenance and expansion of our pipeline, and participating in building and solving the challenges of a growing language portfolio. You will have significant influence on implementing or integrating new features, drive the system architecture, and spearhead the best practices that enable a quality product.

Auto-Auto is core to our business and by working on it you will have a chance to build something that will be used in businesses and homes worldwide. Working in a rapidly growing start-up also means opportunities to contribute to other projects, depending on the candidate’s background and skills.

If you are a talented, detail-oriented engineer with a solid software development foundation and a commitment to deliver the best possible technology solutions, then we want to hear from you!

Key Responsibilities

  • Delivering high quality, maintainable and robust code on time, as part of a team.

  • Executing projects and developing against an outlined design.

  • Developing pragmatic solutions and building flexible systems without over-engineering.

  • Involvement at all stages of the software development cycle, including designing and developing new architectural systems and improvements, and QA processes.

  • Participation in estimation and sprint planning in an agile environment.

  • Participation in delivering new language models for the ASR engine.

  • Working closely with other technical teams and product team to deliver on the company’s technical vision.

Experience

Essential

  • Bachelor's Degree in Computer Science or related field.

  • Professional experience in software development.

  • Computer Science fundamentals in object-oriented design, data structures, algorithm design, problem solving, and complexity analysis.

  • Knowledge of professional software engineering practices & best practices for the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations.

  • Excellent Python skills.

  • Good Linux skills.

  • Experience of working within a team to deliver and run high quality systems.

Desirable

  • Master's degree in Computer Science or related field.

  • Demonstrable professional experience in software development.

  • Proficiency in C and C++ (ideally with strong STL and Boost experience).

  • Strong skills and experience in cloud-based software development, preferably AWS:

    • Working with distributed and/or clustered systems.

    • Building and running horizontally scaling architectures.

    • Using cloud-based queueing, messaging, monitoring and storage techniques.

  • Experience in flow-based programming.

  • Familiarity with statistical models and data mining algorithms.

  • Analytical with a data-driven approach to making decisions and attention to detail.

  • Previous experience with Natural Language Processing techniques.

  • Comfortable collaborating with teams with very different technical skills, and non-technical teams.

Salary

We offer a competitive salary, bonus scheme, pension contribution matching (up to 5%) and a generous EMI share option scheme. We also have several additional benefits including holiday purchase, massages, fully stocked beer fridge, Cyclescheme, fruit boxes and many more.

Top

6-17(2017-05-29) PhD & Post-Doc Research positions in Speech Signal Processing and Electronic Design, Autonomous University of Zacatecas, Zacatecas, Mexico

PhD & Post-Doc Research positions in Speech Signal Processing and Electronic Design


Place:        Autonomous University of Zacatecas, Zacatecas, Mexico

Duration:  PhD (3 years) / Post-Doc (1 year)

Start:          PhD / Post-Doc (January 10th, 2018)

Benefits:
- Financial support according to experience
- Health insurance from the Mexican Social Security Institute
- Round-trip international airfare at the beginning and end.


Position description: The Department of Signal Processing and Acoustics, Autonomous University of Zacatecas, is looking for candidates for fully-funded PhD and Post-doc positions in signal processing, filter design, embedded systems, speech recognition and synthesis. The signal processing group (led by Dr. Hamurabi Gamboa-Rosales) at the Autonomous University of Zacatecas works on algorithm design, signal processing, electronic design, machine learning, and probabilistic modeling in speech recognition and synthesis. The group also belongs to the National Laboratory in embedded systems, advanced electronic design and microsystems. We are looking for outstanding candidates to join our research group as PhD students and Post-doc researchers to work on any of our research themes, for example:
• Digital signal processing
• Optimal filtering
• FPGAs
• Microsystems
• Large-vocabulary speech recognition and text-to-speech synthesis
• ASR in noisy environments


Candidate profile for PhD / Post-Doc positions. The candidate will have:
- A Master's / PhD degree, as required by the programme applied for, in digital signal processing, electronic design, speech signal processing, acoustics, machine learning, computer science, electrical engineering, psychology or a related discipline.
- Background in signal processing or electronic design (FPGAs).
- Good programming skills in Java, C/C++, Python or Matlab.


Contact: Interested applicants can contact Dr. Hamurabi Gamboa-Rosales for more information, or directly email an application letter including a Curriculum Vitae, a list of publications and a statement of research interests. Email: hamurabigr@uaz.edu.mx ; hamurabigr@hotmail.com  Telephone MX: +52 (1) 492-121-6787

Top

6-18(2017-06-01) Call for researchers 2017-2018 at INA, Paris, France

Call for researchers 2017-2018

New research support schemes at Ina:

Associate researchers and research grants


To encourage the development of scientific work based on its collections and on the analysis tools it develops, Ina has decided to create two new schemes in 2017 to support research and the scientific exploitation of its collections:

  • the granting of associate researcher status at Ina
  • the award of research grants

Through these schemes, Ina intends to support PhD students and researchers in carrying out original and innovative research projects on (or making use of) its collections, or dealing with the analysis or processing of images and/or sounds and/or associated data.

The Institute offers the selected researchers a privileged working environment, together with various forms of material support.

These new schemes complement the Inathèque prizes created in 1997 and add a new strand to the Institute's scientific policy.

The rules of the call are available on the Inathèque website: http://www.inatheque.fr/actualites/2017/mai-2017/appel-chercheurs-2017-2018.html

Top

6-19(2017-06-23) Temporary positions at the forensic police service (Police Technique et Scientifique), Écully (Lyon), France

The audio unit of the Police Technique et Scientifique (forensic science service, Écully, near Lyon, France) is looking for temporary workers to carry out segmentation and correction of automatic alignments as part of phonetic studies.
The following profile is sought:

- an interest in linguistics or languages
- good command of computing and new technologies
- familiarity with the Praat software would be appreciated
The assignments can start as soon as possible and may continue until October.
For more information, please send an email with your contact details to ptsvox@gmail.com.
Top

6-20(2017-06-06) Post-doctoral Research Associate in Advanced Deep Neural network Architectures for ASR , Univ. of Crete, Greece

Department of Computer Science, University of Crete, Greece
Post-doctoral Research Associate in Advanced Deep Neural network Architectures for ASR  (Fixed Term)
 
SALARY: €24000-€28000 per year
CLOSING DATE: 30 June 2017
REFERENCE: ASR1
TO APPLY: Send detailed CV, a motivation letter and 3 major publications to yannis@csd.uoc.gr
 
In the past few years, Deep Neural Networks (DNNs) have achieved tremendous success for many supervised machine learning tasks, including acoustic modelling for Automatic Speech Recognition (ASR). Advanced models such as Convolutional Neural Networks (CNNs) and Long Short-Term Memory recurrent neural networks (LSTMs) have contributed to recent empirical breakthroughs. Network depth has played perhaps the most important role in these successes. However, increased depth introduces challenges in the optimization of the network and, despite efforts to overcome these challenges, some optimization issues remain stubbornly resistant. Advanced networks such as highway networks and (wide) residual networks seem to offer solutions to these issues.
This position represents an ideal opportunity to work in or move into advanced deep neural networks, as it will involve collaborating widely across academia and industry, and working on one of the most pressing research areas of machine learning for the development of robust ASR systems.
Based in Heraklion, Crete, the post will be with Prof. Yannis Stylianou and Dr. Vassilis Tsiaras as part of the speech processing group within the Department of Computer Science at the University of Crete. You will explore a rich set of network architectures and thoroughly examine how several different aspects affect the accuracy of ASR. The work will be performed within the framework of advanced deep neural network architectures for various signal processing tasks including 1D and 2D signals. The focus of the post will be to perform various experiments with well-known architectures, explore and suggest modifications, and transfer knowledge from various signal processing and classification tasks to speech processing for the purpose of ASR. Outcomes will directly feed into improvements of our in-house ASR systems, which work with state-of-the-art ASR tasks (e.g., CHiME4, REVERB), and of our industrial partners' systems using real-life data.
The post involves travel to international conferences and project meetings with our academic and industrial partners. There will be the possibility to co-advise doctoral students and potentially other teaching opportunities.
Applicants should have a doctorate in speech signal processing for ASR, computer science, applied mathematics or a related field, and ideally a strong background in deep learning and mathematics. Knowledge of deep learning toolkits such as TensorFlow or Theano and of ASR systems like Kaldi is an advantage. Proficiency in computer programming in C and/or Python is expected.
Informal inquiries should be directed to Prof. Yannis Stylianou by email, yannis@csd.uoc.gr
Fixed term: In the first instance, the funding supporting the post is for two years. We are expecting project extension which will provide funding for a further 7-12 months for this post.
Interviews are expected to take place the week commencing 10th July 2017. Expected start date: September 2017, however earlier and later start dates will be considered.
 
To apply, please send detailed CV, a motivation letter and 3 major publications of yours to: yannis@csd.uoc.gr (Prof. Yannis Stylianou)

Top

6-21(2017-06-06) Post-doctoral Research Associate in Data Augmentation in the context of Deep Neural network ASR, Univ.of Crete, Greece

 

Department of Computer Science, University of Crete, Greece

Post-doctoral Research Associate in Data Augmentation in the context of Deep Neural network ASR

(Fixed Term)

SALARY: €24000-€28000 per year

CLOSING DATE: 30 June 2017

REFERENCE: ASR2

TO APPLY: Send detailed CV, a motivation letter and 3 major publications to yannis@csd.uoc.gr

In the past few years, Deep Neural Networks (DNNs) have achieved tremendous success for many supervised machine learning tasks, including acoustic modelling for Automatic Speech Recognition (ASR). Advanced models such as Convolutional Neural Networks (CNNs) and Long Short-Term Memory recurrent neural networks (LSTMs) have contributed to recent empirical breakthroughs. However, deep learning methods are quite demanding in terms of the amount of data needed to train an acoustic model for ASR, and as a result significant amounts of transcribed data have become available for training use. But data transcription is an expensive and time-consuming process. On the other hand, simply adding data recorded in real-world conditions puts serious constraints on the efficient training of the acoustic models. Various works on data augmentation show that word error rate (WER) can be significantly reduced if properly augmented data are used.

This position represents an ideal opportunity to work in or move into data augmentation research area in the context of advanced deep neural networks for ASR, as it will involve collaborating widely across academia and industry, and working on one of the most pressing research areas of machine learning for the development of robust ASR systems.

Based in Heraklion, Crete, the post will be with Prof. Yannis Stylianou and Dr. George Kafentzis as part of the speech processing group within the Department of Computer Science at the University of Crete. You will design and develop smart approaches for spoken data augmentation for the purpose of multi-condition training of deep learning-based ASR systems. The work will be performed within the framework of advanced deep neural network architectures for various ASR tasks. The focus of the post will be to perform various experiments with spoken data generation, to explore and suggest modifications, and to transfer and reshape knowledge from various signal processing areas for the purpose of ASR. Outcomes will directly feed into improvements of ASR systems in-house, working with state-of-the-art ASR tasks (e.g., AURORA-4, CHiME4, REVERB) and with our industrial partners using real-life data.

The post involves travel to international conferences and project meetings with our academic and industrial partners. There will be the possibility to co-advise doctoral students and potentially other teaching opportunities.

Applicants should have a doctorate in speech signal processing for ASR, statistical speech synthesis and voice conversion, audio signal processing, computer science, applied mathematics or a related field, and ideally a strong background in deep learning and mathematics. Knowledge of deep learning toolkits such as TensorFlow or Theano and of ASR systems such as Kaldi is an advantage. Proficiency in computer programming in C and/or Python is expected.

Informal inquiries should be directed to Prof. Yannis Stylianou by email, yannis@csd.uoc.gr

Fixed term: In the first instance, the funding supporting the post is for two years. We expect a project extension which will provide funding for a further 7-12 months for this post.

Interviews are expected to take place the week commencing 10th July 2017.

Expected start date: September 2017; however, earlier and later start dates will be considered.

To apply, please send detailed CV, a motivation letter and 3 major publications of yours to: yannis@csd.uoc.gr (Prof. Yannis Stylianou)

Top

6-22(2017-06-18) 2 (W/M) researcher positions at IRCAM, Paris, France

Position: 2 (W/M) researcher positions at IRCAM

Starting: September 1st, 2017

Duration: 18 months

Deadline for application: July 1st, 2017


IRCAM is offering 2 researcher positions related to the European H2020 projects ABC_Dj and Future Pulse.

The goal of the first position is to develop robust algorithms for automatic melody extraction (AME) and use them to characterize the dominant melodic profiles.

The goal of the second position is to develop robust machine learning algorithms for the automatic recognition of music style/mood in live performances.

Candidates should have:

- High skill in audio signal processing (spectral analysis, audio feature extraction, parameter estimation); the candidate should preferably hold a PhD in this field
- High skill in machine learning (the candidate should preferably hold a PhD in this field)
- High skill in Matlab/Python programming, skills in C/C++ programming
- Good knowledge of Linux, Windows and Mac-OS environments
- High productivity, methodical work, excellent programming style

The hired researchers will also collaborate with the development team and participate in the project activities (evaluation of technologies, meetings, specifications, reports).

 

Introduction to IRCAM:

IRCAM is a leading non-profit organization associated with the Centre Pompidou, dedicated to music production, R&D and education in sound and music technologies. It hosts composers, researchers and students from many countries cooperating in contemporary music production, scientific and applied research. The main topics addressed in its R&D department include acoustics, audio signal processing, computer music, interaction technologies and musicology. IRCAM is located in the centre of Paris near the Centre Pompidou, at 1, Place Igor Stravinsky, 75004 Paris.

Salary:

According to background and experience

Applications:

Please send an application letter with the reference 201706RES together with your resume and any suitable information addressing the above issues, preferably by email, to: peeters at ircam dot fr, with cc to vinet at ircam dot fr and roebel at ircam dot fr.

 

Top

6-23(2017-06-18) CIFRE PhD thesis proposal in Computer Science, natural language processing

CIFRE PhD thesis proposal in Computer Science, natural language processing

Calystene company and Laboratoire d'Informatique de Grenoble

Calystene offers a CIFRE PhD position in the framework of a collaboration with the GETALP team of the Laboratoire d'Informatique de Grenoble.

Start of the thesis: as soon as possible.
Location: the position is based in Grenoble - Eybens
The salary is approximately €30,000 gross per year.

Subject description:

With the evolution of the information systems of healthcare institutions, practitioners are encouraged to enter more and more information digitally through various specialised software applications. Getting to grips with this software always involves a training and adaptation phase that is problematic for some of Calystene's customers. Calystene therefore wishes to make the entry of drug prescriptions in its FUTURA SMART DESIGN® software natural for practitioners, in particular those external to the healthcare institution. Indeed, these contributors, valuable but occasional, do not have time to be trained on the software interfaces and do not have immediate access to a workstation (this is notably the case of town doctors who work in retirement homes). Prescriptions are therefore regularly written on paper and never enter the information system, which is detrimental to care management.

To address this problem, this PhD aims to define natural language interaction models and to implement them in Calystene's FUTURA SMART DESIGN® platform, in order to offer natural written and/or spoken entry of medical prescriptions on a mobile device, as well as intuitive, intelligent resolution of errors or missing information. The prototype, named 'Intelligent Prescription Completion', will come as close as possible to the prescription language used by physicians. Prescription lines will be entered either by voice dictation, handwriting recognition on a tablet, or standard keyboard input. The prescription will be entered in quasi-natural language (the language of medical prescriptions), and the structuring and validation of the data (necessary for checks and management) will be carried out in real time by an algorithm developed by Calystene.

To reach these objectives, the solution will rely on a dialogue-based approach [1] in which the semantics of the information is first extracted from the utterance [2], whether written or obtained from speech [3], and then analysed by the business logic in order to validate, suggest modifications or trigger requests for additional information, in the manner of a chatbot. The topic of the PhD therefore lies in the field of automatic understanding of natural language and speech and of automatic reasoning, with a strong prototyping and validation component. The candidate will be encouraged to publish their progress at the major conferences of the field in NLP (ACL, Interspeech) and in AI applied to medicine (AIME, AMIA).
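As a rough, hypothetical illustration of the structuring step mentioned above (turning a quasi-natural prescription line into fields that the business logic can validate), the toy sketch below uses a single regular expression; the actual Calystene algorithm and the NLU models developed in the thesis would be far more elaborate, and the pattern, field names and example are purely illustrative.

```python
import re

# Toy pattern for lines like "Paracétamol 500 mg 3 fois par jour pendant 5 jours".
# A real system would rely on a drug lexicon and proper (spoken) NLU, not one regex.
LINE = re.compile(
    r"(?P<drug>[A-Za-zÀ-ÿ\- ]+?)\s+"
    r"(?P<dose>\d+(?:[.,]\d+)?)\s*(?P<unit>mg|g|ml)\s+"
    r"(?P<freq>\d+)\s*fois par jour"
    r"(?:\s+pendant\s+(?P<duration>\d+)\s*jours?)?",
    re.IGNORECASE,
)

def parse_prescription_line(text):
    m = LINE.search(text)
    if m is None:
        return None  # hand the utterance back to the dialogue loop for clarification
    return m.groupdict()

print(parse_prescription_line("Paracétamol 500 mg 3 fois par jour pendant 5 jours"))
```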

[1] S. Young, M. Gašić, S. Keizer, F. Mairesse, J. Schatzmann, B. Thomson, and K. Yu. 'The hidden information state model: A practical framework for POMDP-based spoken dialogue management.' Computer Speech & Language, vol. 24, no. 2, pp. 150-174, 2010.
[2] Xu H, Stenner SP, Doan S, Johnson KB, Waitman LR, Denny JC. MedEx: a medication information extraction system for clinical narratives. Journal of the American Medical Informatics Association: JAMIA. 2010;17(1):19-24.
[3] M. Vacher, S. Caffiau, F. Portet, B. Meillon, C. Roux, E. Elias, B. Lecouteux, P. Chahuara. Evaluation of a context-aware voice interface for Ambient Assisted Living: qualitative user study vs. quantitative system evaluation. ACM Transactions on Speech and Language Processing, Association for Computing Machinery, 2015, Special Issue on Speech and Language Processing for AT (Part 3), 7(2), pp. 5:1-5:36.

Candidate profile:
You hold, or are completing, a research Master's degree (Master 2 Recherche) in computer science or NLP and wish to pursue a CIFRE PhD in industry in collaboration with a research laboratory. You are passionate about information and communication technologies and have good knowledge of Android/iPhone development. You have training and experience in the study and/or development of natural language processing. Knowledge of machine learning and corpus acquisition would be a plus.

Please send a CV, a cover letter and recommendation letters to Jean-Marc BABOUCHKINE (jm.babouchkine@calystene.com) and François Portet (francois.portet@imag.fr)

Calystene : http://www.calystene.com/
Getalp : http://lig-getalp.imag.fr/

Top

6-24(2017-06-19) PhD position in Computational Linguistics for Ambient Intelligence, Grenoble, France

Keywords: Natural language understanding, decision support system, smart home

The Laboratoire d'Informatique de Grenoble (LIG) of the University Grenoble Alpes, Grenoble, France invites applications for a PhD position in Computational Linguistics for Ambient Intelligence.

University of Grenoble Alpes is situated in a high-tech city located at the heart of the Alps, in outstanding scientific and natural surroundings. It is 3h by train from Paris, 2h from Geneva, and less than 1h from Lyon international airport.

The position starts in September 2017 and ends in July 2020. It is proposed in the context of the national project Vocadom (http://vocadom.imag.fr/), whose aim is to build technologies that make natural hands-free speech interaction with a home automation system possible from anywhere in the home, even in adverse conditions [Vacher2015].

The aim of the PhD will be to build a new generation of situated spoken human-machine interaction where sentences uttered by a human are understood within the context of the interaction in the home. The targeted application is a distant-speech, hands-free and ubiquitous voice user interface that makes the home automation system react to voice commands [Chahuara2017]. The system should be able to process possibly erroneous outputs from an ASR (Automatic Speech Recognition) system, extract the meaning of a voice command, and decide which command to execute or which relevant feedback to send to the user. The challenge will be to constantly adapt the system to new lexical phrases (no a priori grammar), new situations (e.g., unseen user, context) and changes in the house (e.g., new device, device out of order). In this work, we propose to extend classical S/NLU (Spoken/Natural Language Understanding) approaches by including non-linguistic contextual information in the NLU process to tackle ambiguity, and to borrow zero-shot learning techniques [Ferreira2015] to extend the lexical space online. Reinforcement learning is targeted to adapt the models to the user(s) throughout the use of the system [Mnih2015].
The candidate will be strongly encouraged to publish their progress at the main events of the field (ACL, Interspeech, Ubicomp).
The PhD candidate will also be involved in experiments including a real smart home and real users (elderly people and people with visual impairment) [Vacher2015].
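As a rough illustration of the zero-shot matching idea mentioned above, the toy sketch below maps an utterance to the closest command by cosine similarity between averaged word embeddings; the tiny hand-made vectors and command names are placeholders for real pre-trained embeddings and for the actual Vocadom command set.

```python
import numpy as np

# Toy stand-in for pre-trained word embeddings (a real system would load word2vec/fastText).
EMB = {
    "allume":  np.array([0.9, 0.1, 0.0]), "éteins": np.array([-0.9, 0.1, 0.0]),
    "lumière": np.array([0.2, 0.9, 0.0]), "lampe":  np.array([0.3, 0.8, 0.1]),
    "volet":   np.array([0.0, 0.1, 0.9]),
}

COMMANDS = {  # hypothetical home-automation intents described by a few seed words
    "turn_on_light":  ["allume", "lumière"],
    "turn_off_light": ["éteins", "lumière"],
    "close_shutter":  ["ferme", "volet"],
}

def embed(words):
    vecs = [EMB[w] for w in words if w in EMB]
    return np.mean(vecs, axis=0) if vecs else np.zeros(3)

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def match(utterance):
    u = embed(utterance.lower().split())
    return max(COMMANDS, key=lambda c: cosine(u, embed(COMMANDS[c])))

print(match("allume la lampe"))  # expected to map to turn_on_light
```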


REFERENCES:
[Mnih2015] Mnih, Kavukcuoglu et al. Human-level control through deep reinforcement learning. Nature 518, 529-533.
[Chahuara2017] P. Chahuara, F. Portet, M. Vacher. Context-aware decision making under uncertainty for voice-based control of smart home. Expert Systems with Applications, Elsevier, 2017, 75, pp. 63-79.
[Ferreira2015] E. Ferreira, B. Jabaian, F. Lefèvre. Online adaptative zero-shot learning spoken language understanding using word-embedding. Acoustics, Speech and Signal Processing (ICASSP), 2015.
[Vacher2015] M. Vacher, S. Caffiau, F. Portet, B. Meillon, C. Roux, E. Elias, B. Lecouteux, P. Chahuara. Evaluation of a context-aware voice interface for Ambient Assisted Living: qualitative user study vs. quantitative system evaluation. ACM Transactions on Speech and Language Processing, Association for Computing Machinery, 2015, pp. 5:1-5:36.

JOB REQUIREMENTS AND QUALIFICATIONS

- Master's degree in Computational Linguistics or Artificial Intelligence (Computer Science can also be considered)
- Solid programming skills,
- Good background in machine learning,
- Excellent English communication and writing skills,
- Good command of French (mandatory),
- Experience in experimentation involving human participants would be a plus
- Experience in dialogue systems would be a strong plus

Applications should include:

- Cover letter outlining interest in the position
- Names of two referees
- Curriculum Vitae (CV) (with publications if applicable)
- Copy of the university marks (grade list)

and be sent to michel.vacher@imag.fr and francois.portet@imag.fr



Research Group Website : http://getalp.imag.fr
Research project website : http://vocadom.imag.fr/
Top

6-25(2017-06-20) Full-time post-Doctoral researcher position at LORIA Nancy, France

Loria, a computer science lab in Nancy, France, has a 12-month funded full-time post-doctoral researcher position starting in October 2017.
The post-doctoral position is funded by AMIS (Access Multilingual Information OpinionS), a Chist-Era project (http://deustotechlife.deusto.es/amis/).

The topic of the post-doc is the automatic comparison of multilingual opinions in videos. Two videos in two different languages concerning the same topic have to be compared. One of the videos is summarized and translated into the language of the second one. The second video is then summarized, and the opinions of the two original videos are compared in terms of emotion labels such as anger, disgust, fear, joy, sadness, surprise...
They should also be compared in terms of basic sentiments.
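As a minimal sketch of one possible way to compare the two videos once each has been labelled, assuming each summary comes with a distribution over the emotion labels listed above, one could measure the cosine similarity between the two distributions; the label proportions below are purely illustrative, not project data.

```python
import numpy as np

EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise"]

def emotion_similarity(dist_a, dist_b):
    """Cosine similarity between two emotion-label distributions."""
    a = np.array([dist_a.get(e, 0.0) for e in EMOTIONS])
    b = np.array([dist_b.get(e, 0.0) for e in EMOTIONS])
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Illustrative label proportions for the two videos (not real data).
video_fr = {"anger": 0.5, "sadness": 0.3, "surprise": 0.2}
video_en = {"anger": 0.4, "joy": 0.2, "sadness": 0.4}
print(round(emotion_similarity(video_fr, video_en), 3))
```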

 

Social networks will be used in order to reinforce the analysis of the contents in terms of opinions and sentiments.

 

The AMIS group will make the video summaries available as text. The candidate will work on NLP, but skills in video analysis will be appreciated.

 

The applicant will contribute also to other tasks in collaboration with other partners of AMIS project.

 

The successful candidate will join the SMarT research team and will be supervised by Prof. Kamel Smaïli, Dr D. Langlois and Dr D. Jouvet.
The applicant will also work with Dr O. Mella and Dr D. Fohr.

 

Location: Loria - Nancy (France) 
Duration: October 2017 – September 2018 
Net salary: from 1800 Euros to 2400 Euros per month.

The ideal applicant should have: 

 

* A PhD in NLP, opinion and sentiment mining or other strongly related discipline.
* A very solid background in statistical machine learning. 
* Strong publications. 
* Solid programming skills to conduct experiments. 
* Excellent level in English. 

Applicants should send to smaili@loria.fr: 

* A CV
* A cover letter outlining the motivation
* Three most representative papers 

 

 

 

Top

6-26(2017-06-20) CIFRE PhD thesis at Orange Labs, Lannion, France
Orange Labs offers a CIFRE PhD position in Computer Science related to the following domains: machine learning, structured prediction and natural language processing.
 
This thesis is part of a collaboration with the Expression team of IRISA in Lannion. 
 
The subject description is available at this location:
 
 
Candidate profile: Candidates must hold a research Master's degree in computer science, statistics or signal processing. Combined mathematics/computer science backgrounds are preferred. 
An excellent level of English is required.
Application deadline: 31 July 2017
Location: Lannion
Top

6-27(2017-06-25) Two positions are available for internships at FBK, Trento, Italy
Two positions are available for internships at FBK, Trento, Italy

Title: Deep machine learning for speaker diarization
Duration: Jan 1 - Oct 31, 2018
Url:https://hr.fbk.eu/en/jobs


Title: DNN adaptation for acoustic modeling in speech recognition
Duration: Jan 1 - Oct 31, 2018
Url:https://hr.fbk.eu/en/jobs
 
Application deadline: 15 September 2017
---------
 
Top

6-28(2017-06-26) Language Resources Project Manager -Junior position at ELDA Paris France

The European Language resources Distribution Agency (ELDA), a company specialized in Human Language Technologies within an international context, is currently seeking to fill an immediate vacancy for a Language Resources Project Manager – Junior position. This offers excellent opportunities for young, creative and motivated candidates wishing to participate actively in the Language Engineering field.

Language Resources Project Manager - Junior (m/f)

Under the supervision of the Language Resources Sales Manager, the Language Resources Project Manager – Junior will be in charge of the identification of Language Resources (LRs) and the negotiation of rights in relation to their distribution.

The position includes, but is not limited to, the responsibility of the following tasks:

  • Identification of LRs and Cataloguing.
  • Negotiation of distribution rights, including interaction with LR providers, drafting of distribution agreements, definition of prices of language resources to be integrated in the ELRA catalogue.
  • LR Packaging and Archiving.
  • Designing and evaluating workflows for IPR clearance in the digital environment.

Profile:

  • Master's degree or equivalent in Law and Computer Science, with an awareness of Intellectual Property Rights and Data Protection issues in the digital environment.
  • The ideal candidate will have experience in computational linguistics, information science, knowledge management or similar fields.
  • Experience in project management and participation in European projects, as well as practice in contract and partnership negotiation at an international level, would be a plus.
  • Dynamic and communicative, flexible to combine and work on different tasks.
  • Ability to work independently and as part of a team.
  • Proficiency in English, with strong writing and documentation skills. Communication skills required in a French-speaking working environment.
  • Citizenship of (or residency papers) a European Union country.

All positions are based in Paris. Applications will be considered until the position is filled.

Salary is commensurate with qualifications and experience.
Applicants should email a cover letter addressing the points listed above together with a curriculum vitae to:

ELDA
9, rue des Cordelières
75013 Paris
FRANCE
Fax : 01 43 13 33 30
Mail: job@elda.org

ELDA is acting as the distribution agency of the European Language Resources Association (ELRA). ELRA was established in February 1995, with the support of the European Commission, to promote the development and exploitation of Language Resources (LRs). Language Resources include all data necessary for language engineering, such as monolingual and multilingual lexica, text corpora, speech databases and terminology. The role of this non-profit membership Association is to promote the production of LRs, to collect and to validate them and, foremost, make them available to users. The association also gathers information on market needs and trends.

For further information about ELDA and ELRA, visit:
http://www.elra.info

Top

6-29(2017-06-27) PhD thesis on Parkinsonian speech processing at INRIA Bordeaux, France

PhD thesis on Parkinsonian speech processing at INRIA Bordeaux

Subject: Nonlinear speech processing for the analysis and classification of Parkinsonian voices

Scientific context:

Parkinson's disease (PD) is the most common neurodegenerative disease after Alzheimer's disease. It affects 1.5% of the population over 65 and 143,000 people in France. Multiple system atrophy (MSA) is a rare, sporadic, progressive neurodegenerative disease of unknown etiology. It has a prevalence of 2 to 5 per 100,000 and has no effective treatment. MSA belongs to the group of atypical parkinsonian disorders and carries a poor prognosis. In the early stages of the disease, the symptoms of PD and MSA are very similar, especially for MSA-P, where the parkinsonian syndrome predominates. The differential diagnosis between MSA-P and PD can be very difficult in these early stages, whereas early diagnostic certainty is important for the patient because of the diverging prognoses. Indeed, despite recent efforts, no valid objective marker is currently available to guide the clinician in this differential diagnosis. The need for such markers is therefore very high in the neurology community, particularly given the severity of the MSA-P prognosis.

Speech and voice disorders, commonly called dysarthria [1,2], are a clinical marker of Parkinson's disease that coincides with motor impairment and with the onset of cognitive impairment. As with PD patients, and depending on which brain areas are damaged, people suffering from MSA may also have speech disorders: articulation difficulties, staccato rhythm, creaky or weak voice. Dysarthria in MSA is more severe and appears earlier, in the sense that it requires earlier rehabilitation than in PD.

Speech disorders are thus an early symptom common to both diseases, but of different origin. Our approach consists in using dysarthria, through digital processing of the patients' voice recordings, as a vehicle to support the differential diagnosis between PD and MSA-P in the early stages of the disease.

Thesis objective:

Pathological voices, such as PD and MSA voices, generally exhibit strong nonlinearity and high turbulence. Nonlinear/turbulent phenomena are not naturally well described by linear signal processing, which nevertheless currently dominates speech technology. From a methodological point of view, the objective of this thesis is therefore to study Parkinsonian speech within the framework of nonlinear and turbulent signals and systems [3]. This framework is better suited to analysing the range of nonlinear and turbulent phenomena observed in pathological voices in general, and in PD and MSA voices in particular. In particular, we will adopt an approach based on new nonlinear speech analysis algorithms recently developed in the GeoStat team [4]. The goal is to extract relevant speech features in order to design new dysarthria measures that allow accurate discrimination between PD and MSA voices. This will also require the use of machine learning methods in order to develop robust classifiers (to discriminate between PD and MSA voices) and to establish mappings (regression) between speech measures and the standard clinical scores quantifying disease severity.
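As a minimal sketch of the classification step described above, assuming feature vectors have already been extracted from the recordings, a standard scikit-learn pipeline could be cross-validated as follows; the synthetic data below merely stands in for real dysarthria measures.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in: one feature vector per recording (real features would come
# from the nonlinear speech analysis described above); label 0 = PD, 1 = MSA-P.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (40, 12)), rng.normal(0.8, 1.0, (40, 12))])
y = np.array([0] * 40 + [1] * 40)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```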

The clinical partners of this project are internationally renowned centres on PD and MSA at CHU-Bordeaux and CHU-Toulouse. The academic partners are the Samova team of IRIT, which has extensive expertise in (linear) speech processing, and the Institut Mathématique de Toulouse (IMT) for the machine learning aspects.

The PhD student will thus actively participate in data collection, in coordination with the neurologists and phoniatricians of CHU-Bordeaux and CHU-Toulouse. These data will consist of recordings of the patients' voices using a digital recorder and the EVA2 device (http://www.sqlab.fr/), as well as electroglottographic (EGG) signals.

References:

[1] Freed, D. Motor Speech Disorders. Thomson Learning Eds. 2000.
[2] Auzou, P.; Rolland, V.; Pinto, S.; Ozsancak, C. (eds.). Les dysarthries. Editions Solal. 2007.
[3] Kantz, H. and Schreiber, T. Nonlinear Time Series Analysis. 2nd ed. 2004, Cambridge; New York: Cambridge University Press.
[4] PhD thesis of Vahid Khanagha. GeoStat team, INRIA Bordeaux-Sud Ouest. January 2013. http://geostat.bordeaux.inria.fr/images/vahid%20khanagha%204737.pdf

Supervisor: Dr. Khalid Daoudi, GeoStat team (khalid.daoudi@inria.fr).

Location: INRIA Bordeaux Sud-Ouest (http://www.inria.fr/bordeaux). Bordeaux, France.

Funding: ANR project (Voice4PD-MSA)

Start of the thesis: between 1 October and 31 December 2017

Duration: 3 years

Salary: ~€1600 net/month, including social security and health insurance coverage.

Required skills: Very good knowledge of speech/signal processing as well as C++/Python and Matlab programming is required. Knowledge of machine learning would be a strong plus.

Applications should be sent to khalid.daoudi@inria.fr

Top

6-30(2017-07-12) Open PhD and postdoc positions at LIMSI - CNRS, Orsay, France
Open PhD and postdoc positions at LIMSI - CNRS, Orsay, France
Automatic enrichment of TV series and movies transcripts
Keywords : natural language processing, speech processing, machine learning, deep learning

The goal of this project is to fully exploit the audio stream to automatically enrich speech transcripts and subtitles of TV series and movies with the name and position of the characters.

speaker A → 'Nice to meet you, I am Leonard, and this is Sheldon. We live across the hall.'
speaker B → 'Oh. Hi. I'm Penny.'

speaker A → 'Sheldon, what the hell are you doing?'
speaker C → 'I am not quite sure yet. Do you know where Howard lives?'

Just looking at these two short conversations, a human can easily infer that 'speaker A' is actually 'Leonard', 'speaker B' is 'Penny' and 'speaker C' is 'Sheldon'. The objective of this project is to combine natural language processing, speech processing, and computer vision to do the same automatically.
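As a toy illustration of the lexical part of this task only, the sketch below propagates names from explicit self-introductions in the transcript; the real project would combine such textual cues with speech and visual processing, and the pattern used here is deliberately naive.

```python
import re

# Naive pattern for self-introductions like "I am Leonard" or "I'm Penny".
INTRO = re.compile(r"\bI(?: a|')m (?P<name>[A-Z][a-z]+)\b")

def name_speakers(turns):
    """turns: list of (speaker_label, utterance). Returns {label: inferred_name}."""
    names = {}
    for speaker, text in turns:
        m = INTRO.search(text)
        if m and speaker not in names:
            names[speaker] = m.group("name")
    return names

turns = [
    ("speaker A", "Nice to meet you, I am Leonard, and this is Sheldon."),
    ("speaker B", "Oh. Hi. I'm Penny."),
]
print(name_speakers(turns))  # {'speaker A': 'Leonard', 'speaker B': 'Penny'}
```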
 
Top

6-31(2017-07-25) Post-doc in Paris and Saclay, France

Post-doc in Paris and Saclay: Semi-supervised learning and deep learning for measuring the customer's emotional experience in spoken interactions


[1] Telecom ParisTech, 46 rue Barrault, 75013 Paris

[2] EDF Lab Paris Saclay, 7 boulevard Gaspard Monge, 91120 Palaiseau

Duration: 1 year

Start: November 2017

Salary: depending on experience, starting at €2300/month

       

*Description of the post-doc subject*

With the recent enthusiasm of Big Data for 'Feel data', EDF wishes to develop methods for the automatic analysis of the customer's emotional experience in its spoken interactions. The spoken interactions considered are call-centre data and data from interactions with voice-based virtual assistants (of the Alexa or Siri type). The objective of the post-doc is to set up semi-supervised learning methods, combined with deep learning, for the automatic analysis of the customer's emotional experience. The methods will rely on linguistic and acoustic features extracted from the spoken interactions.

The post-doctoral researcher will take part in a collaboration between EDF Lab and Telecom ParisTech, and the research work will involve the following steps:

- setting up an annotation scheme for the user's emotional experience on the data collected at EDF Lab (call-centre data and data from interactions with voice-based virtual assistants of the Alexa or Siri type)

- extracting acoustic features characteristic of the non-verbal realisations of the user's emotional experience

- extracting linguistic features characteristic of the verbal realisations of the user's emotional experience

- setting up learning strategies that mix unsupervised approaches with deep learning approaches (see the sketch below)
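Below is a minimal sketch of one common semi-supervised strategy (self-training), run on synthetic features that merely stand in for the acoustic and linguistic descriptors mentioned above; it is only an illustration with scikit-learn, not the architecture to be developed during the post-doc.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

# Synthetic stand-in for acoustic/linguistic feature vectors; label -1 = unlabelled.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (100, 8)), rng.normal(1, 1, (100, 8))])
y = np.array([0] * 100 + [1] * 100)
y_semi = y.copy()
y_semi[rng.choice(200, size=160, replace=False)] = -1  # keep only ~40 labels

# Self-training: iteratively pseudo-label confident unlabelled examples.
model = SelfTrainingClassifier(LogisticRegression(max_iter=1000), threshold=0.8)
model.fit(X, y_semi)
print("accuracy on all data:", round(model.score(X, y), 3))
```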

The post-doc will take place within the two research centres: EDF Lab [2] and Telecom ParisTech (IDS image and signal processing department of Telecom ParisTech [1], S2a team, Social Computing group).


*Candidate profile*

The candidate must have at least the following qualifications:

- a PhD degree

- research and publications in at least one of the following areas: machine learning, speech processing, affective computing, natural language processing

- excellent programming skills (preferably in Python)

- excellent command of French and good command of English

The following skills will also be an asset:

- semi-supervised learning

- deep learning

- acoustic analysis of emotions

 

-- Practical information

Location of the post-doc: Paris (Telecom ParisTech) and Palaiseau (EDF Lab Paris-Saclay)

Supervision: Delphine Lagarde, Aurélie Dano (EDF Lab) and Chloé Clavel (Telecom ParisTech)

-- How to apply:

Applications should be sent to Chloé Clavel (chloe.clavel@telecom-paristech.fr)

They must be compiled **in a single PDF file** and include:

- a complete and detailed CV including the list of publications

- a cover letter specific to the post-doc subject

- the names and addresses of 2 references

[1] http://www.tsi.telecom-paristech.fr

[2] https://www.edf.fr/groupe-edf/qui-sommes-nous/activites/recherche-et-developpement/nos-centres-de-recherche/le-centre-de-recherche-edf-lab-paris-saclay-souffle-sa-premiere-bougie

Top

6-32(2017-07-26) DSP engineer positions, Proactivaudio, Vienna, Austria
Several positions related to DSP engineering for speech and audio applications are open at Proactivaudio in Vienna, Austria. More details can be found here: http://www.proactivaudio.pro/?page_id=2136.


1. Software Engineer (f/m) – Digital Audio
 
Location Vienna, Austria.

Description: We are looking for a full-time software engineer with experience and passion for software development of audio systems. The successful candidate has the unique opportunity to grow along with the Start-up. The main roles are to port and optimize advanced audio processing algorithms in a variety of platforms and architectures (Windows, macOS, iOS, Android), as well as the development of GUI-based host applications interfacing with heterogeneous audio devices.

Qualifications:
– 3 years of experience in industry or as academic post-graduate.
– Excellent software design/programming skills in C/C++ and Objective-C.
– Solid knowledge of multi-threaded programming and debugging.
– Be able to multi-task and learn new technologies quickly.
– Thorough understanding of real-time digital audio.

Desired Skills:
– Working experience with Audio Units, VST, and Core Audio APIs.
– Experience with WebRTC as committer.

Education: Bachelor's or (preferably) Master's degree in Software Engineering, Electrical/Computer Engineering, or equivalent.

Conditions
Full-time employment in accordance with Austrian regulations. A gross annual salary of EUR 39,550 can be expected. Overpayment is possible depending on qualifications and work experience. The workplace has very good public transport connections.

How to Apply
Send your application by e-mail to contact@proactivaudio.pro with a single attachment (CV and proof documents) in PDF format.

About
proactivaudio is a young Austrian startup with an innovative patented technology for acoustic echo reduction (AER) and noise reduction. Our AER solution remains fully operative under any hostile acoustic scenario, such as permanent double talk, background noise and changes in the acoustic room, all at the same time.

2. Software Engineer (f/m) – Embedded DSP
 
 
Location Vienna, Austria.

Description:
We are looking for a full-time software engineer with experience and passion for software development of audio systems. The successful candidate has the unique opportunity to grow along with the Start-up. The main roles are to port and optimize advanced audio processing algorithms in a variety of embedded systems, from Texas Instruments, Analog Devices and ARM Cortex.

Qualifications:
– 3 years of experience in industry or as academic post-graduate.
– Excellent software design/programming skills in C/C++ and SIMD instructions.
– Solid knowledge of real-time embedded debugging.
– Be able to multi-task and learn new technologies quickly.
– Good knowledge of digital signal processing.

Desired Skills:
– Working experience with Code Composer Studio.
– Working experience with VisualDSP++ and CrossCore Embedded Studio.

Education: Bachelor's or (preferably) Master's degree in Software Engineering, Electrical Engineering, or equivalent.

Conditions:
Full-time employment in accordance with Austrian regulations. A gross annual salary of EUR 39,550 can be expected. Overpayment is possible depending on qualifications and work experience. The workplace has very good public transport connections.

How to Apply
Send your application by e-mail to contact@proactivaudio.pro with a single attachment (CV and proof documents) in PDF format.

About
proactivaudio is a young Austrian startup with an innovative patented technology for acoustic echo reduction (AER) and noise reduction. Our AER solution remains fully operative under any hostile acoustic scenario, such as permanent double talk, background noise and changes in the acoustic room, all at the same time.
Top

6-33(2017-07-28) R&D Engineer/Scientist at VOCAPIA Research, Orsay, France

Vocapia Research is hiring an R&D Engineer/Scientist, to build and improve multilingual language processing systems relying on machine learning techniques. We are seeking candidates with substantial programming experience who are able to carry out careful experimental work, paying attention to details. We are looking for highly-motivated, goal-driven individuals who also desire to be team players.

Main qualifications

  • Master's, PhD or Engineering degree in Computer Science or Electrical Engineering
  • Experience in language processing, speech recognition or machine learning
  • Strong programming skills using C/C++ and scripting languages in a UNIX/Linux environment
  • Good knowledge of written and spoken English. Fluency in one or more other languages is a strong plus (in particular Arabic, Dutch, German, Italian, Japanese, Mandarin, Polish, Portuguese, Romanian, Russian, Spanish and Turkish)

Preference will be given to candidates who have experience in speech and language processing.

Location: Orsay, France (about 25 km outside Paris)

To apply please submit your CV by email (including the job reference and your contact information) to recruit(at)vocapia.com or directly fill in the online application form (http://www.vocapia.com/applyforjob.html?VR1706IRD)

PDF or PS file, please.

 
Top

6-34(2017-07-31) Postdoctoral researcher in Cryptography, Intelligent Voice Ltd, London City, UK

Intelligent Voice Ltd, London City, UK
Postdoctoral researcher in Cryptography

In the framework of a H2020 program of the European Commission, Intelligent Voice Ltd is receiving financial support to hire a postdoctoral researcher for one year on the following topic :

The cloud offers an ideal opportunity for storing large volumes of data. However, the storage of sensitive data such as speech in plain-text format on the cloud is not permitted in many industry sectors such as finance, healthcare, etc. Hence speech data should be encrypted before storage on the cloud, and because it contains biometric identifiers it must remain encrypted. The challenge then is to search over large amounts of encrypted speech and return encrypted search results that can be decrypted by the user only. Intelligent Voice are providers of the world's fastest speech-to-text engine, and we are looking for a talented researcher in semantic security and searchable encryption to join our research team. This post builds on existing research within Intelligent Voice on searchable and homomorphic cryptographic protocols for speech processing.
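As a heavily simplified, purely illustrative sketch of the searchable-encryption idea (deterministic keyword tags derived with a keyed PRF, here HMAC-SHA256, indexed on the server), the snippet below is not Intelligent Voice's protocol and omits the payload encryption and leakage protections a real scheme requires.

```python
import hmac, hashlib, os

def keyword_tag(key, word):
    """Deterministic search tag for a keyword (toy PRF via HMAC-SHA256)."""
    return hmac.new(key, word.lower().encode(), hashlib.sha256).hexdigest()

def build_index(key, doc_id, words):
    """Client-side index construction: tag -> set of opaque document ids."""
    index = {}
    for w in set(words):
        index.setdefault(keyword_tag(key, w), set()).add(doc_id)
    return index

def search(index, key, query_word):
    # In a real deployment the client computes the tag and sends only the tag
    # to the server; the server never sees the key or the plaintext keyword.
    return index.get(keyword_tag(key, query_word), set())

key = os.urandom(32)                       # secret key held by the client only
idx = build_index(key, "call_0001", "please reset my card pin".split())
print(search(idx, key, "pin"))             # {'call_0001'}
print(search(idx, key, "mortgage"))        # set()
```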

Applicants should have already completed, or be close to completing, a PhD in computer science, mathematics, or a related discipline. Applicants should have an excellent research track record demonstrated by publications at major cryptography/security venues, and should have significant experience in the design and deployment of cryptographic protocols.

To apply please send your CV (with publication list), a 1-page cover letter, and the names of at least two people who can provide reference letters (e-mail).

Contact: Gérard Chollet, Head of Research, Intelligent Voice Ltd

St Clare House, 30-33 Minories, London EC3N 1BP

gerard.chollet(at)intelligentvoice.com

Phone: +44 20 3627 2670

More Information: http://www.intelligentvoice.com/ and https://www.slideshare.net/cholletge/ppsp-icassp17v10-72961572



Closing Date for Applications: 2017-08-31

 

 

Top

6-35(2017-08-03) AI-NLP Scientist at Sparted, France


 
 
AI – NLP Researcher
 
 
 
COMPANY: SPARTED is an innovative and disruptive startup that is changing the way people learn. We offer companies and organizations a unique and scalable game platform for micro learning on mobile devices. www.sparted.com
 
MISSION: In the context of an ambitious and strategic project aiming to automatically generate questions from descriptive texts in a variety of semantic contexts, the mission consists of:
• Establishing a state of the art of NLP capabilities relevant to the project
• Designing and identifying the main milestones of a long-term research plan, working collaboratively with a research lab
• Contributing to the development of proofs of concept with our team
• Spreading Machine Learning knowledge inside the team and training our engineers in Natural Language Processing specificities
• Designing and administrating flexible, scalable datasets
• Contributing to the company vision
 
REQUIRED SKILLS:
• Depth and breadth of knowledge in Machine Learning and NLP
• Experience solving analytical problems using quantitative approaches
• Ability to manipulate and analyze data from varying sources
• Knowledge of at least one programming language
PROFILE: You have, or are about to defend, a PhD in Machine Learning, AI, Applied Mathematics, Data Science, Statistics, Computer Science or a related technical field, with a strong focus on Natural Language Processing and related technologies such as pattern recognition, sequence-to-sequence models, word2vec, etc. You like taking up challenges, teamwork and building awesome products.
 
BONUS SKILLS THAT WOULD BE GREAT:
• French language
• Experience in Text Mining
• Beer pong master
• Demonstrated experience with Natural Language Technologies and engines
• Familiarity with Semantic Technologies
• White Russian expertise
• Extreme skiing or wingsuit practice
 
REMUNERATION: Depending on your status, talent and experience (€30K - xxK). Internship, PhD project (CIFRE) or other.
 
LOCATION: The position is based in Paris, avenue Kléber, next to the Place de l’Etoile, in our so cool top floor premises with an exceptional view on Paris and the Eiffel tower.
 
JOIN THE FUN DISRUPTION: SPARTED is at war. The fun rebellion is running against the empire of boredom. SPARTED is strengthening, growing and completing its forces by recruiting superheroes, in addition to an exceptional 20-person team of world-class developers, project managers, designers and other performers.
 
Fun disruptors are men and women, enthusiastic personalities involved in the future, smart cookies and bold people, wanting to continuously evolve, improve themselves and change the world. If you’re made of the right stuff, this position is the opportunity to join a daring project in a successful start-up with a strong identity and mindset, with great career prospects.
 
APPLICATION: Say hello to us at start@sparted.com +33 6 52 14 86 9

Top


