ISCApad #236
Saturday, February 10, 2018 by Chris Wellekens
6-1 | (2017-09-01) TheVoice project - PhD Offers at IRCAM, Paris, France
TheVoice project (2017-2021), funded by the French National Research Agency (ANR), offers two PhD theses. TheVoice aims to create voices for audiovisual production in the creative, cultural, and entertainment industries. The scientific objective of the project is to study the naturally expressive voices of professional actors in order to create innovative voice design solutions. The consortium, composed of recognized laboratories and industrial partners (Ircam, LIA, Dubbing Brothers), aims to consolidate a position of excellence for 'Made in France' research and digital technologies, and to promote French culture all over the world.

Deep learning for voice recommendation
The objective of this thesis is to create a voice recommendation system based on deep neural networks, exploiting the entire 'vocal palette' of professional actors and integrating information related to acoustics, perception (what the listener perceives, without context), and reception (what the spectator perceives 'in situation' in a movie, depending on their social and cultural expectations).
Contacts: jean-francois.bonastre@univ-avignon.fr, Nicolas.Obin@ircam.fr

Expressive voice identity conversion
The objective of this thesis is to create a voice identity conversion system able to reproduce the voices of professional actors from naturally expressive, real acted conditions, exploiting the audio tracks of movies, series, etc. The thesis will build on Ircam's long-term experience in voice analysis and transformation and on the existing voice conversion system developed at Ircam, currently used in professional productions. Contact: Nicolas.Obin@ircam.fr

Candidates must have a master's degree in computer science (or equivalent) with skills in audio signal processing, machine learning, and programming (Python, C++). Prior experience in speech processing would be greatly appreciated. Applications (CV + motivation letter) must be sent before 15/09/2017.
6-2 | (2017-09-09) Positions at Reykjavik University's School of Science and Engineering, Iceland
Applications are invited for a research position in text-to-speech systems at the Language and Voice Laboratory (lvl.ru.is) at Reykjavik University's School of Science and Engineering. The position is sponsored by the Icelandic Language Technology fund grant 'Environment for building text-to-speech synthesis for Icelandic.' The main aim of the work will be to set up and advance research on back-end architectures for parametric speech synthesis. The successful candidate will work closely with the other members of the lab, who focus on language-specific problems such as text normalization, phonemic analysis and phrasing. Even though the main focus of the work will be on Icelandic, working with other languages will be welcome. The successful candidate will contribute to the academic output of the lab as well as to the publication of an open TTS environment for Icelandic.
6-3 | (2017-09-09) PhD position in Computational Linguistics for Ambient Intelligence, University Grenoble Alpes, France Keywords: Natural language understanding, decision support system, smart
6-4 | (2017-09-09) Two new PostDoc positions on the international project Dig that Lick, IRCAM / L2S, France
IRCAM ( www.ircam.fr ) and L2S at University Paris Saclay ( www.l2s.centralesupelec.fr ) are jointly offering two new PostDoc positions for our current international project Dig that Lick: Analyzing large-scale data for melodic patterns in jazz performances ( dig-that-lick.eecs.qmul.ac.uk ). The project gathers six universities across four countries (USA, UK, Germany, France).
6-6 | (2017-09-08) 2 Post-Doc/Research positions in Audio Music Content Analysis at IRCAM, Paris, France
6-7 | (2017-09-11) Research Scientist - Speech at Mitsubishi Electric Research Laboratories (MERL), Cambridge, MA, USA
Responsibilities
Qualifications
Apply online at: https://merl.workable.com/jobs/516915/candidates/new
6-8 | (2017-09-14) Post-doctoral position in NLP at Loria, Nancy, France
Profile:
- A very solid background in statistical machine learning
- Strong publications
- Solid programming skills to conduct experiments
- An excellent level of English

Applicants should send to smaili@loria.fr:
- A CV
- A cover letter outlining their motivation
- Their three most representative papers
6-9 | (2017-09-28) Senior research engineer at Spitch AG, Zurich, Switzerland The company: Spitch AG (www.spitch.ch) is an international company focused on speech technologies based in Zurich, with offices in London, Moscow, Madrid, and Milan. The current position is for the Zurich headquarters.
We are looking for a Senior Research Engineer to improve our acoustic modeling and voice biometrics technologies. You will be an active member of the core R&D team and will help to develop training recipes, optimize the related existing processes and tools, and bring new technologies to the company. Your responsibility within the team will increase gradually: you will begin with understanding the basic technology of the company and using it to generate acoustic and speaker verification models for internal projects, and, after some adaptation time, you will work on our core technology to keep it at the state-of-the-art level. In this role, you should be able to work independently and also to discuss problems and solutions with the rest of the team. You should have excellent organization and problem-solving skills.
This is a permanent position, with a desired start date in November.
If you are interested in this offer and believe you are a good match, please send your updated CV to hr@spitch.ch.
6-10 | (2017-10-03) Assistant professor in phonetics/laboratory phonology, University of Delaware, USA
JOB AD FALL 2017
The University of Delaware seeks an assistant professor in phonetics/laboratory phonology.
6-11 | (2017-10-05) Language Resources Project Manager - Junior (m/f), ELDA, Paris, France
The European Language resources Distribution Agency (ELDA), a company specialized in Human Language Technologies within an international context, is currently seeking to fill an immediate vacancy for a Language Resources Project Manager - Junior position. This is an excellent opportunity for young, creative, and motivated candidates wishing to participate actively in the Language Engineering field.

Language Resources Project Manager - Junior (m/f)
Under the supervision of the Language Resources Manager, the Language Resources Project Manager - Junior will be in charge of identifying Language Resources (LRs), negotiating the rights related to their distribution, and preparing, documenting and curating data. The position includes, but is not limited to, responsibility for the following tasks:
Profile:
All positions are based in Paris. Applications will be considered until the position is filled. Salary is commensurate with qualifications and experience.

ELDA
ELDA acts as the distribution agency of the European Language Resources Association (ELRA). ELRA was established in February 1995, with the support of the European Commission, to promote the development and exploitation of Language Resources (LRs). Language Resources include all data necessary for language engineering, such as monolingual and multilingual lexica, text corpora, speech databases and terminology. The role of this non-profit membership association is to promote the production of LRs, to collect and validate them and, foremost, to make them available to users. The association also gathers information on market needs and trends. For further information about ELDA and ELRA, visit:
6-12 | (2017-10-14) Postdoc: Improving speech tools for pronunciation assessment in language learning, Loria, Nancy, France
Location: LORIA (Nancy, France)
Team: MULTISPEECH
Duration: 16 months
Start: Winter 2018
Contacts:
Slim Ouni slim.ouni@loria.fr
Denis Jouvet denis.jouvet@loria.fr
Context
MULTISPEECH studies various aspects of speech modeling, both for speech recognition and for speech synthesis. The approaches developed combine signal processing and statistical models. The most recent models rely on neural networks and deep learning, which have brought substantial performance gains in many domains.
Speech technologies can also be used for language learning. The objective is then to detect learners' pronunciation errors (pronunciation of sounds and intonation), to make diagnoses, and to help learners improve their pronunciation by providing them with multimodal feedback (textual, audio, and visual). Several recent collaborative projects have addressed this topic; they have made it possible to build corpora of learner speech, to analyze learners' non-native speech, and to study the reliability of automatic feedback to the learner.
Within the collaborative e-FRAN METAL project, which focuses on the use of digital technology in education, these techniques will be adapted, enriched, and deployed to support foreign-language learning at school. Experiments are planned in middle-school and high-school classes.
Tasks
The work will focus on improving and developing speech tools for pronunciation assessment, both at the level of sounds and at the level of intonation. An important point to study in depth is the reliability of the processing and of the measurements made (e.g., sound durations obtained from phonetic segmentation, and fundamental-frequency values), and how to take this measurement-reliability information into account when diagnosing pronunciation errors and providing feedback to learners.
After adapting the tools and models to the non-native context of foreign-language learners, most of the project will be devoted to more innovative aspects: the study of deep-learning-based approaches for detecting pronunciation errors, and the estimation of the uncertainties on the measurements made (sound durations and fundamental-frequency values) in order to guarantee the reliability of the resulting diagnoses.
Desired profile and skills
- Knowledge of speech processing, speech recognition, or speech synthesis
- Familiarity with, and ideally mastery of, a speech recognition toolkit
- Experience with neural networks and, if possible, mastery of a neural-network toolkit
- Good computer-science and programming skills
The full announcement is available here:
French:
English
6-13 | (2018-10-14) Engineer: Consolidating and adapting speech tools to support language learning, Loria, Nancy, France
Location: LORIA (Nancy, France)
Team: MULTISPEECH
Duration: 12 months (extension possible)
Start: Autumn 2017
Contacts:
Slim Ouni slim.ouni@loria.fr
Denis Jouvet denis.jouvet@loria.fr
Context
MULTISPEECH studies various aspects of speech modeling, both for speech recognition and for speech synthesis. The approaches developed combine signal processing and statistical models. The most recent models rely on neural networks and deep learning, which have brought substantial performance gains in many domains.
Speech technologies can also be used for language learning. The objective is then to detect learners' pronunciation errors (pronunciation of sounds and intonation), to make diagnoses, and to help learners improve their pronunciation by providing them with multimodal feedback (textual, audio, and visual). Several recent collaborative projects have addressed this topic; they have made it possible to build corpora of learner speech, to analyze learners' non-native speech, and to study the reliability of automatic feedback to the learner.
Within the collaborative e-FRAN METAL project, which focuses on the use of digital technology in education, these techniques will be adapted, enriched, and deployed to support foreign-language learning at school. Experiments are planned in middle-school and high-school classes.
Tasks
In this context, the first task will be to consolidate the speech tools that support pronunciation assessment and to adapt them to the use planned in the project. This will require collecting adolescent voices (corresponding to the levels targeted for the middle-school and high-school experiments) and adapting the acoustic models to adolescent voices. Given the computing equipment available in the classrooms, a client-server mode of operation will be favored.
The work will then focus on developing the complete version of the pronunciation-learning system and on experimenting with it in middle-school and high-school classes. The system will have to integrate the presentation of examples, the assessment of pronunciations, and feedback to the learner on pronunciation quality.
Desired profile and skills
- Knowledge of speech processing, speech recognition, or speech synthesis
- Familiarity with, and ideally mastery of, a speech recognition toolkit
- Good computer-science and programming skills
The full announcement is available here:
French:
English
6-14 | (2017-10-17) Call for Multiple PhD positions in Human-Machine Interaction funded by the ANIMATAS Innovative Training Network
ANIMATAS (MSCA-ITN-2017, project 765955) is an H2020 Marie Sklodowska-Curie European Training Network funded by Horizon 2020 (the European Union's Framework Programme for Research and Innovation), coordinated by Université Pierre et Marie Curie (Paris, France). The ANIMATAS partners are: UPMC (coord.), Uppsala Univ., Institut Mines Telecom, KTH, EPFL, INESC-ID, Jacobs Univ. and SoftBank Robotics Europe.
Scientific and technical objectives
ANIMATAS focuses on the following objectives:
1) Exploration of fundamental questions relating to the interconnections between robots' and virtual characters' appearance and behaviours and their perception by people
2) Development of new social learning mechanisms that can deal with different types of human intervention and allow robots and virtual characters to learn in an unconstrained manner
3) Development of new approaches for robots' and virtual characters' personalised adaptation to human users in unstructured and dynamically evolving social interactions

Multiple Positions in Human-Machine Interaction:
15 early-stage researcher (ESR) positions of 36 months are available within ANIMATAS. The successful candidates will participate in the network's training activities offered by the European academic and industrial participating teams. They will have the opportunity to work with Interactive Robotics, Furhat Robotics, Mobsya, University of Wisconsin-Madison, University of Southern California, Immersion SAS, IDMind, and Trinity College Dublin.
Details and specific deadlines are available at http://animatas.isir.upmc.fr/
ESR 1 - Social context effects on expressive behaviour of embodied systems
Contact: Arvid Kappas (Jacobs Uni)
ESR 2 - Modeling communicative behaviours for different roles of pedagogical agents
Contact: Catherine Pelachaud (UPMC)
ESR 3 - Modeling trust in human-robot educational interactions
Contact: Ginevra Castellano (UU)
ESR 4 - Synthesis of Multi-Modal Socially Intelligent Human-Robot Interaction
Contact: Amit Pandey (SBR)
ESR 5 - Socially compliant behaviour modelling for artificial systems and small groups of teachers and learners
Contact: Christopher Peters (KTH)
ESR 6 - Teacher orchestration of child-robot interaction
Contact: Pierre Dillenbourg (EPFL)
ESR 7 - Which mutual-modelling mechanisms to optimize child-robot interaction
Contact: Pierre Dillenbourg (EPFL)
ESR 9 - Learning from and about humans for robot task learning
Contact: Mohamed Chetouani (UPMC) mohamed.chetouani@upmc.fr
ESR 10 - Let's Learn by Collaborating with Robots
Contact: Francisco Melo (INESC-ID)
ESR 11 - Disfluencies and teaching strategies in social interactions between a pedagogical agent and a student
Contact: Chloé Clavel (IMT)
ESR 12 - Automatic assessment of engagement during multi-party interactions
Contact: Pierre Dillenbourg (EPFL)
ESR 13 - Automatic synthesis and instantiation of proactive behaviour by robot during human robot interaction: going beyond just being active or reactive
Contact: Amit Pandey (SBR)
ESR 14 - Socio-affective effects on the degree of human-robot cooperation
Contact: Ana Paiva (INESC-ID)
ESR 15 - Adaptive self-other similarity in facial appearance and behaviour for facilitating cooperation between humans and artificial systems
Contact: Christopher Peters (KTH).
chpeters@kth.se
Requirements: To apply, candidates must submit their CV, a letter of application, two letters of reference and academic credentials to the recruitment committee (Mohamed Chetouani, network coordinator; Ana Paiva; and Arvid Kappas) at contact-animatas@listes.upmc.fr and to the main supervisor of the research project of interest. Please note that application deadlines differ per position and are detailed here: http://animatas.isir.upmc.fr/
Reviewing and selection of applications will start in October 2017. The positions will remain open until filled.
The application procedure will be carried out in compliance with the Code of Conduct for Recruitment of the European Charter and Code for Researchers.
Contacts:
Mohamed CHETOUANI (ANIMATAS Coord.)
Emily MOTU (Project Manager)
6-15 | (2018-10-25) PhD vacancy at Aalborg University, Denmark
PhD Stipend in Low-resource Keyword Spotting for Hearing Assistive Devices at Aalborg University, Denmark
6-16 | (2017-10-27) Postdoc in ASR and Language Modeling, Aalto University, Finland Postdoctoral researcher in Speech Recognition and Language Modeling
The speech recognition group (led by Prof. Mikko Kurimo) at Aalto University, Finland, focuses on machine learning in automatic speech recognition (ASR) and language modeling. The group developed a state-of-the-art fixed-vocabulary ASR system already in the 1970s and an unlimited-vocabulary neural phonetic typewriter in the 1980s, led by Academician Teuvo Kohonen. Since 2000, led by Prof. Mikko Kurimo, the group has developed state-of-the-art unlimited-vocabulary ASR systems using sub-word language models for several languages. The most recent achievement is winning the 3rd Multi-Genre Broadcast ASR challenge in 2017, where the top research groups in the field were challenged to build a recognizer for an under-resourced language using machine learning methods.

The speech recognition group consists of a couple of senior researchers, post-docs and six PhD students who bring together expertise from many relevant areas such as acoustic modeling, lexical modeling, language modeling, decoding, machine learning, machine translation, user interfaces, and toolkits such as Kaldi, AaltoASR, TheanoLM, VariKN, and Morfessor. We operate in a well-connected academic environment with excellent GPU and CPU computing facilities, and have a functional office space at the Aalto University Otaniemi campus that is only 10 minutes away from downtown Helsinki via the new subway line.

We are now looking for a 1-3 year postdoc to start as soon as possible on any of our research themes:
The position requires a relevant doctoral degree in CS or EE, skills for doing excellent research in an (English-speaking) group, and outstanding research experience in at least one of the research themes mentioned above. More specifically, programming skills and a good command of Kaldi and DNNs (either the Theano or TensorFlow toolkit) will be useful. The candidate is expected to perform high-quality research and participate in the supervision of PhD students. The application, CV, list of publications, references and requests for further information should be sent by email to Prof. Mikko Kurimo (mikko.kurimo at aalto.fi).

Aalto University is a new university created in 2010 from the merger of the Helsinki University of Technology, the Helsinki School of Economics and the University of Art and Design Helsinki. The University's cornerstones are its strengths in education and research, with 20,000 basic degree and graduate students. In addition to a decent salary, the contract includes occupational health benefits, and Finland has a comprehensive social security system. The Helsinki metropolitan area forms a world-class information technology hub, attracting leading scientists and researchers in various fields of ICT and related disciplines. Moreover, as the birthplace of Linux, and the home base of Nokia/Alcatel-Lucent/Bell Labs, F-Secure, Rovio, Supercell, Slush (the biggest annual startup event in Europe) and numerous other technologies and innovations, Helsinki is fast becoming one of the leading technology startup hubs in Europe. See more e.g. at http://www.investinfinland.fi/. As a living and working environment, Finland consistently ranks high in quality of life, and Helsinki, the capital of Finland, is regularly ranked as one of the most livable cities in the world.
See more at https://finland.fi and http://www.helsinkitimes.fi/finland/finland-news/domestic/14966-helsinki-ranked-again-as-world-s-9th-most-liveable-city.html
Home page of the group: http://spa.aalto.fi/en/research/research_groups/speech_recognition/
6-17 | (2017-10-25) Speech Scientist at ELSA, Lisbon, Portugal
Location: Lisbon, Portugal
Contact: people@elsaspeak.com

Job Description
We are looking for a splendid Speech Scientist to join our team at ELSA and help us in our mission to help every language learner speak a foreign language fluently and confidently. As a Speech Scientist at ELSA you will be playing with state-of-the-art machine learning methods and applying them to tons of audio and text data, towards building new technology and improving existing algorithms.

At ELSA you will not do your job alone. Solving a problem includes working with a team (backend, speech scientists, product managers, designers) to design solutions that would impact hundreds of thousands of users. We are an agile team. We are passionate engineers. We are highly collaborative. We value results. We go above and beyond to help our users and to deliver superb products, but we also value work-life balance: we are motorcycle riders, mountain climbers, food enthusiasts, espresso lovers, yoga students, proud parents. With more than 1.5M app downloads, your research will directly influence, and be influenced by, the many thousands of users who use our app daily.

Your role
Requirements
What we offer
Application
Send your LinkedIn profile or your CV to people@elsaspeak.com and we will get in touch.

About us
ELSA (English Language Speech Assistant) Corp. is a San Francisco-based startup with engineering offices in Lisbon. Our vision is to enable everyone to speak foreign languages with full confidence, reaching better life and career opportunities. Our flagship product, ELSA Speak, is a personal mobile coach that improves our users' English pronunciation and intonation using phoneme-level and suprasegmental analysis of the user's speech signal. Our backend servers implement state-of-the-art speech recognition technology to pinpoint errors and give accurate and consistent feedback to our users on how to improve.
6-18 | (2017-10-25) Senior Speech Scientist at ELSA
Location: flexible, where would you like to live?
Contact: people@elsaspeak.com

Job description
We are looking for a splendid Senior Speech Scientist to join our team at ELSA and help us in our mission to help every language learner speak a foreign language fluently and confidently. As a Senior Scientist at ELSA you will be responsible for the evolution of the speech technology powering our language assessment services. Your efforts will translate directly into new voice-enabled features and better assessment quality for the hundreds of thousands of language learners who use the ELSA app. In addition, you will work directly with the CTO to define and drive the scientific roadmap of the company.

At ELSA you will not do your job alone. Solving a problem includes working with a team (backend, speech scientists, product managers, designers) to design solutions that would impact hundreds of thousands of users. We are an agile team. We are passionate engineers. We are highly collaborative. We value results. We go above and beyond to help our users and to deliver superb products, but we also value work-life balance: we are motorcycle riders, mountain climbers, food enthusiasts, espresso lovers, yoga students, proud parents. Join our world-class team and be one of the personalities that make the ELSA culture awesome.

Requirements:
Bonus skills
What we offer
Application
Send an email with your LinkedIn profile or your CV to people@elsaspeak.com and we will get back to you.

About us
ELSA Corp. is a US (San Francisco) startup with engineering offices in Lisbon. Our vision is to enable everyone to speak foreign languages with full confidence, reaching better life and career opportunities, powered by our proprietary speech recognition technology using deep learning. Our flagship product, ELSA Speak, is a personal mobile coach that improves our users' English pronunciation and intonation using phonetic and suprasegmental analysis of the user's speech. Our backend servers implement state-of-the-art speech assessment technology to pinpoint the user's most salient errors and give accurate and consistent feedback on how to fix them.
6-19 | (2017-10-26) Two post-doctoral researchers at Idiap Research Institute, Martigny, Switzerland Openings for two post-doctoral researchers at Idiap Research Institute.
6-20 | (2017-11-02) Postdoctoral research position: Acoustic cough detection and processing for healthcare, University of Stellenbosch, South Africa. November 2017
6-21 | (2017-11-02) Postdoctoral research position: Extremely-low-resource radio browsing for humanitarian monitoring in rural Africa, University of Stellenbosch, South Africa
6-22 | (2017-11-05) 15 PhD positions from mid-2018
The training network on automatic processing of pathological speech (TAPAS) is an H2020 MSCA-ITN-ETN project that will provide 15 PhD students with broad and intensive training in pathological speech processing. The TAPAS consortium includes clinical practitioners, academic researchers and industrial partners, with expertise covering speech engineering, linguistics and clinical science. The TAPAS work programme is organised around three major themes:
6-23 | (2017-11-08) Visiting Assistant Professor in Computational Linguistics and Language Science, Rochester, NY, USA
http://apptrkr.com/1116774
Detailed Job Description:
· Deep learning for natural language understanding
· Speech and speech technology
· Multimodal and linguistic sensors
· Human-computer interaction
· Linguistic narrative analytics
· Advanced graduate coursework in computational linguistics, including natural language and/or spoken language processing or technical methods in linguistics
· Publication record and coherent plan for research and grant-seeking activities
· Evidence of outstanding teaching
· Ability to contribute in meaningful ways to the college's continuing commitment to cultural diversity, pluralism, and individual differences
· A research statement
· A teaching statement
· Copy of transcripts of graduate coursework
· A sample publication
· The names, addresses, and phone numbers of three references
Questions regarding this position can be directed to the search committee chair-Dr. Cecilia Ovesdotter Alm at coagla@rit.edu.
6-24 | (2017-11-10) Principal Speech Recognition Engineer, Speechmatics, Cambridge, UK
Principal Speech Recognition Engineer
Location: Cambridge, UK
Contact: careers@speechmatics.com

Background
Speechmatics’ versatile automatic speech recognition technology, based on decades of research and experience in neural networks, is enabling world-leading companies to power a speech-enabled future. Having already transcribed millions of hours of audio and helped customers across a diverse range of use cases and applications, the team’s mission is to build the best speech technology for any application, anywhere, in any language, and put speech back at the heart of communication.

In the office, we pride ourselves on a relaxed but productive environment enabling both commercial success and personal development: we often host lunch-and-learn sessions and attend regular academic and commercial conferences. When we're not working hard, we regularly host company outings and events where your plus-one is welcome to enjoy great food, great drinks, and great company! We also reward ourselves occasionally with massages, and even get our bikes fixed onsite! We think it’s important to give a little back too, so everyone is eligible for some time off for charity work, plus we’ll match your contribution via the Give As You Earn scheme. See more about our great perks below!

We are expanding rapidly and are seeking talented people to join us as we continue to push the boundaries of speech recognition. This is an opportunity to join a high-growth team and form a major part of its future direction.

The Opportunity
We are looking for a talented and experienced speech recognition engineer to help us build the best speech technology for anybody, anywhere, in any language. You will be part of a team that is building language packs and developing our core ASR capabilities, including improving our speed, accuracy and support for all languages.
Your work will feed into the ‘Automatic Linguist’, our ground-breaking framework to support the building of ASR models and hence the delivery of every language pack published by the company. Alongside the wider team you will be responsible for keeping our system the most accurate and useful commercial speech recognition available. Because you will be joining a rapidly expanding team, you will need to be a team player, who thrives in a fast paced environment, with a focus on rapidly moving research developments into products. We strongly encourage versatility and knowledge transfer within the team, so we can share efficiently what needs to be done to meet our commitments to the rest of the company. Key Responsibilities
Experience
Desirable
Salary
We offer a competitive salary and bonus scheme, pension contribution matching and a generous EMI share option scheme. We also have several additional benefits including private medical insurance, holiday purchase, life assurance, childcare vouchers, cycle scheme, massages, bike doctor, fully stocked drinks fridge, and fresh fruit available daily, to name just a few!
6-25 | (2017-11-10) Principal Language Modelling Engineer, Speechmatics, Cambridge, UK

Principal Language Modelling Engineer
Location: Cambridge, UK
Contact: careers@speechmatics.com

Background
Speechmatics' versatile automatic speech recognition technology, based on decades of research and experience in neural networks, is enabling world-leading companies to power a speech-enabled future. Having already transcribed millions of hours of audio and helped customers across a diverse range of use cases and applications, the team's mission is to build the best speech technology for any application, anywhere, in any language, and to put speech back at the heart of communication. In the office, we pride ourselves on a relaxed but productive environment enabling both commercial success and personal development - we often host lunch-and-learn sessions and attend regular academic and commercial conferences. When we're not working hard, we regularly host company outings and events where your plus-one is welcome to enjoy great food, great drinks, and great company! We also reward ourselves occasionally with massages, and even get our bikes fixed onsite! We think it's important to give a little back too, so everyone is eligible for some time off for charity work, plus we'll match your contribution via the Give As You Earn scheme. See more about our great perks below! We are expanding rapidly and are seeking talented people to join us as we continue to push the boundaries of speech recognition. This is an opportunity to join a high-growth team and form a major part of its future direction.

The Opportunity
We are looking for a talented and experienced Language Modelling expert to help us build the best speech technology for anybody, anywhere, in any language. You will be part of a team that is working on our core ASR capabilities to improve our speed, accuracy, and support for all languages. Your role will include making sure our Language Modelling capability remains at the head of the field. Your work will feed into the ‘Automatic Linguist’, our ground-breaking framework to support the building of ASR models, and hence the delivery of every language pack published by the company. Alongside the wider team you will be responsible for keeping our system the most accurate and useful commercial speech recognition available. Because you will be joining a rapidly expanding team, you will need to be a team player who thrives in a fast-paced environment, with a focus on rapidly moving research developments into products. We strongly encourage versatility and knowledge transfer within the team, so we can share efficiently what needs to be done to meet our commitments to the rest of the company.

Key Responsibilities
Experience Essential
Desirable
Salary
We offer a competitive salary and bonus scheme, pension contribution matching and a generous EMI share option scheme. We also have several additional benefits including private medical insurance, holiday purchase, life assurance, childcare vouchers, cycle scheme, massages, bike doctor, fully stocked drinks fridge, and fresh fruit available daily, to name just a few!
6-26 | (2017-11-10) Principal Acoustic Modelling Engineer, Speechmatics, Cambridge, UK

Principal Acoustic Modelling Engineer
Location: Cambridge, UK
Contact: careers@speechmatics.com

Background
Speechmatics' versatile automatic speech recognition technology, based on decades of research and experience in neural networks, is enabling world-leading companies to power a speech-enabled future. Having already transcribed millions of hours of audio and helped customers across a diverse range of use cases and applications, the team's mission is to build the best speech technology for any application, anywhere, in any language, and to put speech back at the heart of communication. In the office, we pride ourselves on a relaxed but productive environment enabling both commercial success and personal development - we often host lunch-and-learn sessions and attend regular academic and commercial conferences. When we're not working hard, we regularly host company outings and events where your plus-one is welcome to enjoy great food, great drinks, and great company! We also reward ourselves occasionally with massages, and even get our bikes fixed onsite! We think it's important to give a little back too, so everyone is eligible for some time off for charity work, plus we'll match your contribution via the Give As You Earn scheme. See more about our great perks below! We are expanding rapidly and are seeking talented people to join us as we continue to push the boundaries of speech recognition. This is an opportunity to join a high-growth team and form a major part of its future direction.

The Opportunity
We are looking for a talented and experienced Acoustic Modelling expert to help us build the best speech technology for anybody, anywhere, in any language. You will be part of a team that is working on our core ASR capabilities to improve our speed, accuracy, and support for all languages. Your role will include making sure our Acoustic Modelling capability remains at the head of the field. Your work will feed into the ‘Automatic Linguist’, our ground-breaking framework to support the building of ASR models, and hence the delivery of every language pack published by the company. Alongside the wider team you will be responsible for keeping our system the most accurate and useful commercial speech recognition available. Because you will be joining a rapidly expanding team, you will need to be a team player who thrives in a fast-paced environment, with a focus on rapidly moving research developments into products. We strongly encourage versatility and knowledge transfer within the team, so we can share efficiently what needs to be done to meet our commitments to the rest of the company.

Key Responsibilities
Experience Essential
Desirable
We offer a competitive salary and bonus scheme, pension contribution matching and a generous EMI share option scheme. We also have several additional benefits including private medical insurance, holiday purchase, life assurance, childcare vouchers, cycle scheme, massages, bike doctor, fully stocked drinks fridge, and fresh fruit available daily to name just a few!
6-27 | (2017-11-10) Speech Recognition Intern, Speechmatics, Cambridge, UK

Speech Recognition Intern
Location: Cambridge, UK
Contact: careers@speechmatics.com

Background
Speechmatics' versatile automatic speech recognition technology, based on decades of research and experience in neural networks, is enabling world-leading companies to power a speech-enabled future. Having already transcribed millions of hours of audio and helped customers across a diverse range of use cases and applications, the team's mission is to build the best speech technology for any application, anywhere, in any language, and to put speech back at the heart of communication. In the office, we pride ourselves on a relaxed but productive environment enabling both commercial success and personal development - we often host lunch-and-learn sessions and attend regular academic and commercial conferences. When we're not working hard, we regularly host company outings and events where your plus-one is welcome to enjoy great food, great drinks and great company! We also reward ourselves occasionally with massages, and even get our bikes fixed onsite! We are expanding rapidly and are seeking talented people to join us as we continue to push the boundaries of speech recognition. This is an opportunity to join a high-growth team and form a major part of its future direction.

The Opportunity
We are looking for a bright, enthusiastic and talented speech recognition intern to help us build the best speech technology for anybody, anywhere, in any language. You will be part of a team that is building language packs and developing our core ASR capabilities, including improving our speed, accuracy, and support for all languages. Your work will feed into the ‘Automatic Linguist’, our ground-breaking framework to support the building of ASR models and hence the delivery of every language pack published by the company. Alongside the wider team you will be responsible for keeping our system the most accurate and useful commercial speech recognition available. Because you will be joining a rapidly expanding team, you will need to be a team player who thrives in a fast-paced environment, happy to pick up whatever needs to be done, with a focus on rapidly moving research developments into products. We strongly encourage versatility and knowledge transfer within the team, so we can share efficiently what needs to be done to meet our commitments to the rest of the company.

Key Responsibilities
Experience Essential
Desirable
Salary
This will be a paid internship. We also have several additional benefits including massages, bike doctor, fully stocked drinks fridge, and fresh fruit available daily, to name just a few!
6-28 | (2017-11-10) Speech Recognition Engineer, Speechmatics, Cambridge, UK

Speech Recognition Engineer
Location: Cambridge, UK
Contact: careers@speechmatics.com

Background
Speechmatics' versatile automatic speech recognition technology, based on decades of research and experience in neural networks, is enabling world-leading companies to power a speech-enabled future. Having already transcribed millions of hours of audio and helped customers across a diverse range of use cases and applications, the team's mission is to build the best speech technology for any application, anywhere, in any language, and to put speech back at the heart of communication. In the office, we pride ourselves on a relaxed but productive environment enabling both commercial success and personal development - we often host lunch-and-learn sessions and attend regular academic and commercial conferences. When we're not working hard, we regularly host company outings and events where your plus-one is welcome to enjoy great food, great drinks, and great company! We also reward ourselves occasionally with massages, and even get our bikes fixed onsite! We think it's important to give a little back too, so everyone is eligible for some time off for charity work, plus we'll match your contribution via the Give As You Earn scheme. See more about our great perks below! We are expanding rapidly and are seeking talented people to join us as we continue to push the boundaries of speech recognition. This is an opportunity to join a high-growth team and form a major part of its future direction.

The Opportunity
We are looking for a talented speech recognition engineer to help us build the best speech technology for anybody, anywhere, in any language. You will be part of a team that is building language packs and developing our core ASR capabilities, including improving our speed, accuracy, and support for all languages. Your work will feed into the ‘Automatic Linguist’, our ground-breaking framework to support the building of ASR models and hence the delivery of every language pack published by the company. Alongside the wider team you will be responsible for keeping our system the most accurate and useful commercial speech recognition available. Because you will be joining a rapidly expanding team, you will need to be a team player who thrives in a fast-paced environment, happy to pick up whatever needs to be done, with a focus on rapidly moving research developments into products. We strongly encourage versatility and knowledge transfer within the team, so we can share efficiently what needs to be done to meet our commitments to the rest of the company.

Key Responsibilities
Experience Essential
Desirable
Salary
We offer a competitive salary and bonus scheme, pension contribution matching and a generous EMI share option scheme. We also have several additional benefits including private medical insurance, holiday purchase, life assurance, childcare vouchers, cycle scheme, massages, bike doctor, fully stocked drinks fridge, and fresh fruit available daily, to name just a few!
6-29 | (2017-11-10) Senior Speech Recognition Engineer, Speechmatics, Cambridge, UK

Senior Speech Recognition Engineer
Location: Cambridge, UK
Contact: careers@speechmatics.com

Background
Speechmatics' versatile automatic speech recognition technology, based on decades of research and experience in neural networks, is enabling world-leading companies to power a speech-enabled future. Having already transcribed millions of hours of audio and helped customers across a diverse range of use cases and applications, the team's mission is to build the best speech technology for any application, anywhere, in any language, and to put speech back at the heart of communication. In the office, we pride ourselves on a relaxed but productive environment enabling both commercial success and personal development - we often host lunch-and-learn sessions and attend regular academic and commercial conferences. When we're not working hard, we regularly host company outings and events where your plus-one is welcome to enjoy great food, great drinks, and great company! We also reward ourselves occasionally with massages, and even get our bikes fixed onsite! We think it's important to give a little back too, so everyone is eligible for some time off for charity work, plus we'll match your contribution via the Give As You Earn scheme. See more about our great perks below! We are expanding rapidly and are seeking talented people to join us as we continue to push the boundaries of speech recognition. This is an opportunity to join a high-growth team and form a major part of its future direction.

The Opportunity
We are looking for a talented and experienced speech recognition engineer to help us build the best speech technology for anybody, anywhere, in any language. You will be part of a team that is building language packs and developing our core ASR capabilities, including improving our speed, accuracy, and support for all languages. Your work will feed into the ‘Automatic Linguist’, our ground-breaking framework to support the building of ASR models and hence the delivery of every language pack published by the company. Alongside the wider team you will be responsible for keeping our system the most accurate and useful commercial speech recognition available. Because you will be joining a rapidly expanding team, you will need to be a team player who thrives in a fast-paced environment, happy to pick up whatever needs to be done, with a focus on rapidly moving research developments into products. We strongly encourage versatility and knowledge transfer within the team, so we can share efficiently what needs to be done to meet our commitments to the rest of the company.

Key Responsibilities
Experience Essential
Desirable
Salary
We offer a competitive salary and bonus scheme, pension contribution matching and a generous EMI share option scheme. We also have several additional benefits including private medical insurance, holiday purchase, life assurance, childcare vouchers, cycle scheme, massages, bike doctor, fully stocked drinks fridge, and fresh fruit available daily, to name just a few!
6-30 | (2017-11-12) Language Resources Project Manager - Junior (m/f), ELDA, Paris, France

The European Language Resources Distribution Agency (ELDA), a company specialized in Human Language Technologies within an international context, is currently seeking to fill an immediate vacancy for a Language Resources Project Manager - Junior position. This offers excellent opportunities for young, creative, and motivated candidates wishing to participate actively in the Language Engineering field.

Language Resources Project Manager - Junior (m/f)
Under the supervision of the Language Resources Manager, the Language Resources Project Manager - Junior will be in charge of the identification of Language Resources (LRs), the negotiation of rights in relation to their distribution, as well as data preparation, documentation and curation. The position includes, but is not limited to, responsibility for the following tasks:
Profile:
All positions are based in Paris. Applications will be considered until the position is filled. Salary is commensurate with qualifications and experience. ELDA ELDA is acting as the distribution agency of the European Language Resources Association (ELRA). ELRA was established in February 1995, with the support of the European Commission, to promote the development and exploitation of Language Resources (LRs). Language Resources include all data necessary for language engineering, such as monolingual and multilingual lexica, text corpora, speech databases and terminology. The role of this non-profit membership Association is to promote the production of LRs, to collect and to validate them and, foremost, make them available to users. The association also gathers information on market needs and trends. For further information about ELDA and ELRA, visit:
6-31 | (2017-11-15) PHD RESEARCH FELLOWSHIPS ( ML/Dialogue/Language/Speech), University of Trento, Italy Title: 2018 PHD RESEARCH FELLOWSHIPS ( ML/Dialogue/Language/Speech)
6-32 | (2017-11-20) Audio Signal Processing Maverick (Applied Research), AVA, Paris, France Audio Signal Processing Maverick (Applied Research)
ASR research is, sadly, largely done in big companies. Sure, talent is around, but it's really only in a less crowded space that you can really shine and see the impact you can make. A.K.A. an early-stage startup, when you're still a group of friends with a crazy ambition to change the world. Here's what drives us nuts: products where ASR is really the key component are 99% of the time made for:
* answering quick/superficial requests from lazy users (Siri, Google Now, Cortana, Echo)
* dealing with angry customers on the phone (all the IVRs)
* dictating emails for busy people (Nuance)
What if you could truly change 400M lives instead? Turn a lifetime of frustration into a deep connection? Ava aims at captioning the world to make it fully accessible, 24/7, to deaf and hard-of-hearing people. Mobile-first, the app is the fastest and most advanced captioning system in the world, beating what tech giants have done, by cleverly using speech and speaker identification technologies to make conversations between deaf and hard-of-hearing people and hearing people possible. At Ava, the CEO is the only hearing person in a family of deaf people, and the CTO is deaf and non-speaking - both were Forbes 30 Under 30 2017. We use our ASR-based product every day to communicate. Our motivations are aligned with the change we want to make in the world. We care about the millions of people out there who struggle every day just to have a social and professional life, and whom YOUR tech will help. If it weren't for Ava, the next best solution would take 10X the time (it's not a solution) or 100X the cost (it's not accessible to all). We're working with companies such as GE, Nike, Salesforce, but also universities, stores, and even churches to fulfill our mission to make the world truly accessible. What do we need to get to the next level? You - someone with prior research experience in audio signal processing.
The core mission will be to enhance a speech recognition system used in real-world cocktail-party situations. The signal is acquired via an array of ad-hoc microphones and is processed to optimize its quality for transcription, using a set of techniques: source localization, time difference of arrival, noise cancelling, source separation… all in real time. Interested in learning more about it? Let's chat. Especially if:
● You just finished a PhD in audio signal processing.
● You're ready to be a pioneer in the field, and to do what is necessary to make things work in real-world situations.
● You're of the persistent, yet open-minded and collaborative type: you reason by independent thinking first, but you know that together, we're stronger.
What you get
● Early-stage -> massive equity opportunity.
● An opportunity to apply cutting-edge technologies to solve real-world problems, right now.
● Competitive salary.
The job will be based in our Paris office.
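For illustration only (this is not Ava's codebase): the time-difference-of-arrival step mentioned above is commonly estimated with the GCC-PHAT cross-correlation, which keeps only the phase of the cross-spectrum so the peak stays sharp in reverberant rooms. A minimal numpy sketch:

```python
import numpy as np

def gcc_phat(sig, ref, fs=16000):
    """Estimate the delay (seconds) of sig relative to ref using GCC-PHAT."""
    n = sig.shape[0] + ref.shape[0]
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-15          # PHAT weighting: discard magnitude, keep phase
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    # reorder so negative lags precede positive lags
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs
```

A real system would refine this with sub-sample interpolation and track the delay over frames; this sketch only recovers an integer-sample delay between two channels.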
Interested? Let us know at alex@ava.me
6-33 | (2017-11-20) Speaker Identification Maverick (Applied Research), Ava, Paris, France

Speaker Identification Maverick (Applied Research)
● You just finished a PhD in Machine Learning.
● Experience in speaker identification, ASR, NLP, acoustic modeling, language models, or source separation is a plus.
● You aspire to be a pioneer in the field, and to do what is necessary to make things work in real-world situations.
● You're of the persistent, yet open-minded and collaborative type: you reason by independent thinking first, but you know that together, we're stronger.
What we offer
● Early-stage -> massive equity opportunity.
● An opportunity to apply cutting-edge technologies to solve real-world problems, right now.
● Competitive salary.
The job will be based in our Paris office.
Interested? Let us know at alex@ava.me.
6-34 | (2017-11-21) PhD position in Opinion Analysis in human-agent interactions, Telecom ParisTech, Paris France PhD position in Opinion Analysis in human-agent interactions
Telecom ParisTech [1], 46 rue Barrault, 75013 Paris - France
Possibility to start with an internship during first semester 2018.
Duration of the PhD funding: 36 months
*Position description*
The PhD student will take part in the ANR JCJC project MAOI (Multimodal Analysis of Opinions in Interactions) at Telecom-ParisTech. He/she will tackle the following challenging issue: the integration of opinion mining methods in human-agent interactions (i.e. companion robots or virtual vocal assistants such as Siri, Google Now, Cortana, etc.). The PhD student's role will consist of developing machine learning methods for the multimodal (i.e. speech and text) analysis of the user's opinion during his/her interaction with an agent. The main challenge will be to integrate the interaction context in machine-learning opinion detection methods. The work will include:
- the development of machine learning/deep learning approaches (Conditional Random Fields, Long Short-Term Memory networks)
- the integration of complex and interactional linguistic features in machine-learning models for the detection of opinions in interactions
- the integration of acoustic features in multimodal models
- the evaluation of the system in interaction context.
The PhD student will join the Social Computing topic [2] in the S2a group [3] at Telecom-ParisTech. Selected references for this position, from [4]:
Barriere, V., Clavel, C., and Essid, S. (2017). Opinion dynamics modeling for movie review transcripts classification with hidden conditional random fields. Interspeech.
Clavel, C. and Callejas, Z. (2016). Sentiment analysis: from opinion mining to human-agent interaction. IEEE Transactions on Affective Computing, 7(1), 74-93.
Langlet, C. and Clavel, C. (2015). Improving social relationships in face-to-face human-agent interactions: when the agent wants to know user's likes and dislikes. In ACL, Beijing, China.
Langlet, C. and Clavel, C. (2016). Grounding the detection of the user's likes and dislikes on the topic structure of human-agent interactions. Knowledge-Based Systems.
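As an illustration only (not the project's code): the sequence models named in the work plan, such as linear-chain Conditional Random Fields, decode the best label sequence (e.g. per-utterance opinion labels) with the Viterbi algorithm over per-step emission scores and label-to-label transition scores. A minimal numpy sketch:

```python
import numpy as np

def viterbi(emissions, transitions):
    """Most likely label sequence under a linear-chain model.
    emissions: (T, K) per-step label scores (log-domain)
    transitions: (K, K) score of moving from label i to label j."""
    T, K = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        # candidate score of ending step t in label j, having come from label i
        cand = score[:, None] + transitions + emissions[t][None, :]
        back[t] = np.argmax(cand, axis=0)
        score = np.max(cand, axis=0)
    path = [int(np.argmax(score))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

In a trained CRF the emission and transition scores would come from learned feature weights (or, in a neural variant, from an LSTM); here they are hypothetical inputs.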
As a minimum requirement, the successful candidate will have:
• A master's degree or equivalent in one or more of the following areas: machine learning, natural language processing, affective computing
• Excellent programming skills (preferably in Python)
• Good command of English
The ideal candidate will also (optionally) have:
• Knowledge in natural language processing
• Knowledge in probabilistic graphical models and deep learning
-- More about the position
• Place of work: Paris, France
• For more information about Telecom ParisTech, see [1]
-- How to apply Applications are to be sent to Chloé Clavel [4]
The application should be formatted as a single pdf file and should include:
• A complete and detailed curriculum vitae
• A letter of motivation
• The transcript of grades
• The names and addresses of two referees
[1] https://www.telecom-paristech.fr/eng/ [3] http://www.tsi.telecom-paristech.fr/ssa/# [4] https://clavel.wp.imt.fr/publications/
6-35 | (2017-11-20) ASSISTANT PROFESSOR IN HUMAN-CENTERED COMPUTING, Virginia Tech, USA ASSISTANT PROFESSOR IN HUMAN-CENTERED COMPUTING
6-36 | (2017-11-20) Three Postdoctoral Researchers/Project Researchers (Speech processing and deep learning), University of East Finland, Finland Three Postdoctoral Researchers/Project Researchers (Speech processing and deep learning)
The University of Eastern Finland, UEF, is one of the largest multidisciplinary universities in Finland. We offer education in nearly one hundred major subjects, and are home to approximately 15,000 students and 2,500 members of staff. From 1 August 2018 onwards, we'll be operating on two campuses, in Joensuu and Kuopio. In international rankings, we are ranked among the leading universities in the world.
The Faculty of Science and Forestry operates on the Kuopio and Joensuu campuses of the University of Eastern Finland. The mission of the faculty is to carry out internationally recognised scientific research and to offer research-education in the fields of natural sciences and forest sciences. The faculty invests in all of the strategic research areas of the university. The faculty's environments for research and learning are international, modern and multidisciplinary. The faculty has approximately 3,800 Bachelor's and Master's degree students and some 490 postgraduate students. The number of staff amounts to 560. http://www.uef.fi/en/lumet/etusivu
We are now inviting applications for three Postdoctoral Researcher/Project Researcher positions in speech processing and deep learning, funded by the Academy of Finland, at the School of Computing, Joensuu Campus.
o Two positions in automatic speaker recognition, voice conversion, and anti-spoofing (NOTCH project)
o One position in deep reinforcement learning for physical agents (DEEPEN project)
The two projects share similarities in terms of machine learning methods being used and developed further, but are otherwise differently focused.
The NOTCH research project (NOn-cooperaTive speaker CHaracterization), led by Associate Professor Tomi Kinnunen, aims at advancing the state of the art in automatic speaker verification (defense) and voice conversion (attack) under a generic umbrella of non-cooperative speech, whether induced by spoofing attacks, disguise, or other intentional voice modifications. A successful applicant needs to have a background in speaker verification, anti-spoofing, voice conversion, machine learning, or closely related topics.
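By way of illustration only (not the NOTCH codebase): the scoring back-end of many modern speaker verification systems reduces to comparing fixed-length speaker embeddings (e.g. i-vectors or x-vectors, assumed here to come from some upstream extractor) by cosine similarity against a decision threshold. A minimal sketch, with an arbitrary threshold:

```python
import numpy as np

def cosine_score(emb_enroll, emb_test):
    """Cosine similarity between an enrolment embedding and a test embedding."""
    a = emb_enroll / np.linalg.norm(emb_enroll)
    b = emb_test / np.linalg.norm(emb_test)
    return float(np.dot(a, b))

def verify(emb_enroll, emb_test, threshold=0.5):
    """Accept the claimed identity if the similarity exceeds the threshold."""
    return cosine_score(emb_enroll, emb_test) >= threshold
```

In practice the threshold is tuned on development data to trade off false accepts against false rejects, and anti-spoofing countermeasures of the kind NOTCH studies run as a separate decision alongside this one.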
The DEEPEN research project (Deep Reinforcement Learning for Physical Agents) is run in cooperation between UEF and the robotics group at Aalto University. UEF's part, led by Senior Researcher Ville Hautamäki, aims at designing new statistical models for simulated robot control and at taking steps towards solving the so-called 'reality gap' problem. The post-doc may also contribute to speech and deep learning topics. A successful applicant needs to have a background in deep learning, reinforcement learning, speech technology, or machine vision. Practical experience with DRL research environments (e.g. VizDoom or MuJoCo) will be counted as a plus.
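As an illustration of the family of methods the project builds on (not DEEPEN's code): tabular Q-learning is the simplest instance of reinforcement learning, updating a state-action value table from sampled transitions. The `step` callback below is a hypothetical environment interface, not any particular toolkit's API:

```python
import numpy as np

def q_learning(n_states, n_actions, step, episodes=500,
               alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning. step(s, a) -> (next_state, reward, done)."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy exploration
            if rng.random() < epsilon:
                a = int(rng.integers(n_actions))
            else:
                a = int(np.argmax(Q[s]))
            s2, r, done = step(s, a)
            # temporal-difference update; no bootstrap past a terminal state
            Q[s, a] += alpha * (r + gamma * np.max(Q[s2]) * (not done) - Q[s, a])
            s = s2
    return Q
```

Deep RL replaces the table with a neural network and the toy environment with a physics simulator; the update rule is the conceptual core that carries over.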
The Machine Learning group of the School of Computing, at the facilities of Joensuu Science Park, provides access to modern research infrastructure and is a strongly international working environment. We hosted the Odyssey 2014 conference, were a partner in the H2020-funded OCTAVE project, and are a co-founder of the Automatic Speaker Verification and Countermeasures (ASVspoof) challenge series (http://www.asvspoof.org/).
A person to be appointed as a postdoctoral researcher shall hold a suitable doctoral degree that has been awarded less than five years ago. If the doctoral degree was awarded more than five years ago, the post will be that of a project researcher. The doctoral degree should be in spoken language technology, electrical engineering, computer science, machine learning, or a closely related field. Researchers finishing their PhD in the near future are also encouraged to apply for the positions; however, they are expected to hold a PhD degree by the starting date of the position. We expect strong hands-on experience and a creative, out-of-the-box problem-solving attitude. A successful applicant needs to have an internationally proven track record in topics relevant to the project he or she applies to.
English may be used as the language of instruction and supervision in these positions.
The positions will be filled from January 1, 2018 at the earliest, for a period of 12 months. The continuation of the position will be agreed separately. The position will be filled for a fixed term because it pertains to a specific project (postdoctoral researcher positions shall always be filled for a fixed term, UEF University Regulations 31 §).
The salary of the position is determined in accordance with the salary system of Finnish universities and is based on level 5 of the job requirement level chart for teaching and research staff (€2,865.30/month). In addition to the job requirement component, the salary includes a personal performance component, which may be a maximum of 46.3% of the job requirement component.
For further information on the position, please contact (NOTCH): Associate Professor Tomi Kinnunen, email: tkinnu@cs.uef.fi, tel. +358 50 442 2647 and (DEEPEN): Senior Researcher Ville Hautamäki, email: villeh@cs.uef.fi, tel. +358 50 511 8271. For further information on the application procedure, please contact: Executive Head of Administration Arja Hirvonen, tel. +358 44 716 3422, email: arja.hirvonen@uef.fi.
A probationary period is applied to all new members of the staff.
You can use the same electronic form to apply for both research projects. The electronic application should contain the following appendices:
- a résumé or CV
- a list of publications
- copies of the applicant's academic degree certificates/diplomas, and copies of certificates/diplomas relating to the applicant's language proficiency, if not indicated in the academic degree certificates/diplomas
- motivation letter
- a cover letter indicating the position to be applied for
- The names and contact information of at least two referees are requested in the application form.
The application needs to be submitted no later than December 22, 2017 (by 24:00 EET) using the electronic application form. Navigate to http://www.uef.fi/en/uef/en-open-positions and search for 'Three Postdoctoral Researchers/Project Researchers (Speech processing and deep learning)' to find the link to the electronic application form.
6-37 | (2017-12-03) Machine Learning Engineer, Speech Recognition, Aja-la Studios, Richmond, UK
Machine Learning Engineer, Speech Recognition: developing acoustic and language models, and related algorithms, for our suite of proprietary speech recognition products covering a broad library of under-resourced languages. This role provides a unique opportunity to pursue research and commercialization of speech recognition for under-resourced languages. Candidates should have an interest in and/or demonstrated experience working with under-resourced languages, and an interest in working on the entire R&D/product-development cycle.
• Masters or PhD in an analytical discipline through which you have acquired a strong knowledge of topics including:
o Theory and practice of speech recognition and/or speech processing (LVCSR)
o Signal processing/pattern recognition
o Probability theory
o Bayesian inference
o Machine learning and related topics
• Strong software development skills
o Required: C/C++, Python, CUDA/Nsight IDE, shell scripting, Perl, GitHub/SVN
o Optional/additional: Java/Android/Gradle/Android Studio, Objective C/Xcode/Cocos2dx
• Speech processing, neural network, and natural language platforms and libraries
o Kaldi, KenLM, OpenFST, and HTS
o Theano, PDNN, PyTorch, TensorFlow
• Operating systems: Unix/Linux/Mac OS
1 The Green, Richmond, TW9 1PL, UK (www.ajalastudios.com). The role offers competitive compensation, amongst other benefits.
6-38 | (2017-12-04) Machine Learning Engineer, Speech Synthesis, Aja-la Studios, Richmond, UK
Machine Learning Engineer, Speech Synthesis:
• Masters or PhD in an analytical discipline through which you have acquired a strong knowledge of topics including:
o Theory and practice of speech synthesis and/or speech processing, e.g. vocoding
o Signal processing/pattern recognition
o Probability theory
o Bayesian inference
o Machine learning and related topics
• Strong software development skills
o Required: C/C++, Python, CUDA/Nsight IDE, shell scripting, Perl, GitHub/SVN
o Optional/additional: Java/Android/Gradle/Android Studio, Objective C/Xcode/Cocos2dx
• Speech processing, neural network, and natural language platforms and libraries
o Festival, HTK, and HTS
o Theano, PDNN, PyTorch, TensorFlow
• Operating systems: Unix/Linux/Mac OS
1 The Green, Richmond, TW9 1PL, UK (www.ajalastudios.com). The role offers competitive compensation, amongst other benefits.
6-39 | (2017-12-03) Research Assistant/Associate in Speech Processing, at Cambridge University Engineering Department, Cambridge, UK.
6-40 | (2017-12-05) One-year post-doctoral position in speech production, GIPSA, Grenoble, France
One-year post-doctoral position in speech production, in the framework of the StopNCo ANR project (http://www.agence-nationale-recherche.fr/Project-ANR-14-CE30-0017), starting from March 2018 (at the latest in October 2018).
6-41 | (2017-12-06) PhD Position in Social Signal Processing for Multi-Sensor Conversation Quality Modeling, Delft University, The Netherlands
Job link: https://tinyurl.com/MINGLEPhD
Location: Delft University of Technology, The Netherlands
Deadline: January 12, 2018 (see below for application procedure)
Project description: An important but under-explored problem in computer science is the automated analysis of conversational dynamics in large unstructured social gatherings such as networking or mingling events. Research has shown that attending such events contributes greatly to career and personal success. While much progress has been made in the analysis of small pre-arranged conversations, scaling up robustly presents a number of fundamentally different challenges. Unlike in small pre-arranged conversations, during mingling the sensor data is seriously contaminated. Moreover, determining who is talking with whom is difficult because groups can split and merge at will. A fundamentally different approach is needed to handle both the complexity of the social situation and the uncertainty of the sensor data when analysing such scenes.
The successful applicants will develop automated techniques to analyse multi-sensor data (video, acceleration, audio, etc.) of human social behaviour. They will work as part of a team on the NWO-funded Vidi project MINGLE (Modelling Group Dynamics in Complex Conversational Scenes from Non-Verbal Behaviour). They will have the opportunity to interact with researchers from both computer science and social science, both locally and internationally. The main aim of the project is to address the following question: how can multi-sensor processing and machine learning methods be developed to model the dynamics of conversational interaction in large social gatherings using only non-verbal behaviour?
The two projects advertised focus on developing novel computational methods to measure conversation quality (e.g. involvement, rapport) from multi-sensor streams in crowded environments.
Job requirements: We are looking for students who have recently completed, or expect to complete very soon, an MSc or equivalent degree in computer science, electrical/electronic engineering, applied mathematics, applied physics, or a related discipline. Experience in the following or related fields is preferred: signal/audio/speech processing, computer vision, machine learning, and pattern recognition. Some experience with embedded systems is a bonus, though not necessary.
The successful applicant will have:
- good programming skills;
- curiosity and analytical skills;
- the ability to work in a multi-disciplinary team;
- motivation to meet deadlines;
- an affinity with the relevant social science research;
- good oral and written communication skills;
- proficiency in English;
- an interest in communicating their research results to a wider audience.
Institution:
The department Intelligent Systems is part of the Faculty of Electrical Engineering, Mathematics and Computer Science (EEMCS) at Delft University of Technology. The faculty offers an internationally competitive interdisciplinary setting for its 500 employees, 350 PhD students and 1700 undergraduates. Together they work on a broad range of technical innovations in the fields of sustainable energy, quantum engineering, microelectronics, intelligent systems, software technology, and applied mathematics. The Pattern Recognition and BioInformatics Group is one of five groups in the department, consisting of 7 faculty and over 20 postdocs and PhD students. Within this group, research is carried out in three core subjects: pattern recognition, computer vision, and bioinformatics. One of the main focuses of the group is on developing tools and theories, and gaining knowledge and understanding, applicable to a broad range of general problems typically involving sensory data, e.g. time signals, images, video streams, or other physical measurement data. For information about the TU Delft Graduate School, please visit www.phd.tudelft.nl.
Application procedure: Interested applicants should send an up-to-date curriculum vitae, degree transcripts, a letter of application, and the names and contact information (telephone number and email address) of two references to Hr-eemcs@tudelft.nl with the subject heading '[MINGLE PhD]'.
6-42 | (2017-12-08) PhD grant at IRISA, Rennes, France
The Expression team at IRISA is recruiting a PhD student. Details of the offer: https://www-expression.irisa.fr/files/2017/12/these_TREMoLo_2017.pdf
Applications should include:
- a detailed CV*
- a motivation letter*
- academic transcripts (with ranking, if possible)*
- contacts for recommendation*
- research internship report(s).
Contact: gwenole.lecorve@irisa.fr (Gwénolé Lecorvé).
6-43 | (2017-12-15) Internship 1 at LIA, Avignon, France
Adaptation of deep neural networks for systems transcribing the speech uttered in an audio or video recording. The most robust automatic speech recognition (ASR) systems often rely on a multi-pass architecture (Gauvain and Lee 1994) (Gales 1998), where each pass produces a transcription of the audio signal intended to be of better quality than the previous one. In some cases, the outputs of the previous pass are used to adapt the models of the current pass. The idea behind this adaptation is to obtain models specialized to the recording, and thus to be more robust to the 'variabilities' of audio recordings (different acoustic conditions, unknown speakers, spontaneous speech, environmental noise, etc.). More specifically, the internship will explore unsupervised adaptation of deep neural networks. One of the main challenges is to use neural networks as language models and to adapt them to a first transcription obtained from decoding. Required skills: programming (C/C++ and/or Python). Notions of natural language processing, speech processing or machine learning would be a plus.
References: Gales (1998), Computer Speech and Language (CSL). Gauvain, Jean-Luc, and Chin-Hui Lee, 'Maximum a posteriori estimation for multivariate Gaussian mixture observations of Markov chains', IEEE Transactions on Speech and Audio Processing (TASP), 1994.
6-44 | (2017-12-15) Internship 2 at LIA, Avignon, France
Automatic video summarization by contextualizing video from text. Video summaries present content as concisely as possible. In this internship we are interested in extractive video summarization methods based on text analysis [Li11, Trione14, Favre15], which use an intermediate textual representation: the audio content of the video (and sometimes the embedded on-screen text) is extracted, transcribed and then summarized. This text summary is then used to assemble a video summary. The general objective of the internship is to explore methods for contextualizing videos or images from the text transcription. This contextualization should help compose the final video summary. Required skills: programming (C/C++ and/or Python). Notions of natural language processing or machine learning would be a plus.
References: [Li11] Proceedings of the 19th ACM International Conference on Multimedia (pp. 1573-1576). ACM. [Trione14] Trione, J. (2014). Extraction methods for automatic summarization of spoken conversations from call centers. In Proceedings of TALN 2014 (Volume 4: RECITAL-Student Research Workshop) (Vol. 4, pp. 104-111). [Favre15] Favre, B., Stepanov, E. A., Trione, J., Béchet, F., & Riccardi, G. (2015). Call Centre Conversation Summarization: A Pilot Task at Multiling 2015. In SIGDIAL Conference (pp. 232-236).
6-45 | (2017-12-13) Internship and PhD position at Telecom-ParisTech and LTCI lab, Paris, France
Internship and PhD position in machine learning for multimodal engagement analysis in human-robot interactions (HRI)
Telecom ParisTech [1], LTCI lab [2]
Salary: according to background and experience
*Position description*
The internship/PhD project will take part in a collaboration between Softbank Robotics and Télécom ParisTech on the topic of engagement analysis in interactions of humans with Softbank's robots. The role of the intern/PhD student will consist in developing robust machine learning systems able to effectively take advantage of the multimodal signals acquired by the robot's sensors during its interaction with a human. The work will include: - the design of appropriate elicitation protocols and multimodal data acquisition procedures; - the development of multimodal feature learning and dynamic classification procedures capable of handling noisy observations with missing values, especially exploiting deep learning techniques; - the evaluation of the system in realistic scenarios involving end-users. The PhD project will be hosted at the Telecom ParisTech department of images, data and signals [3], jointly by the social computing [4] and the audio data analysis and signal processing [5] teams.
As a minimum requirement, the successful candidate will have:
• A Master's degree (possibly to be granted in 2018) in one of the following areas: computer science, artificial intelligence, machine learning, signal processing, affective computing, applied mathematics
• Excellent programming skills (preferably in Python)
• Good command of English
The ideal candidate will also (optionally) have:
• Knowledge of deep learning techniques
-- More about the position
• Place of work: Paris, France
• For more information about Télécom ParisTech, see [1]
-- How to apply Applications are to be sent to Chloé Clavel [6], Giovanna Varni [7] and Slim Essid [8] by email (using <firstname.lastname>@telecom-paristech.fr)
The application should be formatted as a single pdf file and should include:
• A complete and detailed curriculum vitae
• A letter of motivation
• Academic records of the last two years
• The names and addresses of two referees
[1] http://www.tsi.telecom-paristech.fr [2] https://www.ltci.telecom-paristech.fr/?lang=en [3] http://www.tsi.telecom-paristech.fr/en/ [5] http://www.tsi.telecom-paristech.fr/aao/en/ [6] https://clavel.wp.mines-telecom.fr/ [7] http://sites.google.com/site/gvarnisite/ [8] http://www.telecom-paristech.fr/~essid
6-46 | (2017-12-16) Position at INA, Bry-sur-Marne, France
The Institut national de l'audiovisuel (INA), a public audiovisual and digital company, collects, preserves and transmits the French audiovisual heritage. Through a usage-driven approach to innovation, INA valorizes its content and shares it with the widest possible audience: on ina.fr for the general public, on inamediapro.com for professionals, and at the InaTHÈQUE for researchers. The institute thus develops offers and services to reach its users and clients, in France and internationally.
Its Research and Innovation department supports a strong and ambitious culture of innovation. Our Ina-Signature technology (a 'fingerprint' technology), developed by INA's R&D, has established itself with renowned clients thanks to a strategy focused on performance and quality. Our offering continues to evolve with the spread of SaaS (software as a service) and the cloud.
In this position, reporting to the Head of the Research service, you will be responsible for the design, implementation, integration or adaptation of machine learning, data analysis and data fusion technologies within research projects experimenting with new ways of valorizing content.
In this role, you will be in charge of:
1 – Conducting scientific and technological research:
- defining the research and development directions related to this theme;
- designing, implementing, testing and evaluating innovative technological tools for existing or anticipated uses at the Institute;
- collaborating with all internal and external actors of the department;
- contributing to the research and development strategy of the service;
- supervising interns and, in time, PhD students;
- writing or contributing to scientific articles and presenting them at conferences;
- demonstrating research work at conferences, seminars or trade shows;
- contributing to documents related to the activity (activity reports and project deliverables in particular).
2 – Carrying out R&D in service of the Institute:
- proposing, preparing, coordinating and participating in internal research and development projects in liaison with the operational services;
- proposing, leading and participating in internal consultation and discussion initiatives and working groups.
3 – Building partnerships:
- proposing, preparing, coordinating and participating in collaborative national or international R&D projects with academic, institutional or industrial partners;
- proposing, coordinating and participating in scientific and technological cooperation bodies (COMUE, competitiveness clusters, research groups).
4 – Contributing to functional management:
- participating in the coordination of the service (coordination meetings);
- participating in the management of the service's computing and technical resources;
- participating in the life of the service (service meetings, activity follow-up, reports).
Profile: You hold a PhD in machine learning and/or data analysis, or have a professional background accepted as equivalent.
This should be complemented by: - mastery of and experience in the following area(s): machine learning (deep learning), data analysis and fusion, image and/or audio analysis, software development; - good practice of academic and/or industrial research; - experience with scientific publication; - good knowledge and practice of collaborative projects; - knowledge of the French audiovisual landscape; - knowledge of the academic world; - mastery of office tools; - interest in the audiovisual and media world; - interest in the humanities and social sciences and the digital humanities.
Analytical and synthesis skills, creativity and imagination, initiative, interpersonal skills and team spirit will be your best assets for success in this position.
Position details: - Contract: permanent (CDI) - Status: executive (cadre) - Starting date: as soon as possible - Salary: according to experience - Application deadline: January 31, 2018 - Contact: jcarrive@ina.fr - Location: Bry-sur-Marne (94)
6-47 | (2017-12-16) Post-doc position at Uniklinik RWTH Aachen (Germany) We are looking for a postdoctoral researcher at the Uniklinik RWTH Aachen (Germany).
6-48 | (2017-12-13) PhD position in Conversational systems and Social robotics, KTH, Stockholm, Sweden
KTH Royal Institute of Technology in Stockholm has grown to become one of Europe's leading technical and engineering universities, as well as a key centre of intellectual talent and innovation. We are Sweden's largest technical research and learning institution and home to students, researchers and faculty from around the world. We are looking for a doctoral student who will work on situated spoken interaction between humans and robots, under the supervision of Assoc. Prof. Gabriel Skantze, at the Department of Speech, Music and Hearing. A central research question will be how social robots should adapt their conversational behaviour to the users' level of attention, understanding and engagement. This means that the robot must be able to monitor gaze and feedback behaviour from the user, and then, for example, adjust the pace of information delivery in real time. The work will involve implementation of components for conversational systems, collecting data and doing experiments with users interacting with the system, and using this data to build models of the users' behaviours. Applicants should have a Master degree (or similar) in a subject relevant for the research, such as computer science, language technology, or cognitive science. Applicants are expected to have good skills in programming, and knowledge in either experimental methods and statistics, or machine learning. Applicants must be strongly motivated for doctoral studies, possess the ability to work independently and perform critical analysis, and possess good levels of cooperative and communicative abilities. Good command of English, in writing and speaking, is a prerequisite for presenting research results in international periodicals and at conferences.
We also expect applicants to have a deep interest in spoken language interaction between humans and between humans and machines. The position is mainly a research position for 4-5 years, with a small fraction of departmental duties (e.g. teaching). The starting date is open for discussion, though ideally we would like the successful candidate to start as soon as possible. For more information, see: https://www.kth.se/en/om/work-at-kth/lediga-jobb/what:job/jobID:178626/where:4/
6-49 | (2017-12-13) 2 funded PhD positions in interactive virtual characters and social robots at KTH, Stockholm, Sweden ** 2 funded PhD positions in interactive virtual characters and social robots at KTH, Sweden**
Embodied Social Agents Lab
KTH Royal Institute of Technology
Stockholm, Sweden
Deadline: 15th January 2018
ABOUT KTH
KTH Royal Institute of Technology in Stockholm has grown to become one of Europe's leading technical and engineering universities, as well as a key center of intellectual talent and innovation. We are Sweden's largest technical research and learning institution and home to students, researchers and faculty from around the world. Our research and education covers a wide area including natural sciences and all branches of engineering, as well as architecture, industrial management, urban planning, history and philosophy.
The Embodied Social Agents Lab (http://www.csc.kth.se/~chpeters/ESAL/) led by Dr. Christopher Peters aims to develop virtual characters and other systems capable of interacting socially with humans for real-world application to areas such as education. The lab is already involved in a number of local and international initiatives involving virtual characters, social robots and education. It is based out of the Visualization Studio (VIC) at KTH, a research, teaching and dissemination resource with some of the most advanced interactive visualization technologies in the world, supporting platforms for interacting with sophisticated virtual characters.
JOB DESCRIPTION
Two PhD positions are available in the area of interactive virtual characters and social robots for application to education. Research in this area brings together multidisciplinary expertise to address new challenges and opportunities in the area of virtual characters, based on real-time computer graphics and animation techniques, to investigate multimodal and natural interaction for both individuals and groups, multimodal generation of expressions, individualization of behaviour, and effects of embodiment (appearance, virtual versus physical objects). Applications include the design of interactive virtual and physical systems for educational purposes.
The topics to be pursued respectively in the PhDs are:
1. Compliant Small Group Behaviour (ref: ESR5)
Develop socially compliant behaviours allowing agents to join and leave free-standing formations based on their varying roles as teachers, teaching assistants and learners in pedagogical scenarios. Investigate the impact of variations in the artificial behaviour of agents on the efficacy of pedagogical approaches and potential for application to mobile robots through virtual replicas.
2. Impact of Appearance Customisation on Interaction (ref: ESR15)
Investigate technological approaches for customising the appearances and behaviours of avatars (user controlled virtual characters and robot replicas) in relation to their users and assess the impact on interactions during learning scenarios.
Both of the PhDs involve crossovers between virtual and augmented reality, virtual characters and mobile social robots and take place within the Horizon 2020 Marie Sklodowska Curie European Training Network ANIMATAS.
ANIMATAS will establish a leading European Training Network (ETN) devoted to the development of a new generation of creative and critical research leaders and innovators who have a skill-set tailored for the creation of social capabilities necessary for realising step changes in the development of intuitive human-machine interaction (HMI) in educational settings. 15 early-stage researcher (ESR) positions are available within ANIMATAS.
The successful candidates will participate in the network's training activities offered by the European academic and industrial participating teams. PhD students will have the opportunity to work with the partners of the ANIMATAS project, such as Uppsala University, Jacobs University Bremen, Institut Mines-Télécom, University of Wisconsin-Madison, Pierre et Marie Curie University and Softbank Robotics, with possible opportunities for secondments at these institutions according to the ESR.
QUALIFICATIONS
The candidates must have an MSc degree in computer science or related areas relevant to the PhD topics. Good programming skills are required. A background in computer graphics and animation techniques or similar areas is appreciated. The PhD positions are highly interdisciplinary and require an understanding and/or interest in psychology and social sciences. The applicant should have excellent communication skills and be motivated to work in an interdisciplinary environment involving multiple stakeholders across academia, industry and education. An excellent level of written and spoken English is essential.
Read more about eligibility requirements at this link: http://animatas.isir.upmc.fr
The positions are for four years.
HOW TO APPLY
To apply, candidates must submit their CV, a letter of application, two letters of reference and academic credentials to the ANIMATAS recruitment committee: Mohamed Chetouani (network coordinator), Ana Paiva and Arvid Kappas at contact-animatas@listes.upmc.fr, and to the main supervisor of the research project of interest (Christopher Peters, chpeters@kth.se). All applications should be made in English.
Please include the keyword 'ANIMATAS' somewhere in the subject line and specify which project you are applying for (ESR5 or ESR15).
The application deadline is 15th January 2018
Information about the positions can be provided by Dr. Christopher Peters, chpeters@kth.se
6-50 | (2018-01-09) Two postdoc positions at IDIAP, Martigny, Switzerland
6-51 | (2018-01-10) Postdocs at Monash University, Melbourne, Australia
The Faculty of Information Technology (https://www.monash.edu/it) at Monash University in Melbourne, Australia is establishing a new group in HCI and creative technologies. We invite accomplished and creative PhDs to apply for a 3-year postdoctoral fellowship in multimodal interfaces and behavior analytics. The selected candidate will join a rapidly expanding multidisciplinary group with expertise in areas such as mobile and multimodal-multisensor interfaces, agent-based conversational interfaces, brain-computer and adaptive interfaces, wearable and contextually-aware personalized interfaces, education and health interfaces, data analytics for predicting user cognition and health status, and other topics. This position involves research on predicting user cognition and health status, based on analysis of different modalities (e.g., speech, writing, images, sensors) during naturally occurring activities. These analyses involve exploring predictive patterns at the signal, activity pattern, lexical, and/or transactional levels. The ideal candidate would be an initiating researcher with a strong publication record who is interested in pioneering in emerging research areas. He/she would have an interest in developing new technologies to identify users' cognitive and health status, and in using this information to develop personalized and adaptive interfaces that promote learning, performance, and health.
Requirements:
• PhD in computer science, engineering, information sciences, cognitive or linguistic sciences, or related field
• Training in HCI, multimodal interfaces, data science and analytics, modeling human behavior & communication
• Experience collecting and analyzing speech, images, handwriting, and/or other sensor data
• Experience applying machine learning/deep learning, empirical/statistical, linguistic, or hybrid analysis methods
• Interest in human cognition and educational technologies, and/or health and mental health technologies
• Strong interpersonal, teamwork, communication and writing skills
• Ability to work with diverse partners: domain experts (teachers, clinicians), industry, undergraduate/graduate students
• Preference for candidates with 2-3 years of post-PhD research or work experience
HCI Group: The HCI group designs, builds, and evaluates state-of-the-art interface technologies. Our multidisciplinary interests span computer science and engineering, cognitive and learning sciences, communications, health, media design, and other topics. We are interested in applications such as health, education, communications, personal assistance, and digital arts. The HCI group has partnerships with CSIRO-Data61 and industry. The HCI area director is Dr. Sharon Oviatt, an ACM Fellow and international pioneer in human-centered, mobile, and multimodal interfaces (see https://www.monash.edu/it/our-research/graduate-research/scholarship-funded-phd-research-projects/projects/human-centred-mobile-and-multimodal-interfaces). Monash is Australia's largest university, and ranks in the top 60 universities worldwide, with Computer and Information Systems rated in the top 70 worldwide (QS World University Rankings 2018). In addition to growing rapidly in human-centered computing, software, and cyber-security, it includes data science and machine learning, artificial intelligence and robotics, computational biology, social computing, and basic computer science.
Experimental Labs & Design Spaces: The university has made recent strategic investments in facilities for prototyping innovative concepts, collecting and analyzing data, and displaying digital installations and interactive media, including sensiLab (supporting tangible, wearable, augmented and virtual reality, multimodal-multimedia, maker-space), the Immersive Visualization platform and Analytics lab, the Centre for Data Science, and the ARC Centre of Excellence on Integrative Brain Function (pioneering new multimodal imaging techniques for data exploration). The university currently is investing in HCI group facilities for prototyping and developing new mobile, multimodal and multisensor interfaces, analyzing human multimodal interaction (e.g., whole-body activity, speech), and predicting users' cognitive and health status.
Melbourne Area: Melbourne recently has been rated the #1 city worldwide for quality of life (see Economist & Guardian, http://www.economist.com/blogs/graphicdetail/2016/08/daily-chart-14 and https://www.theguardian.com/australia-news/2016/aug/18/melbourne-wins-worlds-most-liveable-city-award-sixth-year-in-a-row), with excellent education, healthcare, infrastructure, low crime, and exceptional cuisine, cultural activities, and creative design. The regional area is renowned for its dramatic coastline, extensive parks, exotic wildlife, and Yarra Valley wine region.
Position & Compensation: This position is full-time for 3 years, with a competitive salary (Academic Level B-6, $119,683 AUD) and benefits, including 17% superannuation retirement fund, health insurance options, relocation, and seed funds for equipment and travel. The start date is negotiable after April 1, 2018. For enquiries, contact Oviatt@incaadesigns.org.
To apply: Submit an online application at http://careers.pageuppeople.com/513/cw/en/job/571150/research-fellow-multimodal-interfaces-behaviour-analytics. Required application materials include: (1) a cover letter (indicating date of availability); (2) a current CV with publication list, research and teaching interests, and 3 references with email/phone contact; (3) graduate transcripts; and (4) three representative publications. Monash has a Women in IT Program, and participates in the Athena SWAN Charter to enhance gender equality. We welcome female, minority and international applicants.
6-52 | (2018-01-10) Postdocs at Monash University, Melbourne, Australia The Faculty of Information Technology (https://www.monash.edu/it) at Monash University in Melbourne Australia is establishing a new group in HCI and creative technologies, with openings for exceptionally accomplished, creative, and energetic faculty at all levels. Selected candidates will have the opportunity to join a rapidly growing group with expertise in state-of-the-art areas of human-centered computer interfaces. We are especially interested in adding faculty in these preferred areas: ( (1) Wearable, contextually-aware and personalized interfaces (2) (2) Mobile and multimodal-multisensor interfaces, including fusion-based ones (3) (3) Data analytics for predicting user emotion, cognition, and health status (4) (4) Agent-based conversational dialogue interfaces (5) (5) Brain-computer and adaptive interfaces
Outstanding applicants in other areas of human-centered interfaces, such as image recognition and data visualization, are also encouraged to apply. Depending on area, the candidate will be expected to have strong skills in methodology (empirical/statistical, machine learning, HCI design and analysis), signal processing, linguistic analysis and language processing, or system architecture and software development. HCI Group: The HCI group designs, builds, and evaluates state-of-the-art interface technologies. Our multidisciplinary interests span computer science and engineering, cognitive and learning sciences, communications, medicine and health, media design, and other topics. Our work is based on empirical science, statistics, deep learning and data analytics, and diverse HCI methods. We are interested in applications in many areas, such as health, education, communications, personal assistance, robotics, automotive, and digital arts. The HCI group has partnerships with CSIRO-Data61, and an expanding collection of industry partners. The HCI area director is Dr. Sharon Oviatt, an ACM Fellow and international pioneer in human-centered, mobile, and multimodal interfaces (see https://www.monash.edu/it/our-research/graduate-research/scholarship-funded-phd-research-projects/projects/human-centred-mobile-and-multimodal-interfaces). Monash is Australia's largest university and ranks in the top 60 universities worldwide, with Computer and Information Systems rated in the top 70 (QS World University rankings 2018). In addition to growing rapidly in human-centered computing, software, and cyber-security, it includes data science and machine learning, artificial intelligence and robotics, computational biology, social computing, creative technologies and digital humanities, and basic areas of computer science. 
Experimental Labs & Design Spaces: The university has made recent strategic investments in facilities for prototyping innovative concepts, collecting and analyzing data, and displaying digital installations and interactive media, including sensiLab (supporting tangible, wearable, augmented and virtual reality, multimodal-multimedia, maker-space), the Immersive Visualization platform and Analytics lab, the Centre for Data Science, and the ARC Centre of Excellence on Integrative Brain Function (pioneering new multimodal imaging techniques for data exploration). The university is currently investing in HCI group facilities for prototyping and developing new mobile, multimodal and multisensory interfaces, capturing and analyzing human multimodal interaction (e.g., whole-body activity, speech), and predicting users' cognitive and health status. Melbourne Area: Melbourne has recently been rated the #1 city worldwide for quality of life (see Economist & Guardian, http://www.economist.com/blogs/graphicdetail/2016/08/daily-chart-14 and https://www.theguardian.com/australia-news/2016/aug/18/melbourne-wins-worlds-most-liveable-city-award-sixth-year-in-a-row), with excellent education, healthcare, infrastructure, low crime, and exceptional cuisine, cultural activities, and creative design. The regional area is renowned for its dramatic coastline, extensive parks, exotic wildlife, and Yarra Valley wine region. Requirements: Interested applicants should have a PhD in computer science, information sciences, cognitive or linguistic sciences, or a related field, and in most cases several years of post-PhD research or work experience. All candidates must have a strong publication record in top conferences and journals, excellent teamwork and communication/writing skills, and teaching/mentoring experience. Evidence of grants and industry partnerships is preferred. Monash has a Women in IT Program, and participates in the Athena Swan Charter to enhance gender equality in STEM disciplines. 
We especially welcome talented female, minority, and international applicants. Position and Compensation: All positions are full-time for 12 months a year, with competitive salary and benefits (see: http://adm.monash.edu.au/enterprise-agreements/academic-professional-2014/s1-academic-salary-rates.html), including 17% superannuation retirement fund, health insurance options, relocation allowance, and a generous start-up package with reduced teaching. The academic year begins in late Feb. 2018, with semester 2 starting late July, but start date is negotiable. For North American applicants, note that a 'Lecturer (level B)' is an Assistant Professor, a 'Senior Lecturer (level C)' is an Associate Professor, and an 'Associate Professor (level D)' is a Professor. For enquiries, contact Oviatt@incaadesigns.org. To apply, submit an online application at http://careers.pageuppeople.com/513/cw/en/job/571151/academic-opportunities-in-human-computer-interaction-fit Required application materials include: (1) cover letter (indicating application area 1-5, planned research for the near future, and date of availability); (2) current CV with publication list, research and teaching interests, and 3-5 references with email/phone contact; (3) three representative publications. For more information on the Faculty of IT's main research areas and vigorous recruitment plans to add 50 new faculty, see https://www.monash.edu/it/about-us/recruiting-exceptional-academics This announcement represents one area of expansion.
| ||||||||||||
6-53 | (2018-01-10) PhD scholarships at Monash University, Melbourne, Australia The Faculty of Information Technology (https://www.monash.edu/it) at Monash University in Melbourne, Australia has two graduate scholarships for exceptionally talented and motivated students to pursue a PhD in HCI and creative technologies. The selected students will have the opportunity to join an expanding HCI group with expertise in state-of-the-art areas such as: human-centered interfaces, mobile and wearable interfaces, natural multimodal interfaces (speech, writing, images, touch), agent-based conversational interfaces, multimodal-multisensor interfaces, data analytics for predicting user cognition and health, and brain-computer and adaptive interfaces. Successful PhD student applicants would conduct research on topics related to those outlined above in the HCI group. Most applicants would have a background in Computer and Information Systems, HCI, Interaction or Media Design, or a multidisciplinary topic related to their PhD interests such as Psychology, Cognitive Science, Neuroscience, Medicine and Health Sciences, Linguistics, or Engineering. Dr. Sharon Oviatt, director of the HCI area, will provide supervision. She is an internationally known pioneer in human-centered, mobile, and multimodal interfaces, and an ACM Fellow (see https://www.monash.edu/it/our-research/graduate-research/scholarship-funded-phd-research-projects/projects/human-centred-mobile-and-multimodal-interfaces). HCI Group & University: The HCI group designs, builds, and evaluates state-of-the-art computer interface technologies. It has multidisciplinary interests spanning computer science and engineering, cognitive and learning sciences, communications, medicine and health, media design, and other topics. The group's research is based on empirical science, statistics, deep learning and data analytics, and HCI methods. 
We are interested in applications in many areas, such as health, education, communications, personal assistance, robotics, automotive, and digital arts. Monash is Australia's largest university, and ranks in the top 60 universities worldwide, with Computer and Information Systems rated in the top 70 (QS World University rankings 2018). It is growing rapidly in human-centered interfaces, software, and cyber-security, but also includes data science and machine learning, artificial intelligence and robotics, computational biology, social computing, digital humanities, and traditional computer science. Experimental Labs & Design Spaces: The university has made recent strategic investments in facilities for prototyping innovative concepts, collecting and analyzing data, and displaying digital installations and interactive media, including sensiLab (tangible, wearable, augmented and virtual reality, multimodal-multimedia, maker-space), the Immersive Visualization platform and Analytics lab, the Centre for Data Science, and the ARC Centre of Excellence on Integrative Brain Function (pioneering new multimodal imaging techniques for data exploration). The university is currently investing in HCI group facilities for prototyping and developing new mobile, multimodal, and multisensory interfaces, analyzing human multimodal interaction (e.g., whole-body activity, speech, writing), and predicting users' cognitive and health status. Melbourne Area: Melbourne has recently been rated the #1 city worldwide for quality of life (see Economist & Guardian, http://www.economist.com/blogs/graphicdetail/2016/08/daily-chart-14 and https://www.theguardian.com/australia-news/2016/aug/18/melbourne-wins-worlds-most-liveable-city-award-sixth-year-in-a-row), with excellent education, healthcare, infrastructure, low crime, and exceptional cuisine, cultural activities, and creative design. 
The regional area is renowned for its dramatic coastline, extensive parks, exotic wildlife, and Yarra Valley wine region. Scholarship Support: The funded PhD positions are for full-time study at the Monash Caulfield campus in Melbourne, to begin during the 2018 academic year. They include a $26,682 AUD scholarship (non-taxable), relocation allowance ($1,000), tuition waiver, and Overseas Student Health Cover for international students. Full-time Monash Graduate Scholarships for PhD students are for 3 to 3.5 years. PhD students may be eligible for other Monash scholarships, or for supplements for data collection, publications, and conference travel. Monash has a Women in IT Program and scholarships for female students. We especially welcome female, minority and international applicants. For further information, see https://www.monash.edu/__data/assets/pdf_file/0009/704763/2017-stipend-conditions-of-award-1.pdf and https://www.monash.edu/graduate-research/future-students/support, or email Dr. Sharon Oviatt (with 'PhD Student Application' in the header) at: Oviatt@incaadesigns.org Scholarship Requirements: To be competitive, applicants should have first-class honors (H1), or the equivalent grade of 80% or above. Research experience and publications are also considered, so students with lower grades but strong evidence of research aptitude and experience are encouraged to apply. To apply: For information about required application materials, and to submit an application online by the deadline, see https://www.monash.edu/graduate-research/future-students/apply and https://www.monash.edu/graduate-research/contact-us/faqs/how-to-apply. Academic term 1 begins in late February, and term 2 in late July. Graduate applications with scholarship support are due March 31 or August 31 for international students, and May 31 or Oct. 31 for domestic Australian students.
| ||||||||||||
6-54 | (2018-01-11) Postdoc position at IDIAP, Martigny, Switzerland We have a new opening for a post-doctoral researcher at Idiap Research Institute. It is
| ||||||||||||
6-55 | (2018-01-18) (SENIOR) SPEECH SCIENTIST at Voicebox, München, Germany (SENIOR) SPEECH SCIENTIST
| ||||||||||||
6-56 | (2018-01-18) 3 permanent (indefinite tenure) faculty positions at Telecom ParisTech, Paris, France Telecom ParisTech has three new permanent (indefinite tenure) faculty positions:
| ||||||||||||
6-57 | (2018-01-20) Machine Learning Software Engineer, Adobe Research - Speech Recognition, San Jose, CA, USA Machine Learning Software Engineer, Adobe Research - Speech Recognition
| ||||||||||||
6-58 | (2018-01-25) 2018 PHD RESEARCH FELLOWSHIPS at University of Trento, Italy 2018 PHD RESEARCH FELLOWSHIPS (ML/Dialogue/Language/Speech)
| ||||||||||||
6-59 | (2018-01-25) PhD student in Robot-assisted Language Learning at KTH Royal Institute of Technology, Stockholm, Sweden PhD student in Robot-assisted Language Learning at KTH Royal Institute of Technology, Stockholm, Sweden
Applications close January 31st, 2018.
Olov Engwall
Professor in Speech Communication
| ||||||||||||
6-60 | (2018-01-26) Maître de conférences (Associate Professor) position at ENSIMAG, Grenoble, France Maître de conférences position at ENSIMAG.
Host school: ENSIMAG
School website: http://ensimag.grenoble-inp.fr/
School contacts: Jean-Louis.Roch@grenoble-inp.fr, Olivier.Francois@grenoble-inp.fr
Teaching profile:
ENSIMAG is recruiting a maître de conférences in applied mathematics or computer science
to develop its courses in statistical machine learning, artificial intelligence,
data visualization, high-performance computing, and big data. The application file
should demonstrate the candidate's interdisciplinary profile, their ability to
take on responsibilities within the institution, and a substantial record of work
or publications related to one or more branches of data science. Beyond
teaching data science (program synthesis from data, decision
support), the recruited person will be expected to contribute to the ENSIMAG core
curriculum (1st year and about 75% of the 2nd-year tracks), which forms the foundation of our
engineering students' training. They will be expected to get involved and take on responsibilities in
degree programs such as the 'mastère big data' or the 'Data Science' master's. In
partnership with industry, the recruited person could supervise the organization of
challenges and hackathons in order to expand the school's contacts in
artificial intelligence and big data. In collaboration with the relevant teaching teams,
they will be involved in setting up project-based courses and
digitally supported teaching.
RESEARCH
Host laboratory: LIG / LJK
Laboratory website: http://www.liglab.fr/
Laboratory contacts: Eric Gaussier (eric.gaussier@imag.fr), LIG
Stéphane Labbé (stephane.labbe@imag.fr), LJK
Research profile:
The candidate will conduct research in artificial intelligence or data
science, and will show openness to the various possible approaches in this
field. Preferred topics are learning on complex data, structured
or unstructured, deep learning and neural networks, and in particular
questions of optimization, causality, generalization capacity, and their
mathematical analysis. Among the applications of machine learning and deep learning, particular
interest is given to signal and image processing, representation
learning, learning with multimedia data and with language data for
natural language processing problems, the transparency
of learning mechanisms, as well as applications in biology, health, the
humanities, social networks, physics, the environment, etc.
The recruitment will strengthen the links between LIG and LJK in the fields of data
science and machine learning. The two laboratories are located on the
Saint Martin d'Hères campus and have active collaborations, in particular within the
large-scale data and knowledge processing axis (AMA, GETALP, MRIM,
and SLIDE teams), the PERVASIVE and TYREX teams of LIG, and within the Proba-Stat department (DAO,
SVH, MISTIS, and FIGAL teams) and the THOTH team of LJK. Joint projects between the
two laboratories also include prediction and classification problems with
functional structured data, optimal transport for machine learning,
and sparsity and regularization problems for multitask learning and their
resolution by stochastic optimization methods. The recruited person will demonstrate their
ability to play an active role in academic (ANR, FUI, PFIA, EU, ...)
and industrial contract projects on these highly promising topics.
ADMINISTRATIVE DUTIES
Specific features or particular constraints of the position:
Administrative duties associated with the maître de conférences role: responsibility for teaching
units, degree tracks, or year-level coordination.
Expected skills:
Knowledge: teaching of computer science, artificial intelligence, and data science
Know-how: pedagogy and taking on responsibilities within the school
Interpersonal skills: teamwork
Keywords: artificial intelligence, data science, big data, machine learning
------------------------
Laurent Besacier
Professor at Univ. Grenoble Alpes (UGA)
Laboratoire d'Informatique de Grenoble (LIG)
Junior Member of the Institut Universitaire de France (IUF 2012-2017)
Head of the GETALP team at LIG
Director of the doctoral school (ED) MSTII
-------------------------
!! New contact details !!: LIG
Laboratoire d'Informatique de Grenoble, Bâtiment IMAG, 700 avenue Centrale, Domaine Universitaire, 38401 St Martin d'Hères. For any matter concerning ED MSTII, contact ed-mstii@univ-grenoble-alpes.fr
New tel: 0457421454
--------------------------
| ||||||||||||
6-61 | (2018-01-27) Post-doctoral researcher at Idiap Research Institute, Martigny, Switzerland Post-doctoral researcher at Idiap Research Institute. It is a joint position with the Swiss Center for Electronics and
| ||||||||||||
6-62 | (2018-01-27) Post Doctoral Position (12 months) at INRIA Nancy, France Post Doctoral Position (12 months) Natural language processing: automatic speech recognition system using deep neural networks without out-of-vocabulary words _______________________________________ - Location:INRIA Nancy Grand Est research center, France
- Research theme: PERCEPTION, COGNITION, INTERACTION
- Project-team: Multispeech
- Scientific Context:
More and more audio and video content appears on the Internet each day: about 300 hours of multimedia are uploaded per minute. In these multimedia sources, audio data represents a very important part. If these documents are not transcribed, automatic content retrieval is difficult or impossible. The classical approach to spoken content retrieval from audio documents is automatic speech recognition followed by text retrieval.
An automatic speech recognition (ASR) system uses a lexicon containing the most frequent words of the language, and only the words of the lexicon can be recognized by the system. New proper names (PNs) appear constantly, requiring dynamic updates of the lexicons used by the ASR. These PNs evolve over time, and no vocabulary will ever contain all existing PNs. When a person searches for a document, proper names are used in the query. If these PNs have not been recognized, the document cannot be found. These missing PNs can be very important for the understanding of the document.
In this study, we will focus on the problem of proper names in automatic recognition systems. The problem is how to model the proper names relevant to the audio document we want to transcribe.
- Missions:
We assume that in an audio document to transcribe there are missing proper names, i.e. proper names that are pronounced in the audio document but are not in the lexicon of the automatic speech recognition system; these proper names cannot be recognized (out-of-vocabulary proper names, OOV PNs). The purpose of this work is to design a methodology for finding and modeling a list of relevant OOV PNs that correspond to an audio document.
Assuming that we have an approximate transcription of the audio document and a huge text corpus extracted from the Internet, several methodologies could be studied:
The proposed approaches will be validated using the ASR developed in our team.
Keywords: deep neural networks, automatic speech recognition, lexicon, out-of-vocabulary words.
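As a simple illustration of the retrieval problem described above, relevant OOV proper names can be ranked by the similarity between their word embedding and a vector representing the approximate transcription, in the spirit of [Mikolov2013]. The sketch below is only a toy illustration under assumed inputs (the embeddings, words, and names are invented, not the team's actual system or data):

```python
# Toy sketch: rank candidate OOV proper names by cosine similarity between
# their embedding and the centroid of the approximate transcription.
# The embeddings below are invented 3-d vectors; a real system would use
# 100-300 dimensional word2vec-style embeddings trained on a large corpus.
import numpy as np

emb = {
    "football": np.array([0.9, 0.1, 0.0]),
    "match":    np.array([0.8, 0.2, 0.1]),
    "election": np.array([0.0, 0.9, 0.3]),
    "Zidane":   np.array([0.85, 0.15, 0.05]),  # hypothetical football PN
    "Macron":   np.array([0.05, 0.95, 0.2]),   # hypothetical political PN
}

def doc_vector(words):
    """Centroid of the embeddings of the in-vocabulary transcript words."""
    vecs = [emb[w] for w in words if w in emb]
    return np.mean(vecs, axis=0)

def rank_oov_pns(transcript_words, candidate_pns):
    """Sort candidate proper names by cosine similarity to the document."""
    d = doc_vector(transcript_words)

    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    scored = [(pn, cos(emb[pn], d)) for pn in candidate_pns if pn in emb]
    return sorted(scored, key=lambda x: x[1], reverse=True)

# For a football transcript, the football player outranks the politician.
ranked = rank_oov_pns(["football", "match"], ["Zidane", "Macron"])
```

The top-ranked names would then be added to the ASR lexicon before re-decoding the audio.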
- Bibliography: [Mikolov2013] Mikolov, T., Chen, K., Corrado, G. and Dean, J. 'Efficient estimation of word representations in vector space', Workshop at ICLR, 2013. [Deng2013] Deng, L., Li, J., Huang, J.-T., Yao, K., Yu, D., Seide, F., Seltzer, M., Zweig, G., He, X., Williams, J., Gong, Y. and Acero, A. 'Recent advances in deep learning for speech research at Microsoft', Proceedings of ICASSP, 2013. [Sheikh2016] Sheikh, I., Illina, I., Fohr, D., Linarès, G. 'Improved Neural Bag-of-Words Model to Retrieve Out-of-Vocabulary Words in Speech Recognition', Interspeech, 2016. [Li2017] Li, J., Ye, G., Zhao, R., Droppo, J., Gong, Y. 'Acoustic-to-Word Model without OOV', ASRU, 2017.
- Skills and profile: PhD in computer science; background in statistics and natural language processing; experience with deep learning tools (Keras, Kaldi, etc.) and programming skills (Perl, Python). - Additional information:
Supervision and contact: Irina Illina, LORIA/INRIA (illina@loria.fr), Dominique Fohr INRIA/LORIA (dominique.fohr@loria.fr) https://members.loria.fr/IIllina/, https://members.loria.fr/DFohr/
Additional links : Ecole Doctorale IAEM Lorraine
Deadline to apply: June 6th. Selection results: end of June.
Duration: 12 months. Starting date: between Nov. 1st, 2018 and Jan. 1st, 2019.
The candidates must have defended their PhD after Sept. 1st, 2016 and before the end of 2018. The candidates are required to provide the following documents in a single PDF or ZIP file:
In addition, at least one recommendation letter from the PhD advisor should be sent directly by its author to the prospective postdoc advisor.
Help and benefits:
| ||||||||||||
6-63 | (2018-01-27) PhD grant Natural language processing: adding new words to a speech recognition system using Deep Neural Networks, INRIA/LORIA, Nancy, France Natural language processing: adding new words to a speech recognition system using Deep Neural Networks
- Location: INRIA/LORIA Nancy Grand Est research center France
- Research theme:Perception, Cognition, Interaction
- Project-team: Multispeech
- Scientific Context:
Voice is seen as the next big field for computer interaction. The research company Gartner reckons that by 2018, 30% of all interactions with devices will be voice-based: people can speak up to four times faster than they can type, and the technology behind voice interaction is improving all the time. As of October 2017, Amazon Echo is present in about 4% of American households. Voice assistants are proliferating in smartphones too: Apple's Siri handles over 2 billion commands a week, and 20% of Google searches on Android-powered handsets in America are done by voice input. Proper nouns (PNs) play a particular role: they are often important for understanding a message, and they vary enormously. For example, a voice assistant should know the names of all your friends; a search engine should know the names of all famous people and places, names of museums, etc. An automatic speech recognition (ASR) system uses a lexicon containing the most frequent words of the language, and only the words of the lexicon can be recognized by the system. It is impossible to add all possible proper names because there are millions of proper names and new ones appear every day. A competitive solution is to dynamically add new PNs to the ASR system. The idea is to add only the relevant proper names: for instance, to transcribe a video document about football results, we should add the names of famous football players rather than politicians. In this study, we will focus on the problem of proper names in automatic recognition systems. The problem is to find the proper names relevant to the audio document we want to transcribe. To select the relevant proper names, we propose to use an artificial neural network.
- Missions:
We assume that in an audio document to transcribe there are missing proper names, i.e. proper names that are pronounced in the audio document but are not in the lexicon of the automatic speech recognition system; these proper names cannot be recognized (out-of-vocabulary proper names, OOV PNs). The goal of this PhD thesis is to find a list of relevant OOV PNs that correspond to an audio document and to integrate them into the speech recognition system. We will use a deep neural network (DNN) to find relevant OOV PNs. The input of the DNN will be the approximate transcription of the audio document, and the output will be the list of relevant OOV PNs with their probabilities. The retrieved proper names will be added to the lexicon and a new recognition pass over the audio document will be performed.
During the thesis, the student will investigate methodologies based on deep neural networks [Deng2013]. The candidate will study different DNN architectures and different representations of documents [Mikolov2013]. The student will validate the proposed approaches using the automatic transcription system for radio broadcasts developed in our team.
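The DNN described above (transcription in, per-PN relevance probabilities out) can be sketched as a neural bag-of-words model in the spirit of [Sheikh2016]. The sketch below uses NumPy only and untrained random weights; the vocabulary, candidate names, and layer sizes are invented for illustration, not the team's actual system (which a student would build and train with tools such as Keras):

```python
# Minimal sketch: a neural bag-of-words model mapping an approximate
# transcription to one relevance probability per candidate OOV proper name.
# Weights are random (untrained); a real system would learn them from data.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["football", "match", "goal", "election", "vote"]   # toy ASR lexicon
candidate_pns = ["Zidane", "Mbappe", "Macron"]              # toy OOV PN list

def bag_of_words(words):
    """Normalized term-count vector over the fixed in-vocabulary lexicon."""
    v = np.array([words.count(w) for w in vocab], dtype=float)
    return v / max(v.sum(), 1.0)

# One hidden layer, one sigmoid output per candidate PN (independent probabilities).
W1 = rng.normal(size=(len(vocab), 16))
b1 = np.zeros(16)
W2 = rng.normal(size=(16, len(candidate_pns)))
b2 = np.zeros(len(candidate_pns))

def pn_probabilities(words):
    """Forward pass: P(PN is relevant | document), one value per candidate PN."""
    h = np.tanh(bag_of_words(words) @ W1 + b1)
    logits = h @ W2 + b2
    return 1.0 / (1.0 + np.exp(-logits))    # element-wise sigmoid

# probs[i] is the (untrained) relevance probability of candidate_pns[i];
# PNs above a threshold would be added to the lexicon before re-decoding.
probs = pn_probabilities(["football", "match", "goal", "goal"])
```

Training would use documents whose relevant OOV PNs are known, with a per-output binary cross-entropy loss.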
- Bibliography:
[Mikolov2013] Mikolov, T., Chen, K., Corrado, G. and Dean, J. 'Efficient estimation of word representations in vector space', Workshop at ICLR, 2013.
[Deng2013] Deng, L., Li, J., Huang, J.-T., Yao, K., Yu, D., Seide, F., Seltzer, M., Zweig, G., He, X., Williams, J., Gong, Y. and Acero, A. 'Recent advances in deep learning for speech research at Microsoft', Proceedings of ICASSP, 2013.
[Sheikh2016] Sheikh, I., Illina, I., Fohr, D., Linarès, G. 'Improved Neural Bag-of-Words Model to Retrieve Out-of-Vocabulary Words in Speech Recognition', Interspeech, 2016.
- Skills and profile: Master's degree in computer science; background in statistics and natural language processing; experience with deep learning tools (Keras, Kaldi, etc.) and programming skills (Perl, Python). - Additional information:
Supervision and contact: Irina Illina, LORIA/INRIA (illina@loria.fr), Dominique Fohr INRIA/LORIA (dominique.fohr@loria.fr) https://members.loria.fr/IIllina/, https://members.loria.fr/DFohr/ Additional links: Ecole Doctorale IAEM Lorraine
Duration: 3 years. Starting date: between Oct. 1st, 2018 and Jan. 1st, 2019.
Deadline to apply: May 1st, 2018.
The candidates are required to provide the following documents in a single PDF or ZIP file:
In addition, one recommendation letter from the person who supervised the Master's thesis (or research project or internship) should be sent directly by its author to the prospective PhD advisor.
| ||||||||||||
6-64 | (2018/02/01) Junior Linguist (French), Paris, France Junior Linguist [French]
Job Title: Junior Linguist [French] Linguistic Field(s): Phonetics, Phonology, Morphology, Semantics, Syntax, Lexicography, NLP Location: Paris, France Job description: The role of the Junior Linguist is to annotate and review linguistic data in French. The Junior Linguist will also contribute to a number of other tasks to improve natural language processing. The tasks include:
Minimum Requirements:
Desired Skills:
CV + motivation letter: maroussia.houimli@adeccooutsourcing.fr
|