ISCApad #264
Wednesday, June 10, 2020, by Chris Wellekens
6-1 | (2019-12-02) Two faculty positions (enseignant-chercheur), Université Paris-Saclay, France. Two faculty positions (one full professor and one associate professor) are to be opened for competitive recruitment at Université Paris-Saclay.
6-2 | (2019-12-03) PhD studentships, University of Glasgow, UK. The School of Computing Science at the University of Glasgow is offering studentships and excellence bursaries for PhD study. The following sources of funding are available:
* EPSRC DTA awards: open to UK or EU applicants who have lived in the UK for at least 3 years (see https://epsrc.ukri.org/skills/students/help/eligibility/); covers fees and living expenses
* College of Science and Engineering Scholarship: open to all applicants (UK, EU and international); covers fees and living expenses
* Centre for Doctoral Training in Socially Intelligent Artificial Agents: open to UK or EU applicants who have lived in the UK for at least 3 years, through a national competition (see https://socialcdt.org)
* China Scholarship Council Scholarship nominations: open to Chinese applicants; covers fees and living expenses
* Excellence Bursaries: full fee discount for UK/EU applicants; partial discount for international applicants
* Further scholarships (contact a potential supervisor for details): open to UK or EU applicants
Whilst the above funding is open to students in all areas of computing science, applications in the area of Human-Computer Interaction are welcomed.
Please find below a list of available supervisors in HCI and their research areas.
Available supervisors and their research topics:
* Prof Stephen Brewster (http://mig.dcs.gla.ac.uk/): Multimodal Interaction, MR/AR/VR, Haptic feedback. Email: Stephen.Brewster@glasgow.ac.uk
* Prof Matthew Chalmers (https://www.gla.ac.uk/schools/computing/staff/matthewchalmers/): mobile and ubiquitous computing, focusing on ethical systems design and healthcare applications. Email: Matthew.Chalmers@glasgow.ac.uk
* Prof Alessandro Vinciarelli (http://www.dcs.gla.ac.uk/vincia/): Social Signal Processing. Email: Alessandro.Vinciarelli@glasgow.ac.uk
* Dr Fani Deligianni (http://fdeligianni.site/): Characterising uncertainty, eye-tracking, EEG, bimanual teleoperations. Email: fadelgr@gmail.com
* Dr Helen C. Purchase (http://www.dcs.gla.ac.uk/~hcp/): Visual Communication, Information Visualisation, Visual Aesthetics. Email: Helen.Purchase@glasgow.ac.uk
* Dr Mohamed Khamis (http://mkhamis.com/): Human-centered Security and Privacy, Eye Tracking and Gaze-based Interaction, Interactive Displays. Email: Mohamed.Khamis@glasgow.ac.uk
The closing date for applications is 31 January 2020. For more information about how to apply, see https://www.gla.ac.uk/schools/computing/postgraduateresearch/prospectivestudents. This web page includes information about the research proposal, which is required as part of your application.
Applicants are strongly encouraged to contact a potential supervisor and discuss an application before the submission deadline.
6-3 | (2019-12-03) Researcher position at LIMSI, Orsay, France. LIMSI is recruiting a researcher (CC) in natural language processing and translation.
6-4 | (2019-12-06) Final-year engineering or Master 2 internship, INA, Bry-sur-Marne, France

Automatic segmentation and detection of conflictual situations in political interviews. Final-year engineering or Master 2 internship, academic year 2019-2020.
Keywords: machine learning, diarization, digital humanities, political speech, expressivity.

Context: The Institut national de l'audiovisuel (INA) is a public industrial and commercial institution (EPIC) whose main mission is to archive and promote the French audiovisual heritage (radio, television and web media). INA also carries out scientific research, training and production. This internship is part of the OOPAIP project (ontology and tools for the annotation of political interventions), a transdisciplinary project led by INA and the CESSP (Centre européen de sociologie et de science politique) of Université Paris 1 Panthéon-Sorbonne. Its objective is to design new approaches for detailed, qualitative and quantitative analyses of mediatized political interventions in France. Part of the project studies the dynamics of conflictual interactions in political interviews and debates, which requires a fine-grained description and a large corpus in order to generalize the models. The technological challenges concern the performance of speaker and speaking-style segmentation algorithms: improving their precision and adding overlapped-speech detection, measures of vocal effort and expressive features will streamline the manual annotation work.

Internship objectives: The internship mainly aims at improving the automatic segmentation of political interviews in order to support research in political science. The research theme we will focus on is the identification of conflictual situations, and in particular the detection of 'brouhaha' (overlapped speech). At a finer level, we would like to extract speech-signal descriptors correlated with the conflict level of the exchanges, based for example on the activation level (an intermediate level between the signal and expressivity [Rilliard et al., 2018]) or on vocal effort [Liénard, 2019]. The internship will initially build on two corpora totalling 30 political interviews finely annotated in speech turns within the OOPAIP project. It will start with a state of the art of diarization (speaker segmentation and clustering [Broux et al., 2018]) and of overlapped-speech detection [Chowdhury et al., 2019]. The next step will be to propose solutions based on recent frameworks to improve the localization of turn boundaries, especially when speaker changes are frequent, the limiting case being brouhaha. The second part of the internship will address a finer measure of the conflict level of the exchanges, by searching for the most relevant descriptors and by designing learning architectures to model it. The programming language used for this internship will be Python. The intern will have access to INA's computing resources (servers and clusters), as well as to a powerful desktop machine with two recent-generation GPUs.
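As a rough illustration of the kind of descriptors mentioned above, the sketch below turns a diarization output into two per-window measures, overlap ratio and speaker-change rate, that one could correlate with the conflict level of an exchange. The segment format, frame step and window sizes are assumptions made for this example, not specifications of the OOPAIP project.

```python
# Illustrative sketch: derive simple "conflict" descriptors from a diarization
# output given as (start_sec, end_sec, speaker_id) segments. Assumes the
# recording is `duration` seconds long and that win < duration.
import numpy as np

def conflict_descriptors(segments, duration, frame=0.01, win=5.0, hop=1.0):
    n_frames = int(duration / frame)
    speakers = sorted({spk for _, _, spk in segments})
    activity = np.zeros((len(speakers), n_frames), dtype=bool)
    for start, end, spk in segments:
        i, j = int(start / frame), int(end / frame)
        activity[speakers.index(spk), i:j] = True

    n_active = activity.sum(axis=0)          # how many speakers talk in each frame
    dominant = activity.argmax(axis=0)       # index of the (first) active speaker
    changes = np.diff(dominant) != 0         # frame-level speaker changes

    feats = []
    for t0 in np.arange(0.0, duration - win, hop):
        i, j = int(t0 / frame), int((t0 + win) / frame)
        overlap_ratio = float(np.mean(n_active[i:j] >= 2))   # fraction of overlapped time
        change_rate = float(changes[i:j - 1].sum() / win)    # speaker changes per second
        feats.append((t0, overlap_ratio, change_rate))
    return np.array(feats)
```

Windows with a high overlap ratio and a high change rate are natural candidates for the 'brouhaha' situations the annotators are interested in.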
Expected outcomes: Depending on the maturity of the work, several ways of promoting the intern's results will be considered:
• release of the analysis tools under an open-source licence via INA's GitHub repository: https://github.com/ina-foss
• writing of scientific publications.
Internship conditions: The internship will run for 4 to 6 months within INA's Research department, on the Bry 2 site, 18 Avenue des frères Lumière, 94360 Bry-sur-Marne. The intern will be supervised by Marc Evrard (mevrard@ina.fr). Stipend: about 550 euros per month.
Profile:
• final-year student of a five-year degree (engineering school or Master 2) in computer science and AI;
• proficiency in Python and experience with machine learning libraries (scikit-learn, TensorFlow, PyTorch);
• strong interest in the humanities and social sciences, in particular digital humanities and political science;
• ability to carry out a literature review based on scientific papers written in English.
References:
Broux, P. A., Desnous, F., Larcher, A., Petitrenaud, S., Carrive, J., & Meignier, S. (2018). 'S4D: Speaker Diarization Toolkit in Python'. Interspeech 2018.
Chowdhury, S. A., Stepanov, E. A., Danieli, M., & Riccardi, G. (2019). 'Automatic classification of speech overlaps: Feature representation and algorithms'. Computer Speech & Language, 55, 145-167.
Liénard, J.-S. (2019). 'Quantifying vocal effort from the shape of the one-third octave long-term-average spectrum of speech'. J. Acoust. Soc. Am., 146(4).
Rilliard, A., d'Alessandro, C., & Evrard, M. (2018). 'Paradigmatic variation of vowels in expressive speech: Acoustic description and dimensional analysis'. The Journal of the Acoustical Society of America, 143(1), 109-122.
6-5 | (2019-12-07) Internship at IRCAM, Paris, France

Deep Disentanglement of Speaker Identity and Phonetic Content for Voice Conversion
Dates: 01/02/2020 to 30/06/2020. Laboratory: STMS Lab (IRCAM / CNRS / Sorbonne Université). Location: IRCAM, Sound Analysis and Synthesis team. Supervisors: Nicolas Obin, Axel Roebel. Contact: Nicolas.Obin@ircam.fr, Axel.Roebel@ircam.fr

Context: Voice identity conversion consists in modifying the characteristics of a 'source' voice so as to reproduce the characteristics of a 'target' voice, learned from a collection of examples of that target voice. The task has become widely popular in recent years with the emergence of 'deep fakes', the goal being to transpose to speech the successes obtained in the image domain. Current research directions therefore rely on neural architectures such as sequence-to-sequence models, generative adversarial networks (GANs [Goodfellow et al., 2014]) and their variants for learning from unpaired data (Cycle-GAN [Kaneko and Kameoka, 2017] or AttGAN [He et al., 2019]). The major challenges of identity conversion include learning identity transformations efficiently from small databases (a few minutes of speech) and separating the factors of variability of speech, so that only the speaker's identity is modified, without altering or degrading the linguistic and expressive content of the voice.

Objectives: The internship will extend the neural voice identity conversion system currently developed within the ANR project TheVoice (https://www.ircam.fr/projects/pages/thevoice/). Its main focus will be to integrate information about the linguistic content efficiently into the existing neural conversion system. This objective involves the following tasks:
- development of a representation of the phonetic information (e.g. Phonetic PosteriorGrams [Sun et al., 2016]) and its integration into the current conversion system;
- application and further development of techniques for disentangling speaker identity and phonetic content when learning the conversion [Mathieu et al., 2016; Hamidreza et al., 2019];
- evaluation of the results against state-of-the-art conversion systems, on reference databases such as VCC2018 or LibriSpeech.
The problems addressed during the internship will be selected at its start, after an orientation phase and a literature review. The solutions developed during the internship will be integrated into IRCAM's voice identity conversion system, with possible industrial and professional exploitation. For example, IRCAM's conversion system has been used in professional production projects to recreate the voices of historical figures: Marshal Pétain in the documentary 'Juger Pétain' (2012) and Louis de Funès in the film 'Pourquoi j'ai pas mangé mon père' by Jamel Debbouze (2015). The internship will build on the expertise of the Sound Analysis and Synthesis team of the STMS laboratory (IRCAM/CNRS/Sorbonne Université) in speech signal processing and neural network training, and on its extensive experience in voice identity conversion [Villavicencio et al., 2009; Huber, 2015].
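To make the disentanglement idea concrete, here is a minimal PyTorch sketch (not the IRCAM/TheVoice system) of a converter conditioned on a phonetic-content representation and a target-speaker embedding, so that identity can be swapped at conversion time while the phonetic content is preserved. All dimensions, layer choices and the loss mentioned in the comments are illustrative assumptions.

```python
# Minimal sketch: decode mel frames from (i) a PPG-like phonetic representation
# and (ii) a speaker embedding. Sizes and architecture are assumptions.
import torch
import torch.nn as nn

class Converter(nn.Module):
    def __init__(self, ppg_dim=144, n_speakers=10, spk_dim=64, mel_dim=80, hidden=256):
        super().__init__()
        self.content_enc = nn.GRU(ppg_dim, hidden, batch_first=True, bidirectional=True)
        self.spk_table = nn.Embedding(n_speakers, spk_dim)
        self.decoder = nn.GRU(2 * hidden + spk_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, mel_dim)

    def forward(self, ppg, speaker_id):
        # ppg: (batch, frames, ppg_dim); speaker_id: (batch,)
        content, _ = self.content_enc(ppg)                      # (B, T, 2*hidden)
        spk = self.spk_table(speaker_id).unsqueeze(1)           # (B, 1, spk_dim)
        spk = spk.expand(-1, content.size(1), -1)               # broadcast over time
        dec, _ = self.decoder(torch.cat([content, spk], dim=-1))
        return self.out(dec)                                    # predicted mel frames

# Training would minimize e.g. an L1 reconstruction loss on the source speaker,
# possibly with an adversarial term discouraging speaker leakage into `content`;
# at conversion time only `speaker_id` is changed.
model = Converter()
mel_hat = model(torch.randn(2, 120, 144), torch.tensor([3, 7]))
```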
Expected skills:
- good command of machine learning, in particular neural networks;
- good command of digital audio signal processing (time-frequency analysis, parametric analysis of audio signals, etc.);
- good command of Python programming and of the TensorFlow environment;
- autonomy, teamwork, productivity, rigour and sound methodology.
Compensation: internship stipend according to the legislation in force, plus social benefits.
Application deadline: 20/12/2019.
References:
[Goodfellow et al., 2014] Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., & Bengio, Y. (2014). 'Generative Adversarial Networks'. arXiv:1406.2661 [cs, stat].
[Hamidreza et al., 2019] Mohammadi, S. H., & Kim, T. (2019). 'One-shot Voice Conversion with Disentangled Representations by Leveraging Phonetic Posteriorgrams'. Interspeech 2019.
[He et al., 2019] He, Z., Zuo, W., Kan, M., Shan, S., & Chen, X. (2019). 'AttGAN: Facial attribute editing by only changing what you want'. IEEE Transactions on Image Processing, 28(11).
[Huber, 2015] Huber, S. (2015). 'Voice Conversion by modelling and transformation of extended voice characteristics'. PhD thesis, Université Pierre et Marie Curie (Paris VI).
[Kaneko and Kameoka, 2017] Kaneko, T., & Kameoka, H. (2017). 'Parallel-Data-Free Voice Conversion Using Cycle-Consistent Adversarial Networks'. arXiv:1711.11293 [cs, eess, stat].
[Mathieu et al., 2016] Mathieu, M., Zhao, J., Sprechmann, P., Ramesh, A., & LeCun, Y. (2016). 'Disentangling factors of variation in deep representations using adversarial training'. NIPS 2016.
[Sun et al., 2016] Sun, L., Li, K., Wang, H., Kang, S., & Meng, H. (2016). 'Phonetic posteriorgrams for many-to-one voice conversion without parallel data training'. IEEE International Conference on Multimedia and Expo (ICME), pp. 1-6.
[Villavicencio et al., 2009] Villavicencio, F., Röbel, A., & Rodet, X. (2009). 'Applying improved spectral modelling for high quality voice conversion'. Proc. IEEE ICASSP, pp. 4285-4288.
6-6 | (2019-12-07) Assistant engineer in production, LPL, Aix-en-Provence, France
The application campaign is open until 17 January, but applications will be reviewed on a rolling basis. Please feel free to forward this information to anyone who may be interested.
6-7 | (2019-12-07) One-year post-doc/engineer position at LIA, Avignon, France, in the Vocal Interaction Group: multimodal man-robot interface for social spaces.
6-8 | (2019-12-05) Postdoctoral Fellowship, University of Connecticut Health, Farmington, CT, USA

Postdoctoral Fellowship, Speech Processing in Noise. University of Connecticut Health. Location: Farmington, CT. Start date: January 2020, or thereafter. Duration: initially 1 year, with potential for extension. Salary: depends on experience, based on the NIH range; benefits include health care, retirement contributions, and paid leave for vacation, personal days, holidays and sickness.
Application process: Please send your résumé, a one-page cover letter that describes your research interests and experience, a list of publications (copies of the most relevant ones optional), and contact information for three references to Dr Insoo Kim (ikim@uchc.edu).
A Postdoctoral Fellowship is available in the Division of Occupational Medicine, Department of Medicine, at the University of Connecticut Health to investigate algorithms for improving speech intelligibility in environmental noise. The work will involve simulating the noise of machines from known frequency spectra and creating speech-in-noise test files using MATLAB for replay to subjects in listening tests. The test files may be processed electronically to improve intelligibility before the psychoacoustic testing. The position requires knowledge of, and practical experience with, speech or audio digital signal processing; proficiency with MATLAB and Simulink simulations; and familiarity with psychoacoustic testing of speech intelligibility in noise and with the development of embedded systems or digital signal processors.
The Fellow will participate in ongoing research projects involving speech processing, and will be responsible for implementing the algorithms for improving speech communication in noise, conducting all psychoacoustic tests used to establish proof of concept, and data analysis and interpretation. The Fellow will also have opportunities to supervise graduate and undergraduate students.
Candidates should have good oral and written English communication skills, be capable of independent work as part of a multi-disciplinary team, be able to work on multiple projects at the same time, publish results in academic journals and participate in grant proposal preparation. They should have a Ph.D. degree in Acoustics, Electrical, Computer or Biomedical Engineering, or a related field, with appropriate experience. The initial appointment is for a period of one year with potential for further extension. The review of applications will start immediately and will continue until the position is filled.
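The core preparation step described above, mixing a speech recording with simulated machine noise at a prescribed signal-to-noise ratio before a listening test, is specified in MATLAB in the ad; purely for illustration, a minimal Python/NumPy sketch of the same operation is given below. The file names, mono-audio assumption and the 10 dB target SNR are assumptions of the example.

```python
# Illustrative sketch: scale a noise track so that the speech-to-noise power
# ratio equals the requested SNR (in dB), then mix and normalize. Assumes mono files.
import numpy as np
import soundfile as sf

def mix_at_snr(speech, noise, snr_db):
    noise = np.resize(noise, speech.shape)             # loop/trim noise to speech length
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    mix = speech + gain * noise
    return mix / max(1e-9, float(np.max(np.abs(mix))))  # avoid clipping on write

speech, fs = sf.read("sentence.wav")                    # hypothetical test sentence
noise, _ = sf.read("machine_noise.wav")                 # e.g. synthesized from a known spectrum
sf.write("sentence_snr10.wav", mix_at_snr(speech, noise, 10.0), fs)
```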
6-9 | (2019-12-08) PhD studentship, Utrecht University, The Netherlands

The Social and Affective Computing group at the Utrecht University Department of Information and Computing Sciences is looking for a PhD candidate to conduct research on explainable and accountable affective computing for mental healthcare scenarios. The five-year position includes 70% research time and 30% teaching time. The post presents an excellent opportunity to develop an academic profile as a competent researcher and able teacher.
Affective computing has great potential for clinician support systems, but it needs to produce insightful, explainable, and accountable results. Cross-corpus and cross-task generalization of approaches, as well as efficient and effective ways of leveraging multimodality, are some of the main challenges in the field. Furthermore, data are scarce, and class imbalance is expected. While addressing these issues, precision needs to be complemented by interpretability. Potential investigation areas include, for example, depression, bipolar disorder, and dementia. The PhD candidate is expected to bridge the research efforts in cross-corpus, cross-task multimodal affect recognition with explainable/accountable machine learning, with the aim of efficient, effective and interpretable predictions on a data-scarce and sensitive target problem.
The candidate is also expected to be involved in teaching activities within the Department of Information and Computing Sciences. Teaching activities may include supporting senior teaching staff, conducting tutorials, and supervising student projects and theses. These activities will contribute to the development of the candidate's didactic skills. We are looking for candidates with:
The ideal candidate should express a strong interest in research in affective computing and teaching within the ICS department. The Department finds gender balance specifically and diversity in a broader sense very important; therefore women are especially encouraged to apply. Applicants are encouraged to mention any personal circumstances that need to be taken into account in their evaluation, for example parental leave or military service.
We offer an exciting opportunity to contribute to an ambitious and international education programme with highly motivated students and to conduct your own research project at a renowned research university. You will receive appropriate training, personal supervision, and guidance for both your research and teaching tasks, which will provide an excellent start to an academic career. The candidate is offered a position for five years (1.0 FTE). The gross salary starts at €2,325 and increases to €2,972 per month (scale P according to the Collective Labour Agreement Dutch Universities) for full-time employment. Salaries are supplemented with a holiday bonus of 8% and a year-end bonus of 8.3% per year. In addition, Utrecht University offers excellent secondary conditions, including an attractive retirement scheme, (partly paid) parental leave and flexible employment conditions (multiple choice model). More information about working at Utrecht University can be found here. The application deadline is 01.01.2020.
6-10 | (2019-12-09) Postdoc, IRISA, Rennes, France. IRISA (France) is looking for a 30-month postdoctoral researcher on the topic of Natural Language Processing for Kids, starting in Spring 2020.
More details: http://www.irisa.fr/en/page/30-month-postdoctoral-researcher-natural-language-processing-kids
6-11 | (2019-12-15) PhD grant at the University of Glasgow, Scotland, UK

The School of Computing Science at the University of Glasgow is offering studentships and excellence bursaries for PhD study. The following sources of funding are available:
* EPSRC DTA awards: open to UK or EU applicants who have lived in the UK for at least 3 years (see https://epsrc.ukri.org/skills/students/help/eligibility/); covers fees and living expenses
Whilst the above funding is open to students in all areas of computing science, applications in the area of Human-Computer Interaction are welcomed. Please find below a list of available supervisors in HCI and their research areas.
Available supervisors and their research topics:
* Prof Stephen Brewster (http://mig.dcs.gla.ac.uk/): Multimodal Interaction, MR/AR/VR, Haptic feedback. Email: Stephen.Brewster@glasgow.ac.uk
The closing date for applications is 31 January 2020. For more information about how to apply, see https://www.gla.ac.uk/schools/computing/postgraduateresearch/prospectivestudents. This web page includes information about the research proposal, which is required as part of your application. Applicants are strongly encouraged to contact a potential supervisor and discuss an application before the submission deadline.
6-12 | (2019-12-19) Postdoc at Bielefeld University, Germany The Faculty of Linguistics and Literary Studies at Bielefeld University offers a full-time
research position (postdoctoral position, E13 TV-L, non-permanent) in phonetics
The Faculty of Linguistics and Literary Studies offers a full-time post-doctoral position in phonetics for 3 years (German pay scale: E13). The Bielefeld phonetics group is well known for its research on phenomena in spontaneous interaction, prosody, multimodal speech and spoken human-machine interaction. The Bielefeld campus offers a wide range of options for intra- and interdisciplinary networking and further qualification.
Your responsibilities:
- conduct independent research in phonetics, with a visible focus on modeling or speech technology (65%)
- teach 2 classes (3 hours = 4 teaching units/week) per semester in the degree programmes offered by the linguistics department, including the supervision of BA and MA theses and conducting exams (25%)
- organizational tasks that are part of the self-administration of the university (10%)
Necessary qualifications:
- a Master's degree in a relevant discipline (e.g., phonetics, linguistics, computer science, computational linguistics)
- a doctoral degree in a relevant discipline
- a research focus in phonetics or speech technology
- state-of-the-art knowledge of statistical methods or programming skills
- knowledge of generating and analyzing speech data with state-of-the-art tools
- publications
- teaching experience
- a co-operative and team-oriented attitude
- an interest in spontaneous, interactive, potentially multimodal data
Preferable qualifications:
- experience in the acquisition of third party funding
Remuneration
Salary will be paid according to Remuneration level 13 of the Wage Agreement for Public Service in the Federal States (TV-L). As stipulated in § 2 (1) sentence 1 of the WissZeitVG (fixed-term employment), the contract will end after three years. In accordance with the provisions of the WissZeitVG and the Agreement on Satisfactory Conditions of Employment, the length of contract may differ in individual cases. The employment is designed to encourage further academic qualification. In principle, this full-time position may be changed into a part-time position, as long as this does not conflict with official needs.
Bielefeld University is particularly committed to equal opportunities and the career development of its employees. It offers attractive internal and external training and further training programmes. Employees have the opportunity to use a variety of health, counselling, and prevention programmes. Bielefeld University places great importance on a work-family balance for all its employees.
Application procedure: For full consideration, your application should be received via either post (see postal address below) or email (as a single PDF document) sent to alexandra.kenter@uni-bielefeld.de by January 8th, 2020. Please mark your application with the identification code: wiss19299. To apply, please provide the usual documents (CV including information about your academic education and degrees, professional experience, publications, conference contributions, and further relevant skills and abilities). The application can be written in German or English. Further information on Bielefeld University can be found on our homepage at www.uni-bielefeld.de. Please note that the possibility of privacy breaches and unauthorized access by third parties cannot be excluded when communicating via unencrypted e-mail. Information on the processing of personal data is available at https://www.uni-bielefeld.de/Universitaet/Aktuelles/Stellenausschreibungen/2019_DS-Hinweise_englisch.pdf.
Postal address: Bielefeld University, Faculty of Linguistics and Literary Studies, Prof. Dr. Petra Wagner, P.O. Box 10 01 31, 33501 Bielefeld, Germany.
Contact: Alexandra Kenter, 0521 106-3662, alexandra.kenter@uni-bielefeld.de
6-13 | (2019-12-22) Postdoctoral Researcher, University of Toulouse Jean Jaurès, France. Postdoctoral researcher in psycholinguistics, neurolinguistics, corpus linguistics and clinical linguistics.
6-14 | (2019-12-24) Postdoc proposal, Grenoble, France

Postdoc proposal: Lexicon-Free Spontaneous Speech Recognition using Sequence-to-Sequence Models (December 20, 2019)

1. Postdoc subject
The goal of the project is to advance the state of the art in spontaneous automatic speech recognition (ASR). Recent advances in ASR show excellent performance on tasks such as read-speech ASR (LibriSpeech) or TV shows (MGB challenge), but what about spontaneous communicative speech? This postdoc project will leverage existing transcribed corpora (more than 300 hours) recorded in everyday communication (speech recorded inside a family, in a shop, during an interview, etc.). We will investigate lexicon-free methods based on sequence-to-sequence architectures and analyze the representations learnt by the models. Research topics:
- end-to-end ASR models
- spontaneous speech ASR
- data augmentation for spontaneous language modelling
- use of contextualized language models (such as BERT) for ASR re-scoring
- analyzing the representations learnt by ASR systems

2. Requirements
We are looking for an outstanding and highly motivated postdoc candidate to work on this subject. The following requirements are mandatory:
- PhD degree in natural language processing or speech processing
- excellent programming skills (mostly in Python and deep learning frameworks)
- interest in speech technology and speech science
- good oral and written communication in English (French is a plus but not mandatory)
- ability to work autonomously and in collaboration with other team members and other disciplines

3. Work context
Grenoble Alpes University offers computing facilities as well as remarkable surroundings to explore over the weekends. The postdoc project will be funded by the Grenoble Artificial Intelligence Institute (MIAI). The candidate will work both at the LIG lab (GETALP team) and at the LIDILEM lab. The duration of the postdoc is 18 months.

4. How to apply
Applications should include a detailed CV, a copy of the last diploma, at least two references (people likely to be contacted), a one-page cover letter, and a one-page summary of the PhD thesis. Applications should be sent to laurent.besacier@imag.fr, solange.rossato@imag.fr and aurelie.nardy@univ-grenoble-alpes.fr. Applications will be evaluated as they are received: the position is open until it is filled.
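As a hedged illustration of the 'lexicon-free' direction (not the project's actual system), the sketch below trains a character-level encoder with a CTC loss, so that recognition requires no pronunciation dictionary; a contextual language model such as BERT could then re-score the resulting character hypotheses. The feature dimension, character inventory and all hyperparameters are assumptions of the example.

```python
# Illustrative sketch: character-level acoustic model trained with CTC,
# i.e. no lexicon. Sizes are placeholders, not project choices.
import torch
import torch.nn as nn

chars = list(" 'abcdefghijklmnopqrstuvwxyzàâçéèêëîïôùûü")   # output symbols; CTC blank = index 0
n_classes = len(chars) + 1

class CharASR(nn.Module):
    def __init__(self, feat_dim=80, hidden=320):
        super().__init__()
        self.enc = nn.LSTM(feat_dim, hidden, num_layers=3, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, n_classes)

    def forward(self, feats):                  # feats: (B, T, feat_dim)
        h, _ = self.enc(feats)
        return self.proj(h).log_softmax(-1)    # per-frame character log-probabilities

model = CharASR()
ctc = nn.CTCLoss(blank=0, zero_infinity=True)
feats = torch.randn(4, 200, 80)                               # dummy batch of filterbank frames
targets = torch.randint(1, n_classes, (4, 30))                # dummy character transcripts
loss = ctc(model(feats).transpose(0, 1),                      # CTC expects (T, B, C)
           targets,
           input_lengths=torch.full((4,), 200, dtype=torch.long),
           target_lengths=torch.full((4,), 30, dtype=torch.long))
loss.backward()
```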
6-15 | (2020-01-09) 12-month postdoctoral research position at GIPSA-lab, Grenoble, France

12-month postdoctoral research position in machine learning for neural speech decoding.
Place: GIPSA-lab (CNRS/UGA/Grenoble-INP), in collaboration with the BrainTech laboratory (INSERM). Both laboratories are located on the same campus in Grenoble, France.
Team: CRISSP team at GIPSA-lab (Cognitive Robotics, Interactive Systems and Speech Processing).
Context: This position is part of the ANR (French National Research Agency) BrainSpeak project, which aims at developing a Brain-Computer Interface (BCI) for speech rehabilitation based on large-scale neural recordings. The post-doc will develop new machine learning algorithms to improve the conversion of neural signals into an intelligible acoustic speech signal.
Mission: Investigate deep learning approaches to map intracranial recordings (ECoG) to speech features (spectral, articulatory, or linguistic features). A particular focus will be put on 1) weakly or self-supervised training in order to deal with unlabeled, limited and sparse datasets, 2) introducing prior linguistic information for regularization (e.g. via a neural language model), and 3) online adaptation of the conversion model to cope with potential drift over time of the neural responses.
Requirements and profile:
• PhD in machine learning or signal/image/speech processing
• advanced knowledge of deep learning
• excellent programming skills (mostly Python)
• fluent in English
Duration: 12 months. Monthly salary (before tax): depending on experience. Starting date: early 2020.
How to apply: Send a cover letter, a resume, and references by email to:
• Dr. Thomas Hueber, thomas.hueber@gipsa-lab.fr
• Dr. Laurent Girin, laurent.girin@grenoble-inp.fr
• Dr. Blaise Yvert, blaise.yvert@inserm.fr
Applications will be processed as they arise.
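For readers unfamiliar with the decoding task, the following is a minimal PyTorch sketch, under assumptions of this summary rather than of the BrainSpeak project, of a network that maps windows of ECoG features to mel-spectrogram frames as a regression problem; the channel count, architecture and L1 loss are illustrative only.

```python
# Illustrative sketch: frame-wise regression from ECoG features to mel frames,
# assuming paired (ECoG, speech) training data is available.
import torch
import torch.nn as nn

class ECoGDecoder(nn.Module):
    def __init__(self, n_channels=64, mel_dim=80, hidden=256):
        super().__init__()
        self.temporal = nn.Conv1d(n_channels, hidden, kernel_size=9, padding=4)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, mel_dim)

    def forward(self, ecog):                       # ecog: (B, T, n_channels)
        h = self.temporal(ecog.transpose(1, 2)).transpose(1, 2)
        h, _ = self.rnn(torch.relu(h))
        return self.out(h)                         # (B, T, mel_dim)

model = ECoGDecoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
ecog, mel = torch.randn(8, 100, 64), torch.randn(8, 100, 80)   # dummy paired batch
loss = nn.functional.l1_loss(model(ecog), mel)
loss.backward()
opt.step()
```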
6-16 | (2020-01-10) Pre-doc research contract (3 months, extendable to one year), University of the Basque Country, Leioa (Bizkaia), Spain. One pre-doc research contract (3 months, extendable to one year) is open for the study, development, integration and evaluation of machine learning software tools, and for the production of language resources for Automatic Speech Recognition (ASR) tasks.
Applications are welcome for one graduate (pre-doc) research contract for the study, development, integration and evaluation of machine learning software tools, and the production of language resources for ASR tasks. The contract will be funded by an Excellence Group Grant from the Government of the Basque Country. Initially, the contract is for 3 months but, if performance is satisfactory, it will be extended to at least one year (or even more, depending on the available budget), with a gross salary of around 30,000 euros/year. The workplace is located in the Faculty of Science and Technology (ZTF/FCT) of the University of the Basque Country (UPV/EHU) in Leioa (Bizkaia), Spain.
PROFILE: We seek graduate (pre-doc) candidates with a genuine interest in computer science and speech technology. Knowledge and skills in any (preferably all) of the following topics are required: machine learning (specifically deep learning), programming in Python, Java and/or C++, and signal processing. A master's degree in scientific and/or technological disciplines (especially computer science, artificial intelligence, machine learning and/or signal processing) will be highly valued. All candidates are expected to have excellent analysis and abstraction skills. Experience in, and interest in, dataset construction will also be a plus.
RESEARCH ENVIRONMENT: The Faculty of Science and Technology (ZTF/FCT) of the University of the Basque Country (https://www.ehu.eus/es/web/ztf-fct) is a very active and highly productive academic centre, with nearly 400 professors, around 350 pre-doc and post-doc researchers and more than 2500 students distributed across 9 degrees. The research work will be carried out at the Department of Electricity and Electronics of ZTF/FCT on the Leioa Campus of UPV/EHU. The research group hosting the contract (GTTS, http://gtts.ehu.es) has deep expertise in speech processing applications (ASR, speaker recognition, spoken language recognition, spoken term detection, etc.) and in language resource design and collection. If the candidate is interested in pursuing a research career, the contract would be compatible with master studies on the topics mentioned above, or even with a Ph.D. thesis project within our research group, and further financing options (grants, other projects) could be explored. The nearby city of Bilbao has become an international destination, with the Guggenheim Bilbao Museum as its main attraction. Still, though sparkling with visitors from around the world, Bilbao is a peaceful, very enjoyable medium-size city with plenty of services and leisure options, and mild weather, not as rainy as the evergreen hills surrounding the city might suggest.
APPLICATION: Applications including the candidate's CV and a letter of motivation (at most 1 page) explaining their interest in this position and how their education and skills fit the profile should be sent by e-mail, using the subject 'GTTS research contract APPLICATION ref. 1/2020', to Germán Bordel (german.bordel@ehu.eus) by Wednesday, January 29, 2020. The contract will start as soon as the position is filled.
6-17 | (2020-01-14) Post-doctoral fellow at Yamagishi-lab, National Institute of Informatics, Tokyo, Japan. Post-doctoral fellow at Yamagishi-lab, National Institute of Informatics, Japan.
6-18 | (2020-01-15) Head of AI (M/F), ZAION, Levallois, France

Zaion is now present in four European countries and has around thirty large enterprise customers. Its callbots handle, in production, an average of more than 15,000 calls per day. Based in Levallois, Zaion has a team of 35. Joining us means taking part in an exciting and innovative adventure within a dynamic team whose ambition is to become the reference on the conversational-robot market. To support this growth, we are recruiting our Head of AI (M/F). As manager of the R&D team, your role is strategic for the development and expansion of the company. You will work on AI solutions that automate the handling of telephone calls using natural language processing and the detection of emotions in the voice.
Main responsibilities:
- take part in the creation of ZAION's R&D unit and lead our AI and voice projects (detection of emotions in the voice)
- build, adapt and evolve our voice emotion detection services
- analyse large databases of conversations to extract the emotionally relevant ones
- build a database of conversations labelled with emotional tags
- train and evaluate machine learning models for emotion classification
- deploy your models in production
- continuously improve the voice emotion detection system
Required qualifications and experience:
- at least 5 years of experience as a Data Scientist / machine learning engineer applied to audio, and 2 years of team management experience
- an engineering or Master's degree in computer science, or a PhD in computer science or mathematics, with solid skills in signal processing (preferably audio)
- a solid theoretical background in machine learning and the relevant mathematical fields (clustering, classification, matrix factorization, Bayesian inference, deep learning, ...)
- experience deploying machine learning models in a production environment would be a plus
- command of one or more of the following: Python, machine learning / deep learning frameworks (PyTorch, TensorFlow, scikit-learn, Keras) and JavaScript
- command of audio signal processing techniques
- proven experience in labelling large databases (preferably audio) is essential
- your personality: a leader, autonomous, passionate about your work, able to run a team in project mode
- fluent English
Please send your application to: alegentil@zaion.ai
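Purely as an illustration of the basic emotion-classification loop outlined above (and not ZAION's actual pipeline), the sketch below extracts utterance-level acoustic features from labelled call excerpts and fits a simple classifier; the file names, labels and the MFCC/SVM choice are assumptions of the example.

```python
# Illustrative sketch: utterance-level MFCC statistics + SVM for emotion labels.
import numpy as np
import librosa
from sklearn.svm import SVC

def utterance_features(path):
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    # simple utterance-level statistics: per-coefficient mean and std
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

files = ["call_0001.wav", "call_0002.wav"]      # hypothetical labelled excerpts
labels = ["anger", "neutral"]
X = np.stack([utterance_features(f) for f in files])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:1]))                       # sanity check before moving to deeper models
```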
6-19 | (2020-01-16) 12-month postdoctoral research position, GIPSA-lab, Grenoble, France

12-month postdoctoral research position in machine learning for neural speech decoding.
Place: GIPSA-lab (CNRS/UGA/Grenoble-INP), in collaboration with the BrainTech laboratory (INSERM). Both laboratories are located on the same campus in Grenoble, France.
Team: CRISSP team at GIPSA-lab (Cognitive Robotics, Interactive Systems and Speech Processing).
Context: This position is part of the ANR (French National Research Agency) BrainSpeak project, which aims at developing a Brain-Computer Interface (BCI) for speech rehabilitation based on large-scale neural recordings. The post-doc will develop new machine learning algorithms to improve the conversion of neural signals into an intelligible acoustic speech signal.
Mission: Investigate deep learning approaches to map intracranial recordings (ECoG) to speech features (spectral, articulatory, or linguistic features). A particular focus will be put on 1) weakly or self-supervised training in order to deal with unlabeled, limited and sparse datasets, 2) introducing prior linguistic information for regularization (e.g. via a neural language model), and 3) online adaptation of the conversion model to cope with potential drift over time of the neural responses.
Requirements and profile:
• PhD in machine learning or signal/image/speech processing
• advanced knowledge of deep learning
• excellent programming skills (mostly Python)
• fluent in English
Duration: 12 months. Monthly salary (before tax): depending on experience. Starting date: early 2020.
How to apply: Send a cover letter, a resume, and references by email to:
• Dr. Thomas Hueber, thomas.hueber@gipsa-lab.fr
• Dr. Laurent Girin, laurent.girin@grenoble-inp.fr
• Dr. Blaise Yvert, blaise.yvert@inserm.fr
Applications will be processed as they arise.
6-20 | (2020-01-17) 30-month postdoctoral researcher, IRISA, Rennes, France. IRISA (France) is looking for a 30-month postdoctoral researcher on the topic of Natural Language Processing for Kids, starting in Spring 2020.
The project involves:
- prediction of age recommendations for texts
- generation of explanations
- generation of textual reformulations.
Partners are:
- linguists: MoDyCo lab
- computer scientists: IRISA lab, company Qwant, company Synapse Développement
- specialized journalists: French newspaper Libération.
More details: http://www.irisa.fr/en/page/30-month-postdoctoral-researcher-natural-language-processing-kids
6-21 | (2020-01-18) Fixed-term engineer position, IRIT, Toulouse, France. IRIT (Institut de Recherche en Informatique de Toulouse) is recruiting an engineer on a fixed-term contract for the collaborative project LinTO (PIA, Programme d'Investissements d'Avenir), a conversational assistant designed to operate in professional settings and to provide services related to the running of meetings. The position sits at the interface of the work carried out by the SAMoVA team (automatic speech and audio processing) and the MELODI team (natural language processing). The recruited engineer will work in close collaboration with the IRIT members already involved in the LinTO project, and will provide support both for the research activities and for the integration tasks that will eventually have to be carried out on the LinTO platform developed by LINAGORA, the company leading the project.
Practical information: Applications should be sent before 15 February 2020. Details of the offer: https://www.irit.fr/SAMOVA/site/assets/files/engineer/OFFRE_INGENIEUR_IRIT_LINTO.pdf. Salary: according to the current pay scale, depending on profile and experience.
6-22 | (2020-01-19) Research engineer, VocaliD, Belmont, MA, USA
Speech Research Engineer @ Voice AI Startup!
Location: Belmont, MA. Available: immediately.
VocaliD is a voice technology company that creates high-quality synthetic vocal identities for applications that speak. Founded on the mission to bring personalized voices to those who rely on voice prostheses, our ground-breaking innovation is now fueling the development of unique vocal personas for brands and organizations. VocaliD's unique voices foster social connection, build trust and enhance customer experiences. Grounded in decades of research in speech signal processing and leveraging the latest advances in machine learning and big data, we have attracted ~$4M in funding and garnered numerous awards including 2015 SXSW Innovation, 2019 Voice AI Visionary, and 2020 Best Healthcare Voice User Experience. Learn more about our origins by watching our founder Rupal Patel's TED Talk. We are seeking a speech research engineer with machine learning expertise to join our dynamic and ambitious team that is passionate about voice!
Responsibilities:
- Algorithm design and implementation
- Research advances in sequence-to-sequence models of speech synthesis
- Research advances in generative models of synthesis (autoregression, GAN, etc.)
- Implement machine learning techniques for speech processing and synthesis
- Design and implement natural language processing techniques into the synthesis flow
- Conduct systematic experiments to harden research methods for productization
- Work closely with engineers to implement and deliver the final product
- Present research findings in written publications and oral presentations
Required qualifications:
- MS or PhD in Electrical or Computer Engineering, Computer Science or a related field
- Experience programming in C/C++ and Python
- Experience with machine learning frameworks (TensorFlow, Keras, PyTorch, etc.)
- Experience with Windows and Linux
- Familiarity with cloud computing services (AWS, Google Cloud, Azure)
- Must communicate clearly and effectively
- Strong analytical and oral communication skills
- Excellent interpersonal and collaboration skills
Please submit a cover letter and resume to rupal@vocaliD.ai. Visit us at www.vocalid.ai for more information about VocaliD.
6-23 | (2020-01-20) Senior and junior researchers, LumenAI, France

The company
LumenAI is a start-up founded four years ago by academics and specialized in sequential unsupervised learning. Building on its R&D activity, it offers its clients complete scientific and technical support. The domains covered by its activity are, for now, predictive maintenance, cyber-security, document indexing and social network analysis. In parallel, LumenAI markets its own online clustering and visualization tools, grouped in the platform The Lady of the Lake.

Your missions
The tasks consist in taking part in client projects and in the evolution of our clustering library 'The Lady of the Lake'. In addition, 20% of the working time is devoted to a personal research project related to LumenAI's activities.
If the candidate has a senior profile, they will also be asked to take part in the management of client projects, to follow our data scientists at their clients' sites, and to help supervise a CIFRE PhD thesis (a supervisor is currently needed for a CIFRE thesis in Rennes).

Your profile

Other information and references
The position is based in our Paris offices.
Salary: between €45k and €60k gross per year, depending on experience.
Demonstration site of the platform The Lady of the Lake: https://lakelady.lumenai.fr/#/
LumenAI's values: http://www.lumenai.fr/join-us
6-24 | (2020-01-21) Engineer, 6-month contract, GIPSA-lab, Grenoble, France
General information: reference UMR5216-ALLBEL-017. Applications are submitted via this link: http://bit.ly/2MZUY68. Missions, main activities and required skills: see the original announcement at the link above.
Work context: GIPSA-lab is a joint research unit of the CNRS, Grenoble INP and Université Grenoble Alpes; it has agreements with Inria and the Observatoire des Sciences de l'Univers de Grenoble.
6-25 | (2020-02-09) Fully-funded PhD position at GIPSA-lab, Grenoble, France. Fully-funded PhD position at GIPSA-lab
(Grenoble, France) 'Automatic recognition and generation of cued speech using deep learning' in the context of the European project Comm4CHILD (http://comm4child.ulb.be) - Project description: http://comm4child.ulb.be/post/cnrs_gipsa_beautemps_esr10/gipsa_esr10/
6-26 | (2020-02-16) The Federal Criminal Police Office (BKA), Wiesbaden, Germany. The Federal Criminal Police Office (BKA) in Germany (Wiesbaden) is currently advertising a job in the field of forensic speaker recognition and audio analysis.
6-27 | (2020-02-20) Recruitment of PhD students, comm4CHILD project, Université libre de Bruxelles, Belgium

The Marie Sklodowska-Curie Innovative Training Network project (MSC ITN, grant agreement 860766) 'comm4CHILD' (directors: Cécile Colin & J. Leybaert, Université libre de Bruxelles) is recruiting 15 PhD students in different domains (biology, cognition, language) with the aim of optimizing the communication and social inclusion of children with hearing impairments. The host institutions are located in Belgium, Germany, France, England, Norway and Denmark.

To be eligible, candidates must not have resided, or carried out their main activity (work, studies), for more than 12 months in the country of the host institution during the 3 years preceding the start of the research contract. They must also hold a degree that allows them to start a PhD. Candidates who already hold a doctoral degree, or who have more than 4 years of full-time research experience, cannot apply.

In addition, a good command of French is desired for the following research projects, as they require interacting with French-speaking children with hearing impairments:

- Temporal course of auditory, labial, and manual signals in Cued Speech (CS) perception: an ERPs project
Host institution: Université libre de Bruxelles, Belgium (ESR 5); contact: ccolin@ulb.ac.be

- A cognitive analysis of spelling skills in children with cochlear implants from various linguistic backgrounds (sign language, cued speech)
Host institution: Université libre de Bruxelles, Belgium (ESR 13); contacts: leybaert@ulb.ac.be and fchetail@ulb.ac.be

- The phonological body: body movements to accompany linguistic development in deaf children
Host institution: Centre Comprendre et Parler, Brussels, Belgium
PhD enrolment at Université libre de Bruxelles, Belgium (ESR 15); contact: brigitte.charlier@ccpasbl.be

The application deadline is 15 March 2020. The projects start in September-October 2020 for a duration of 36 months.

For more information about the research projects and host institutions, the eligibility criteria, the salary and benefits, and the people to contact, please visit the project website: http://comm4child.ulb.be

We are committed to inclusive recruitment and strive to build a gender-balanced team of PhD students. People with disabilities, and particularly people with hearing impairments, are encouraged to apply.
6-28 | (2020-02-22) Fully funded PhD position in explainable deep learning methods, Uppsala University, Sweden. **Fully funded PhD position in explainable deep learning methods for human-human and human-robot interaction**, Department of Information Technology, Uppsala University.
Interested candidates should contact Prof. Ginevra Castellano by email (ginevra.castellano@it.uu.se) by Friday 13th of March at the latest to discuss the research project. Include the following documents in the email: - A CV, including list of publications (if any) and the names of two reference persons - Transcript of grades - A cover letter of maximum one page describing the scientific issues in the project that interest you and how your past experiences fit into the project
Summary of project’s topic Human-human interaction (HHI) relies on people’s ability to mutually understand each other, often by making use of multimodal implicit signals that are physiologically embedded in human behaviour and do not require the sender’s awareness. When we are engrossed in a conversation, we align with our partner: we unconsciously mimic each other, coordinate our behaviours and synchronize positive displays of emotion. This tremendously important skill, which spontaneously develops in HHI, is currently lacking in robots. This project aims at building on advances in deep learning, and in particular on the field of Explainable Artificial Intelligence (XAI), which offers approaches to increase the interpretability of the complex, highly nonlinear deep neural networks, to develop new machine learning-based methods that (1) automatically analyse and predict emotional alignment in HHI, and (2) bootstrap emotional alignment in human-robot interaction. More information about the project can be found here.
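As a toy illustration of what 'alignment' between interaction partners can mean in practice (an assumption of this summary, not the project's actual method), the sketch below measures the lagged correlation between two behavioural time series, for example frame-wise smile intensity of a child and of a robot; a strong positive peak at a small lag suggests that one partner's displays tend to follow the other's.

```python
# Illustrative sketch: find the lag at which two equal-length behavioural
# signals are most correlated. Assumes max_lag is much smaller than len(a).
import numpy as np

def lagged_alignment(a, b, max_lag=50):
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    best_lag, best_corr = 0, -np.inf
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            x, y = a[:len(a) - lag], b[lag:]      # b shifted earlier by `lag` frames
        else:
            x, y = a[-lag:], b[:len(b) + lag]     # a shifted earlier instead
        corr = float(np.mean(x * y))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return best_lag, best_corr

# Example: lagged_alignment(smile_child, smile_robot) on two 25 Hz intensity tracks.
```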
Requirements The ideal PhD candidate is a student with an MSc in Computer Science, Machine Learning, Artificial Intelligence, Robotics or related field with a broad mathematical knowledge as well as technical and programming skills. The components to be studied build on a number of mathematical techniques and the methods development involved in the project will require good command of the related areas; central are mathematical optimization and probability theory. Experience and/or interest in the social sciences are also required. See further eligibility requirements here.
Further information The project is a collaboration between the Uppsala Social Robotics Lab (Prof. Ginevra Castellano) and the MIDA (Methods for Image Data Analysis) group (Dr. Joakim Lindblad) at the Department of Information Technology, and the Uppsala Child and Baby Lab (Prof. Gustaf Gredebäck) at the Department of Psychology of Uppsala University. The student will be part of the Uppsala Social Robotics Lab at the Division of Visual Information and Interaction of the Department of Information Technology, and contribute to lab’s projects on the topic of co-adaptation in human-robot interactions. The student will also join the Graduate school of the Centre for Interdisciplinary Mathematics (CIM). The Uppsala Social Robotics Lab’s focus is on natural interaction with social artefacts such as robots and embodied virtual agents. This domain concerns bringing together multidisciplinary expertise to address new challenges in the area of social robotics, including mutual human-robot co-adaptation, multimodal multiparty natural interaction with social robots, multimodal human affect and social behaviour recognition, multimodal expression generation, robot learning from users, behaviour personalization, effects of embodiment (physical robot versus embodied virtual agent) and other fundamental aspects of human-robot interaction (HRI). State of the art robots are used, including the Pepper, Nao and Furhat robotic platforms. The fully funded PhD position is for four years.
-- Dr. Ginevra Castellano Professor Director, Uppsala Social Robotics Lab Department of Information Technology Uppsala University Box 337, 751 05 Uppsala, Sweden Webpage: http://user.it.uu.se/~ginca820/
6-29 | (2020-02-29) Web developer at ELDA, Paris, France. The European Language Resources Distribution Agency (ELDA), a company specialised in Human Language Technologies within an international context, is currently seeking to fill an immediate vacancy for a permanent Web Developer position.
6-30 | (2020-03-03) 15 early-stage researcher positions available within the COBRA Marie Sklodowska-Curie Innovative Training Network, Berlin, Germany

A call for applications is open for 15 three-year contracts offered to early-stage researchers (ESRs) wishing to enrol as PhD students in the framework of the Conversational Brains (COBRA) project. COBRA is a Marie Sklodowska-Curie Innovative Training Network funded by the European Commission within the Horizon 2020 programme. It aims to train ESRs to accurately characterize and model the linguistic, cognitive and brain mechanisms that allow conversation to unfold in both human-human and human-machine interactions. The network comprises ten academic research centers on language, cognition and the human brain, and four industrial partners in web-based speech technology, conversational agents and social robots, in ten countries. The partners' combined expertise and high complementarity will allow COBRA to offer ESRs an excellent training programme as well as very strong exposure to the non-academic sector.

Deadline for submission of applications: 31 March 2020. All information is available here: https://www.cobra-network.eu/

LIST OF AVAILABLE POSITIONS
ESR1: Categorization of speech sounds as a collective decision process
ESR2: Brain markers of between-speaker convergence in conversational speech
ESR3: Does prediction drive neural alignment in conversation?
ESR4: Brain indexes of semantic and pragmatic prediction
ESR5: Communicative alignment at the physiological level
ESR6: Alignment in human-machine spoken interaction
ESR7: Contribution of discourse markers to alignment in conversation
ESR8: Discourse units and discourse alignment
ESR9: Acoustic-phonetic alignment in synthetic speech
ESR10: Phonetic alignment in a non-native language
ESR11: Conversation coordination and mind-reading
ESR12: The influence of alignment
ESR13: Parametric dialogue synthesis: from separate speakers to conversational interaction
ESR14: Gender and vocal alignment in speakers and robots
ESR15: Endowing robots with high-level conversational skills
6-31 | (2020-03-07) Postdoc researcher in deep neural networks, University of Glasgow, UK (UPDATED)
APPLICATION DEADLINE EXTENDED TO 13 APRIL
The School of Computing Science at the University of Glasgow is looking for an excellent and enthusiastic researcher to join the ESRC-funded international collaborative project 'Using AI-Enhanced Social Robots to Improve Children's Healthcare Experiences.' This is a new 3-year project which aims to investigate how a social robot can help children cope with potentially painful experiences in a healthcare setting. The system developed in the project will be tested through a hospital-based clinical trial at the end of the project. In Glasgow, we are looking for a researcher with expertise in applying deep neural network models to the automated analysis of multimodal human behaviour, ideally along with experience integrating such systems into an end-to-end interactive system.
You will be working together with Dr. Mary Ellen Foster in the Glasgow Interactive Systems Section (GIST); you will collaborate closely with Dr. Ron Petrick and his team from the Edinburgh Centre for Robotics at Heriot-Watt University, and will also collaborate with medical and social science researchers at several Canadian universities including University of Alberta, University of Toronto, Ryerson University, McMaster University, and Dalhousie University.
GIST provides an ideal ground for academic growth. It is the leader of a recently awarded Centre for Doctoral Training that is providing 50 PhD scholarships in the next five years in the area of socially intelligent artificial intelligence. In addition, its 7 faculty members have accumulated more than 25,000 Scholar citations and have been or are leading large-scale national and European projects (including the ERC Advanced Grant 'Viajero', the Network Plus grant 'Human Data Interaction', the FET-Open project 'Levitate', and the H2020 project MuMMER) for a total of over £20M in the last 10 years.
The post is full time with funding up to 27 months in the first instance.
For more information and to apply online, please see https://www.jobs.ac.uk/job/BYX734/research-associate.
Please email MaryEllen.Foster@glasgow.ac.uk with any informal enquiries.
It is the University of Glasgow’s mission to foster an inclusive climate, which ensures equality in our working, learning, research and teaching environment.
We strongly endorse the principles of Athena SWAN, including a supportive and flexible working environment, with commitment from all levels of the organisation in promoting gender equality.
| ||||||||||
6-32 | (2020-03-10) Fully-funded PhD position at GIPSA-lab, Grenoble, France
| ||||||||||
6-33 | (2020-03-15) Postdoc, Fondazione Bruno Kessler, Trento, Italy Call for Post-doc Position
Università Politecnica delle Marche, Fondazione Bruno Kessler and PerVoice SpA, the partners of the AGEVOLA project funded by Fondazione Caritro, are seeking an enthusiastic post-doc researcher to work on advanced machine-learning-based solutions for speech enhancement and speaker diarization in call-center communications.
Prerequisites:
• PhD in Electronics Engineering, Telecommunications Engineering or Information Engineering, or alternatively in Mathematics or Physics
• Research experience in digital audio processing and machine learning; a solid background in speech processing is appreciated
• Programming knowledge: Python, C/C++
• Competence in setting up and using a Unix/Linux software environment, and experience with Python libraries for machine learning (such as PyTorch and TensorFlow)
Duration and starting date: 21 months, starting from Summer 2020.
Work location: Università Politecnica delle Marche, Ancona, Italy. The researcher will cooperate closely with the company PerVoice S.p.A. and Fondazione Bruno Kessler, both located in Trento, Italy; therefore, some working time will be spent in Trento. The possibility to work remotely is also foreseen.
Gross salary: 63,000 Euro for the entire contract duration.
Contacts:
• Stefano Squartini - Università Politecnica delle Marche - s.squartini@univpm.it
• Alessio Brutti - Fondazione Bruno Kessler - brutti@fbk.eu
• Leonardo Badino - PerVoice S.p.A - leonardo.badino@pervoice.it
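For orientation only (not part of the announcement above): a minimal sketch of the kind of mask-based speech enhancement model that such a project might prototype in PyTorch, one of the libraries listed in the prerequisites. The network shape, feature dimension (257 STFT bins), layer sizes and training loss are illustrative assumptions, not project specifications.

# Minimal sketch of a mask-based speech enhancement model in PyTorch.
# All architectural choices (feature size, layer sizes, loss) are illustrative assumptions.
import torch
import torch.nn as nn

class MaskEstimator(nn.Module):
    def __init__(self, n_freq=257, hidden=256):
        super().__init__()
        self.blstm = nn.LSTM(n_freq, hidden, num_layers=2,
                             batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, n_freq)

    def forward(self, noisy_mag):           # noisy magnitude spectrogram: (batch, frames, n_freq)
        h, _ = self.blstm(noisy_mag)
        mask = torch.sigmoid(self.proj(h))  # per time-frequency-bin mask in [0, 1]
        return mask * noisy_mag             # enhanced magnitude spectrogram

if __name__ == "__main__":
    model = MaskEstimator()
    noisy = torch.rand(4, 100, 257)         # toy batch: 4 utterances, 100 frames
    clean = torch.rand(4, 100, 257)
    loss = nn.functional.mse_loss(model(noisy), clean)
    loss.backward()
    print(loss.item())

A real pipeline would of course add an STFT front end, phase handling and diarization-aware training, which is exactly the kind of work the position describes.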
| ||||||||||
6-34 | (2020-03-25) Several ATER positions at the Faculté des Lettres, Sorbonne Université, Paris, France The Faculté des Lettres of Sorbonne Université is opening several ATER (temporary teaching and research assistant) positions through competitive recruitment.
| ||||||||||
6-35 | (2020-04-02) Fully funded PhD position, Idiap Research Institute, Martigny, Switzerland There is a fully funded PhD position open at Idiap Research Institute on Low Level
| ||||||||||
6-36 | (2020-04-15) Professor (W3) on AI for Language Technologies, Germany The Institute of Anthropomatics and Robotics within the Division 2 - Informatics, Economics, and Society - is seeking to fill, as soon as possible, the position of a
Professor (W3) on AI for Language Technologies
The research area of the professorship covers methods of artificial intelligence, especially machine learning, for the realization of intelligent systems for human-machine interaction and their evaluation in real-world applications. Possible research topics include natural language understanding, spoken language translation, automatic speech recognition, interactive and incremental learning from heterogeneous data, and the extraction of semantics from image, text and speech data. The professorship contributes to the 'Robotics and Cognitive Systems' focus of the KIT Department of Informatics.
In teaching, the professorship contributes to the education of students of the KIT Department of Informatics, among others in the fields of natural language understanding, neural networks, machine learning, and cognitive systems. Participation in undergraduate teaching of basic courses in computer science in the German language is expected, especially in the field of artificial intelligence. A transition period for acquiring German language skills is provided.
Active participation in interACT (International Center for Advanced Communication Technologies) is desired. Experience in the development of innovations in application fields of artificial intelligence, e.g. with industrial cooperation partners, is advantageous.
We are looking for a candidate with outstanding scientific qualifications and an outstanding international reputation in the above-mentioned field of research. Skills in the acquisition of third-party funding and the management of scientific working groups are expected, as well as very good didactic skills both in basic computer science lectures and in in-depth courses on subjects within the research area of the professorship.
Active participation in the academic tasks of the KIT Department of Informatics and in the self-administration of KIT in Division II is expected as well as participation in the KIT Center Information - Systems - Technologies.
According to § 47 of the Baden-Wuerttemberg University Act (Landeshochschulgesetz des Landes Baden-Württemberg), a university degree, teaching aptitude and exceptional competence in scientific work are required.
KIT is an equal opportunity employer. Women are especially encouraged to apply. Applicants with disabilities will be given preferential consideration if equally qualified. The KIT is certified as a family-friendly university and offers part-time employment, leaves of absence, a Dual Career Service and coaching to actively promote work-life-balance.
Applications with the required documents (curriculum vitae, degree certificates as well as a list of publications) and a perspective paper (maximum of three pages) should be sent by e-mail, preferably compiled into a single PDF document, to dekanat@informatik.kit.edu by 03.05.2020. For enquiries regarding this specific position please contact Professor Dr. Tamim Asfour, e-mail: asfour@kit.edu .
Links:
Karlsruhe Institute of Technology
Institute for Anthropomatics and Robotics
KIT Center Information, Systems, Technologies (KCIST)
| ||||||||||
6-37 | (2020-04-24) PROFESSORSHIP (junior/senior) @ KU Leuven, Belgium: EMBODIED LEARNING MACHINES
| ||||||||||
6-38 | (2020-04-22) 3 research associate positions at Heriot-Watt University, Edinburgh, UK The Interaction Lab at Heriot-Watt University, Edinburgh, seeks to fill 3 research associate positions in Conversational AI and NLP within the following research areas:
1. Response Generation for Conversational AI
2. Abuse Detection and Mitigation for Conversational Agents
3. Controllable Response Generation
Closing date: 1st June
Start date: 1st September 2020 (negotiable)
Salary range: £26,715 - £30,942 (Grade 6 without PhD degree) or £32,817 - £40,322 (Grade 7 with PhD degree)
The positions are associated with the EPSRC-funded projects 'Designing Conversational Assistants to Reduce Gender Bias' (positions 1 & 2) and 'AISEC: AI Secure and Explainable by Construction' (position 3), which are both in collaboration with the University of Edinburgh and other academic and industrial partners, including the University of Glasgow, the University of Strathclyde, the Scottish Government, NEC Labs Europe, Huggingface, and the BBC.
For informal enquiries please contact Prof. Verena Rieser <v.t.rieser@hw.ac.uk>. Founded in 1821, Heriot-Watt is a leader in ideas and solutions. With campuses and students across the entire globe we span the world, delivering innovation and educational excellence in business, engineering, design and the physical, social and life sciences.
| ||||||||||
6-39 | (2020-04-20) Fully-funded PhD studentships in Speech and Language Technologies at the University of Sheffield,UK UKRI Centre for Doctoral Training (CDT) in Speech and Language Technologies (SLT) and their Applications
Department of Computer Science
Faculty of Engineering
University of Sheffield, UK
Fully-funded 4-year PhD studentships for research in Speech and Language Technologies (SLT) and their Applications
** Applications now open for last remaining September 2020 intake places **
Deadline for applications: 31 May 2020.
Speech and Language Technologies (SLTs) are a range of Artificial Intelligence (AI) approaches which allow computer programs or electronic devices to analyse, produce, modify or respond to human texts and speech. SLTs are underpinned by a number of fundamental research fields including natural language processing, speech processing, computational linguistics, mathematics, machine learning, physics, psychology, computer science, and acoustics. SLTs are now established as core scientific/engineering disciplines within AI and have grown into a world-wide multi-billion dollar industry.
Located in the Department of Computer Science at the University of Sheffield, a world-leading research institution in the SLT field, the UKRI Centre for Doctoral Training (CDT) in Speech and Language Technologies and their Applications is a vibrant research centre that also provides training in engineering skills, leadership, ethics, innovation, entrepreneurship, and responsibility to society.
Apply now: https://slt-cdt.ac.uk/apply/
The benefits:
About you:
We are looking for students from a wide range of backgrounds interested in Speech and Language Technologies.
Applying:
Applications are now sought for the September 2020 intake. The deadline is 31 May 2020.
Applications will be reviewed within 6 weeks of the deadline and short-listed applicants will be invited to interview. Interviews will be held in Sheffield or via videoconference.
See our website for full details and guidance on how to apply: https://slt-cdt.ac.uk
For an informal discussion about your application please contact us by email at: sltcdt-enquiries@sheffield.ac.uk
By replying to this email or contacting sltcdt-enquiries@sheffield.ac.uk you consent to being contacted by the University of Sheffield in relation to the CDT. You are free to withdraw your permission in writing at any time.
| ||||||||||
6-40 | (2020-04-28) Fully-funded PhD in speech synthesis - University of Grenoble-Alps - France Fully-funded PhD in speech synthesis - University of Grenoble-Alps - France
Funding: THERADIA project funded by BPI-France with industrial partners (SBT, ATOS, Pertimm), providing a full salary for 3 years (2,135€ gross monthly) and a generous package for travel and other costs.
Deadline: applications will be considered on an ongoing basis until the position is filled.
Full details on the topics and how to apply: https://bit.ly/3cW1gy9
| ||||||||||
6-41 | (2020-04-30) Professorship in Computer Science (m/f/d), Karlsruhe, Germany Offer in English: https://euraxess.ec.europa.eu/jobs/517917 The dual higher-education programme with a future. Professorship in Computer Science (m/f/d), salary grade W2, reference number: KA-5/111
The duties include teaching, applied research and continuing education in the Computer Science degree programme.
| ||||||||||
6-42 | (2020-04-25) PhD at Univ. Avignon, France We are looking for motivated candidates for a PhD thesis in
| ||||||||||
6-43 | (2020-04-28) 1-year post-doc/engineer position at LIA, Avignon, France One-year post-doc/engineer position at LIA, Avignon, France, in the Vocal
| ||||||||||
6-44 | (2020-05-10) Researcher at GIPSA-Lab, Grenoble, France The CRISSP team (Cognitive Robotics, Interactive Systems & Speech Robotics) at GIPSA-Lab is looking for a motivated candidate to work on speech synthesis applied to embodied face-to-face interaction. The candidate should have skills in machine learning.
The work is part of the THERADIA project, funded by BPI-France and carried out in partnership with research laboratories (EMC, LIG) and industrial partners (SBT, ATOS, Pertimm).
Applications will be reviewed on a rolling basis until the position is filled.
Full details on the topics and how to apply: https://bit.ly/3cW1gy9
Contact: gerard.bailly@gipsa-lab.fr
| ||||||||||
6-45 | (2020-05-11) Tenure-track researcher at CWI, Amsterdam, The Netherlands We have an open position for a tenure-track researcher at CWI (https://www.cwi.nl/) within our Distributed & Interactive Systems (DIS) group (https://www.dis.cwi.nl/).
| ||||||||||
6-46 | (2020-05-12) Fully-funded 4-year PhD studentships for research in Speech and Language Technologies (SLT) and their Applications , UKRI Centre for doctoral training, Sheffield, UK UKRI Centre for Doctoral Training (CDT) in Speech and Language Technologies (SLT) and their Applications Department of Computer Science Faculty of Engineering University of Sheffield
Fully-funded 4-year PhD studentships for research in Speech and Language Technologies (SLT) and their Applications ** Applications now open for last remaining September 2020 intake places ** Deadline for applications: 31 May 2020. What makes the SLT CDT different:
The benefits:
About you: We are looking for students from a wide range of backgrounds interested in Speech and Language Technologies.
Applying: Applications are now sought for the September 2020 intake.
We operate a staged admissions process, with application deadlines throughout the year. The final deadline for applications for the remaining places is 31 May 2020.
Applications will be reviewed within 6 weeks of each deadline and short-listed applicants will be invited to interview. Interviews will be held in Sheffield. In some cases, because of the high volume of applications we receive, we may need more time to assess your application. If this is the case, we will let you know if we intend to do this.
See our website for full details and guidance on how to apply: slt-cdt.ac.uk For an informal discussion about your application please contact us by email at: sltcdt-enquiries@sheffield.ac.uk By replying to this email or contacting sltcdt-enquiries@sheffield.ac.uk you consent to being contacted by the University of Sheffield in relation to the CDT. You are free to withdraw your permission in writing at any time.
| ||||||||||
6-47 | (2020-05-20) University assistant position, Johannes Kepler University, Linz, Austria We are happy to announce a position as a university assistant at the Institute of
| ||||||||||
6-48 | (2020-05-26) Fully funded PhD position in data-driven socially assistive robotics, Uppsala University, Sweden Fully funded PhD position in data-driven socially assistive robotics, Uppsala Social Robotics Lab, Department of Information Technology, Uppsala University, Sweden
Uppsala University is a comprehensive research-intensive university with a strong international standing. Our mission is to pursue top-quality research and education and to interact constructively with society. Our most important assets are all the individuals whose curiosity and dedication make Uppsala University one of Sweden’s most exciting workplaces. Uppsala University has 46,000 students, 7,300 employees and a turnover of SEK 7.3 billion. The Department of Information Technology holds a leading position in research as well as teaching at all levels. The department has 280 employees, including 120 faculty, 110 PhD students, and 30 research groups. More than 4,000 students are enrolled annually. The Uppsala Social Robotics Lab (https://usr-lab.com) led by Prof. Ginevra Castellano aims to design and develop robots that learn to interact socially with humans and bring benefits to the society we live in, for example in application areas such as education and assistive technology.
We are collecting expressions of interest for an upcoming PhD position in data-driven socially assistive robotics for medical applications within a project funded by Uppsala University’s WoMHeR (Women’s Mental Health during the Reproductive lifespan) Centre, in collaboration with the Department of Neuroscience.
The PhD project will include the development and evaluation of novel machine learning-based methods for robot-assisted diagnosis of women’s depression around childbirth via automatic analysis of multimodal user behaviour in interactive scenarios.
The student will be part of the Uppsala Social Robotics Lab at the Division of Visual Information and Interaction of the Department of Information Technology. The Uppsala Social Robotics Lab’s focus is on natural interaction with social artefacts such as robots and embodied virtual agents. This domain concerns bringing together multidisciplinary expertise to address new challenges in the area of social robotics, including mutual human-robot co-adaptation, multimodal multiparty natural interaction with social robots, multimodal human affect and social behavior recognition, multimodal expression generation, robot learning from users, behavior personalization, effects of embodiment (physical robot versus embodied virtual agent) and other fundamental aspects of human-robot interaction (HRI). State of the art robots are used, including the Pepper, Nao and Furhat robotic platforms.
The position is for four years. Rules governing PhD students are set out in the Higher Education Ordinance chapter 5, §§ 1-7 and in Uppsala University's rules and guidelines http://regler.uu.se/?languageId=1.
How to send expressions of interest: To express your interest, you should send to Ginevra Castellano (ginevra.castellano@it.uu.se) by the 10th of June a description of yourself, your research interests, reasons for applying for this particular PhD position and past experience (max. 3 pages), a CV, copies of relevant university degrees and transcripts, links to relevant publications and your MSc thesis (or a summary in case the thesis work is ongoing) and other relevant documents. Candidates are encouraged to provide contact information to up to 3 reference persons. We would also like to know your earliest possible date for starting.
Requirements: The candidates must have an MSc degree in computer science or related areas relevant to the PhD topic. Good programming skills are required and expertise in machine learning is appreciated. The PhD position is highly interdisciplinary and requires an understanding of and/or interest in psychology and social sciences, and a willingness to work in an interdisciplinary team.
Working in Sweden: Sweden is a fantastic place for living and working. Swedes are friendly and speak excellent English. The quality of life is high, with a strong emphasis on outdoor activities. The Swedish working climate emphasizes an open atmosphere, with active discussions involving both junior and senior staff. PhD students are full employees, with competitive salaries, pension provision and five weeks of paid leave per year. Spouses of employees are entitled to work permits. Healthcare is free after a small co-pay and the university subsidizes athletic costs, such as a gym membership. The parental benefits in Sweden are among the best in the world, including extensive parental leave (for both parents), paid time off to care for sick children, and affordable daycare. Upon completion of the PhD degree, students are entitled to permanent residency to find employment within Sweden.
| ||||||||||
6-49 | (2020-06-01) Funded PhD position at Université Grenoble Alpes, France Université Grenoble Alpes is recruiting a fully funded doctoral student (3 years) at
| ||||||||||
6-50 | (2020-06-10) 2 post-doc positions at UT Dallas, Texas, USA POST-DOCTORAL POSITION #1 Center for Robust Speech Systems: Robust Speech Technologies Lab
Developing robust speech and language technologies (SLT) for naturalistic audio is among the most challenging problems in machine learning. CRSS-RSTL stands at the forefront of this effort by making available the largest (150,000 hours) publicly available naturalistic corpus in the world. The FEARLESS STEPS corpus is a collection of multi-speaker, time-synchronized, multi-channel audio from all of NASA’s 12 Apollo manned missions. Deploying such an ambitious corpus requires state-of-the-art support infrastructure in which multiple technologies work synchronously to provide meaningful information to researchers from the science, technology, historical-archive, and educational communities. To this end, we are seeking a post-doctoral researcher in the area of speech and language processing and machine learning. The researcher will collaboratively aid in the development of speech, natural language, and spoken dialog systems for noisy multi-channel audio streams. Overseeing the digitization of analog tapes, community outreach and engagement, and assisting in cutting-edge SLT research are also important tasks for the project. Those interested should send an email with their resume and areas of interest to John.Hansen@utdallas.edu. More information can be found on our website: CRSS–RSTLab (Robust Speech Technologies Lab) at https://crss.utdallas.edu/
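As a purely illustrative aside (not taken from the announcement): processing this scale of noisy, multi-channel audio typically starts with a cheap first-pass speech activity detector before any heavier SLT stage. The sketch below shows a naive energy-based detector in Python; the file name, frame sizes and decision threshold are hypothetical assumptions, not project parameters.

# Illustrative only: a naive energy-based speech activity detector of the sort one
# might use as a first-pass segmenter on long naturalistic recordings.
# File name, frame sizes and threshold are assumptions, not project specifics.
import numpy as np
from scipy.io import wavfile

def energy_vad(wav_path, frame_ms=30, hop_ms=10, threshold_db=-35.0):
    sr, audio = wavfile.read(wav_path)
    if audio.ndim > 1:                       # mix down multi-channel audio
        audio = audio.mean(axis=1)
    audio = audio.astype(np.float64)
    audio /= (np.abs(audio).max() + 1e-9)    # peak-normalize
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    decisions = []
    for start in range(0, len(audio) - frame, hop):
        seg = audio[start:start + frame]
        db = 10.0 * np.log10(np.mean(seg ** 2) + 1e-12)   # frame energy in dB
        decisions.append((start / sr, db > threshold_db)) # (time in s, is_speech)
    return decisions

if __name__ == "__main__":
    decisions = energy_vad("apollo_channel_01.wav")       # hypothetical file name
    print(sum(flag for _, flag in decisions), "speech frames detected")

In practice such a detector would only seed more robust neural segmentation and diarization models, but it conveys the kind of infrastructure work the position involves.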
POST-DOCTORAL POSITION #2 Center for Robust Speech Systems: Cochlear Implant Processing Lab
Cochlear implants are one of the most successful solutions for restoring hearing sensation via an electronic device. However, the search for better sound-coding and electrical-stimulation strategies could be significantly accelerated by a flexible, powerful, portable speech processor for cochlear implants that is compatible with current smartphones/tablets. We are developing CCi-MOBILE, the next generation of such a research platform, one that will be more flexible and computationally powerful than clinical research devices and will enable implementation and long-term evaluation of advanced signal processing algorithms in naturalistic and diverse acoustic environments. To this end, we are seeking a post-doctoral researcher in the area of cochlear implant signal processing and embedded hardware/systems design. The researcher will collaboratively aid in the development of embedded (FPGA-based) hardware (PCBs) for speech processing applications. Firmware development in Verilog and Java (Android) for the implementation of DSP algorithms is also an important task for the project. Those interested should send an email with their resume and areas of interest to John.Hansen@utdallas.edu. More information can be found on our website: CRSS–CILab (Cochlear Implant Processing Lab) at https://crss.utdallas.edu/CILab/
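For context only (this is not the CCi-MOBILE implementation): most cochlear implant sound-coding strategies of the CIS family reduce, at their core, to band-pass filtering, rectification and low-pass envelope extraction per electrode channel. The Python/SciPy sketch below illustrates that chain; the channel count, band edges and filter orders are assumptions chosen for illustration.

# Illustrative sketch (not the CCi-MOBILE firmware): CIS-style channel envelope
# extraction, the core of many cochlear implant sound-coding strategies.
# Band edges, filter orders and channel count are assumptions.
import numpy as np
from scipy.signal import butter, sosfilt

def cis_envelopes(audio, sr, n_channels=8, f_lo=200.0, f_hi=7000.0, env_cut=200.0):
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)   # log-spaced analysis bands
    env_sos = butter(2, env_cut, btype="low", fs=sr, output="sos")
    envelopes = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
        band = sosfilt(band_sos, audio)          # band-pass filter the signal
        env = sosfilt(env_sos, np.abs(band))     # rectify + low-pass -> channel envelope
        envelopes.append(env)
    return np.stack(envelopes)                   # shape: (n_channels, n_samples)

if __name__ == "__main__":
    sr = 16000
    t = np.arange(sr) / sr
    test_tone = np.sin(2 * np.pi * 1000 * t)     # 1 kHz test signal
    env = cis_envelopes(test_tone, sr)
    print(env.shape, env.max(axis=1).round(3))

On the actual platform, the same processing chain would be realized in fixed-point firmware (Verilog/Java) and mapped to electrode stimulation, which is where the advertised hardware and DSP expertise comes in.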
|