ISCApad #290 | Saturday, August 06, 2022 | by Chris Wellekens
6-1 | (2022-03-17) PhD position in multimodal deep fake detection, IRISA, Lannion, France The EXPRESSION team at IRISA invites applications for a doctoral position in the field of multimodal deep fake detection.
Details of the offer are available here: MUDEEFA - MUltimodal DeEEp Fake detection using Text-To-Speech Synthesis, Voice Conversion and Lips Reading. Applicants must hold a Master's degree in computer science, or an engineering degree conferring the title of Master in computer science. The thesis will take place in Lannion, in the Côtes d'Armor, within the EXPRESSION team. Please send a detailed CV, a cover letter, one or more reference letters, and the academic results of the previous degree (Master's, or engineering degree conferring the title of Master) to all the contacts listed in the thesis subject before Friday 8 April 2022 (strict deadline). Best regards, Arnaud Delhay-Lorrain
Arnaud Delhay-Lorrain - Associate Professor
IRISA - Université de Rennes 1
IUT de Lannion - Département Informatique
Rue Edouard Branly - BP 30219
F-22 302 LANNION Cedex
6-2 | (2022-03-18) 3 speech-to-speech translation positions available at Meta/Facebook FAIR We are seeking research scientists, research engineers and postdoctoral researchers with expertise in speech translation and related fields to join our team.
FAIR's mission is to advance the state of the art in artificial intelligence through open research for the benefit of all. As part of this mission, our goal is to provide real-time, natural-sounding translations at near-human quality. The technology we develop will enable multilingual live communication. We aim for our technology to be inclusive: it should support both written and unwritten languages. Finally, in order to preserve the authenticity of the original content, especially for more creative content, we aim to preserve non-lexical elements in the generated audio translations. Ideal candidates will have expertise in speech translation or related fields such as speech recognition, machine translation or speech synthesis. Please send an email with a CV to juancarabina@fb.com if you are interested in applying.
6-3 | (2022-03-21) PhD or postdoc position at the Laboratoire d'Informatique de Grenoble, France PhD or postdoctoral position within the Popcorn project (a collaborative project with two companies),
supervised by Benjamin Lecouteux, Gilles Sérasset and Didier Schwab (Laboratoire d'Informatique de Grenoble, Groupe d'Étude en Traduction Automatique/Traitement Automatisé des Langues et de la Parole)
Title: Operational population of knowledge bases with neural networks (Peuplement OPérationnel de bases de COnnaissances et Réseaux Neuronaux)
The project addresses the semi-automated enrichment of a knowledge base through the automatic analysis of texts. In order to achieve a breakthrough innovation in Natural Language Processing (NLP) for security and defence customers, the project focuses on the processing of French (although the chosen approaches should later generalise to other languages). The work will cover several aspects:
● Automatic annotation of text documents by detecting mentions of entities present in the knowledge base and disambiguating them semantically (polysemy, homonymy);
● Discovery of new entities (people, organisations, equipment, events, places), of their attributes (a person's age, an equipment reference number, etc.), and of relations between entities (a person works for an organisation, people involved in an event, ...). Particular attention will be paid to adapting flexibly to changes in the ontology, and to the role of the user and the analyst in validating and capitalising on the extracted information.
The project focuses on the following three research axes:
● Generation of synthetic textual data from reference texts;
● Recognition of entities of interest, of their associated attributes, and of relations between entities;
● Semantic disambiguation of entities (e.g. in the case of homonymy).
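As an illustration, the mention-detection and disambiguation steps described above can be sketched as a dictionary lookup followed by a context-overlap heuristic. The knowledge-base entries, identifiers and the overlap rule below are purely hypothetical, not part of the project:

```python
# Toy knowledge base: one ambiguous mention with two candidate senses.
# All entries and identifiers are hypothetical.
KB = {
    "mercury": [
        {"id": "ent_planet", "context": {"orbit", "sun", "planet"}},
        {"id": "ent_metal", "context": {"metal", "liquid", "toxic"}},
    ],
}

def link_mentions(sentence):
    """Detect KB mentions in a sentence, then keep the sense whose
    context words overlap the sentence the most."""
    words = set(sentence.lower().split())
    links = {}
    for mention, senses in KB.items():
        if mention in words:
            best = max(senses, key=lambda s: len(s["context"] & words))
            links[mention] = best["id"]
    return links

result = link_mentions("Mercury is a toxic liquid metal")
print(result)  # {'mercury': 'ent_metal'}
```

In the project itself these steps would of course rely on neural models rather than word overlap; the sketch only fixes the interface between detection and disambiguation.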
Candidate profile:
- Solid experience in programming and machine learning for Natural Language Processing (NLP), in particular deep learning
- Master's degree or PhD in machine learning or computer science; a background in NLP or computational linguistics is an appreciated plus
- Good knowledge of French
Practical details:
- The PhD starts in autumn 2022
- Full-time doctoral contract at LIG (GETALP team) for 3 years (salary: min. €1768 gross per month)
- or full-time postdoctoral contract at LIG (GETALP team) for 20 months (salary: min. €2395 gross per month)
Scientific environment:
How to apply?
Applications should include a CV, a cover/motivation letter, Master's transcripts and letter(s) of recommendation, and be sent to Benjamin Lecouteux (benjamin.lecouteux@univ-grenoble-alpes.fr), Gilles Sérasset (gilles.serasset@univ-grenoble-alpes.fr) and Didier Schwab (Didier.Schwab@univ-grenoble-alpes.fr).
6-4 | (2022-04-04) PhD position at INRIA-LORIA, Nancy, France 2022-04676 - PhD Position F/M: Non-Gaussian models for deep-learning-based audio signal processing. Level of qualifications required: graduate degree or equivalent. Function: PhD position.
Context
The PhD student will join the Multispeech team of Inria, the largest French research group in the field of speech processing. He/she will benefit from the research environment and the expertise in audio signal processing and machine learning of the team, which includes many researchers, PhD students, post-docs, and software engineers working in this field. He/she will be supervised by Emmanuel Vincent (Senior Researcher, Inria) and Paul Magron (Researcher, Inria).
Assignment
Audio signal processing and machine listening systems have achieved considerable progress over the past years, notably thanks to the advent of deep learning. Such systems usually process a time-frequency representation of the data, such as a magnitude spectrogram, and model its structure using a deep neural network (DNN). Generally speaking, these systems implicitly rely on the local Gaussian model [1], an elementary statistical model for the data. Even though it is convenient to manipulate, this model builds upon several hypotheses which are limiting in practice: (i) circular symmetry, which boils down to discarding the phase information (the argument of the complex-valued time-frequency coefficients); (ii) independence of the coefficients, which ignores the inherent structure of audio signals (temporal dynamics, frequency dependencies); and (iii) Gaussian density, which is not observed in practice. Statistical audio signal modeling is an active research field. However, recent advances in this field are usually not leveraged in deep-learning-based approaches, so their potential is currently underexploited. Besides, some of these advances are not mature enough to be fully deployed yet.
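For reference, the local Gaussian model [1] discussed above is commonly written as follows (a standard formulation; the notation here is ours):

```latex
% Each complex STFT coefficient x_{ft} is modeled as a zero-mean,
% circularly-symmetric complex Gaussian with variance v_{ft}:
x_{ft} \sim \mathcal{N}_c(0, v_{ft}),
\qquad
p(x_{ft}) = \frac{1}{\pi v_{ft}} \, \exp\!\left( -\frac{|x_{ft}|^2}{v_{ft}} \right).
```

The variances v_{ft} are what a DNN typically estimates from the magnitude spectrogram. The three limiting hypotheses map directly onto this formula: the density depends only on |x_{ft}| (circular symmetry, so the phase is uniform and uninformative), the coefficients are assumed independent across time-frequency bins, and the density is Gaussian.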
Therefore, the objective of this PhD is to design advanced statistical signal models for audio which overcome the limitations of the local Gaussian model, while combining them with DNN-based spectrogram modeling. The developed approaches will be applied to audio source separation and speech enhancement.
Main activities
The main objectives of the PhD student will be:
1. To develop structured statistical models for audio signals which alleviate the limitations of the local Gaussian model. In particular, the PhD student will focus on designing models by leveraging properties that originate from signal analysis, such as temporal continuity [2] or the consistency of the representation [3], in order to favor the interpretability and meaningfulness of the models. For instance, alpha-stable distributions have been exploited in audio for their robustness [4]. Anisotropic models are an interesting research direction since they overcome the circular symmetry assumption while enabling an interpretable parametrization of the statistical moments [5]. Finally, a careful design of the covariance matrix allows for explicitly incorporating time and frequency dependencies [6].
2. To combine these statistical models with DNNs. This raises several technical difficulties regarding the design of, e.g., the neural architecture, the loss function, and the inference algorithm. The student will exploit and adapt the formalism developed in Bayesian deep learning, notably the variational autoencoding framework [7], as well as the inference procedures developed in DNN-free non-Gaussian models [8].
3. To validate these methods experimentally on realistic sound datasets. To that end, the PhD student will use public datasets such as LibriMix (speech) and MUSDB (music), which are reference datasets for source separation and speech enhancement.
The PhD student will disseminate his/her research results in international peer-reviewed journals and conferences.
In order to promote reproducible research, these publications will be self-archived at each step of the publication lifecycle and made accessible through open-access repositories (e.g., arXiv, HAL). The code will be integrated into Asteroid, the reference software for source separation and speech enhancement developed by Multispeech.
Bibliography
[1] E. Vincent, M. Jafari, S. Abdallah, M. Plumbley, M. Davies, Probabilistic modeling paradigms for audio source separation, Machine Audition: Principles, Algorithms and Systems, pp. 162-185, 2010.
[2] T. Virtanen, Monaural sound source separation by nonnegative matrix factorization with temporal continuity and sparseness criteria, IEEE/ACM Transactions on Audio, Speech and Language Processing, vol. 15, no. 3, pp. 1066-1074, 2007.
[3] J. Le Roux, N. Ono, S. Sagayama, Explicit consistency constraints for STFT spectrograms and their application to phase reconstruction, Proc. SAPA, 2008.
[4] S. Leglaive, U. Şimşekli, A. Liutkus, R. Badeau, G. Richard, Alpha-stable multichannel audio source separation, Proc. IEEE ICASSP, 2017.
[5] P. Magron, R. Badeau, B. David, Phase-dependent anisotropic Gaussian model for audio source separation, Proc. IEEE ICASSP, 2017.
[6] M. Pariente, Implicit and explicit phase modeling in deep learning-based source separation, PhD thesis, Université de Lorraine, 2021.
[7] L. Girin, S. Leglaive, X. Bie, J. Diard, T. Hueber, X. Alameda-Pineda, Dynamical variational autoencoders: A comprehensive review, Foundations and Trends in Machine Learning, vol. 15, no. 1-2, 2021.
[8] P. Magron, T. Virtanen, Complex ISNMF: a phase-aware model for monaural audio source separation, IEEE/ACM Transactions on Audio, Speech and Language Processing, vol. 27, no. 1, pp. 20-31, 2019.
General Information
Theme/Domain: Language, Speech and Audio. Town/city: Villers-lès-Nancy. Inria Center: CRI Nancy - Grand Est. Starting date: 2022-10-01. Duration of contract: 3 years. Deadline to apply: 2022-05-02.
Contacts: Inria team: MULTISPEECH. PhD supervisor: Paul Magron / paul.magron@inria.fr
About Inria
Inria is the French national research institute dedicated to digital science and technology. It employs 2,600 people. Its 200 agile project teams, generally run jointly with academic partners, include more than 3,500 scientists and engineers working to meet the challenges of digital technology, often at the interface with other disciplines. The Institute also employs numerous talents in over forty different professions. 900 research support staff contribute to the preparation and development of scientific and entrepreneurial projects that have a worldwide impact.
The keys to success
Upload your complete application data. Applications will be assessed on a rolling basis, so it is advised to apply as soon as possible.
Instruction to apply
Defence and security: this position is likely to be situated in a restricted area (ZRR), as defined in Decree No. 2011-1425 relating to the protection of national scientific and technical potential (PPST). Authorisation to enter such an area is granted by the director of the unit, following a favourable Ministerial decision, as defined in the decree of 3 July 2012 relating to the PPST. An unfavourable Ministerial decision in respect of a position situated in a ZRR would result in the cancellation of the appointment.
Recruitment policy: as part of its diversity policy, all Inria positions are accessible to people with disabilities.
Warning: you must enter your e-mail address in order to save your application to Inria. Applications must be submitted online on the Inria website; the processing of applications sent through other channels is not guaranteed.
Skills
Master's or engineering degree in computer science, data science, signal processing, or machine learning. Professional capacity in English (spoken, read, and written). Some programming experience in Python and in a deep learning framework (e.g., PyTorch). Previous experience in and/or interest for speech and audio processing is a plus.
Benefits package
Subsidised meals. Partial reimbursement of public transport costs. Leave: 7 weeks of annual leave + 10 extra days off (full-time basis) + possibility of exceptional leave (e.g. sick children, moving home). Possibility of teleworking (after 6 months of employment) and flexible organisation of working hours. Professional equipment available (videoconferencing, loan of computer equipment, etc.). Social, cultural and sports events and activities (Inria social works association). Access to vocational training. Social security coverage.
Remuneration
Salary: €1982 gross/month for the 1st and 2nd year, €2085 gross/month for the 3rd year. Monthly salary after taxes: around €1594 for the 1st and 2nd year, €1677 for the 3rd year.
6-5 | (2022-04-05) Junior professor position at Université du Mans, France Le Mans Université is opening a Junior Professor Chair (Chaire de Professeur Junior) in multimodal language processing.
Applications are open on the Galaxie portal and must be submitted before 2 May 2022.
Description of the research project: The main objective is to develop a multimodal and multilingual language-processing AI that relies on a common representation space for the speech and text modalities across languages. The candidate will develop his/her research activities so as to strengthen the cross-cutting nature of these representations through a relevant combination of modalities (e.g. video and text, or text and speech), of tasks (e.g. speaker characterization and speech synthesis, speech understanding and machine translation, speech recognition and automatic summarization), and of languages. This research will aim to build automatic systems that put the human at the centre of processing, using active learning approaches and exploring explainability and interpretability, so that a naive user can teach the automatic system or extract understandable elements from it. The project will also strengthen existing collaborations (Facebook, Orange, Airbus) or create new partnerships (Oracle, HuggingFace, ...). The research project should fit within the goal of the LST team, which aims at developing a multimodal and multilingual representation space for the speech and text modalities. The Junior Professor is expected to define his/her own research directions across the topics already present in the LST team, to develop hybrid approaches by mixing, for instance, speaker characterization and speech synthesis, or speech translation and speech understanding, and to contribute to the team's strategy of involving the human in the loop for deep learning systems and of working towards a better explainability/interpretability of speech processing algorithms.
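For illustration only, a common representation space of the kind mentioned above can be pictured as two projection heads mapping modality-specific encoder outputs into one space where they can be compared. The dimensions, weights and similarity measure in this sketch are assumptions, not the team's actual design:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical projection heads mapping a text encoder output (768-d)
# and a speech encoder output (512-d) into one shared 256-d space.
W_text = rng.normal(size=(256, 768)) * 0.02
W_speech = rng.normal(size=(256, 512)) * 0.02

def to_shared(W, x):
    """Project into the shared space and L2-normalise."""
    z = W @ x
    return z / np.linalg.norm(z)

def cross_modal_similarity(text_vec, speech_vec):
    """Cosine similarity between a text and a speech embedding."""
    return float(to_shared(W_text, text_vec) @ to_shared(W_speech, speech_vec))

sim = cross_modal_similarity(rng.normal(size=768), rng.normal(size=512))
```

With such a space, tasks such as retrieval or transfer between modalities reduce to nearest-neighbour search over the shared embeddings.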
Description of the teaching project: The candidate will join the teaching team of the Master's programme in Artificial Intelligence of the Computer Science department of the Faculty of Science and Technology of Le Mans Université. His/her involvement will strengthen the teaching of deep learning (self-supervised learning, GANs, Transformers, methodologies and protocols for AI, ...) as well as of the infrastructures dedicated to machine learning and data science: distributed computing (SLURM, MPI), use of a computing cluster (ssh, tmux, jupyter-lab, conda), and cloud computing. Building on its recognised expertise in natural language and speech processing, the teaching team wishes to broaden its offering by adapting course content to other types of data (images, time series generated by various kinds of sensors, graphs, ...) in order to meet the specific machine-learning needs of local and regional industry. This effort is part of the teaching team's plan to develop apprenticeships and continuing education in partnership with industry, but also for an academic audience of researchers and faculty members outside computer science who wish to develop machine-learning skills.
In the medium term, the candidate will contribute to the development of continuing education in artificial intelligence adapted to the needs of local companies and industry, but also of researchers who are not computer science specialists. Application requirements: applicants must hold a PhD. For candidates holding, or having ceased to hold for less than eighteen months, a teacher-researcher position of a level equivalent to the position to be filled in a higher-education institution of a State other than France: titles, works and any element making it possible to assess the level of that position may justify a waiver of the doctorate requirement. Contacts: Antoine LAURENT Antoine.laurent@univ-lemans.fr ; Anthony LARCHER Anthony.larcher@univ-lemans.fr
6-6 | (2022-04-06) Ph.D. thesis position and post-doc position at Loria-INRIA (Multispeech team), Nancy, France
Multimodal automatic hate speech detection
https://jobs.inria.fr/public/classic/fr/offres/2022-04660
https://team.inria.fr/multispeech/fr/category/job-offers/
Hate speech expresses antisocial behavior. In many countries, online hate speech is punishable by law. Manual analysis and moderation of such content at scale are impossible. An effective solution to this problem would be the automatic detection of hateful comments. Until now, hate speech detection has relied only on text documents. We would like to advance the state of knowledge about hate speech detection by exploring a new type of document: audio. We would like to develop a new methodology to automatically detect hate speech, based on machine learning and deep neural networks, using both text and audio.
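One simple way such a text-plus-audio system can combine modalities is late fusion: each modality is encoded separately, projected to a logit, and the logits are merged before a sigmoid. A minimal numpy sketch follows; the embedding sizes, weights and fusion rule are all illustrative assumptions, not the team's method:

```python
import numpy as np

rng = np.random.default_rng(0)

def late_fusion_score(text_emb, audio_emb, w_text, w_audio, bias=0.0):
    """Average the per-modality logits, then apply a sigmoid to get a
    hate-speech probability in (0, 1)."""
    logit = 0.5 * (text_emb @ w_text + audio_emb @ w_audio) + bias
    return 1.0 / (1.0 + np.exp(-logit))

# Toy vectors standing in for BERT-style text embeddings and
# wav2vec-style audio embeddings (dimensions are arbitrary).
text_emb = rng.normal(size=768)
audio_emb = rng.normal(size=512)
w_text = rng.normal(size=768) * 0.01
w_audio = rng.normal(size=512) * 0.01

score = late_fusion_score(text_emb, audio_emb, w_text, w_audio)
```

In practice the projections would be learned jointly (e.g. in PyTorch), and earlier fusion of hidden representations is also possible; the sketch only shows the interface between the two modalities.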
Required skills: The candidate should have theoretical knowledge of, and some practical experience with, deep learning, including good Python skills and an understanding of deep learning libraries such as PyTorch. Knowledge of NLP or signal processing will be helpful.
Supervisors:
Irina Illina, Associate Professor, HDR, Université de Lorraine
Dominique Fohr, Senior Researcher, CNRS
https://members.loria.fr/IIllina/ illina@loria.fr
https://members.loria.fr/DFohr/ dominique.fohr@loria.fr
MULTISPEECH is a joint research team of the Université de Lorraine, Inria, and CNRS. It is part of department D4 “Natural language and knowledge processing” of LORIA. Its research focuses on speech processing, with particular emphasis on multisource (source separation, robust speech recognition), multilingual (computer-assisted language learning), and multimodal aspects.
6-7 | (2022-04-07) Fulani-French translator positions (M/F), ELDA, Paris, France
6-8 | (2022-04-07) Fulani-French transcriber positions (M/F), ELDA, Paris, France Context
6-9 | (2022-04-07) Doctoral contract at the Collegium Musicæ of Sorbonne Université, Paris, France The Collegium Musicæ of Sorbonne Université offers a doctoral contract on vocal style: Analysis of vocal style by performative synthesis. Project leaders: Christophe d'Alessandro and Céline Chabot-Canet. The aim of this thesis is to study vocal style through the analysis-by-synthesis paradigm. Details can be found here (go to the Collegium Musicæ tab of the page): https://www.sorbonne-universite.fr/projets-proposes-en-2022-programme-instituts-et-initiatives Contact: Christophe d'Alessandro: christophe.dalessandro@sorbonne-universite.fr
6-10 | (2022-04-08) Postdocs at IMT Atlantique, Brest, France The RAMBO team at IMT Atlantique, in collaboration with Smart Macadam, is looking for candidates for two 18-24-month postdoc positions based in Nantes, on the following topics: automatique-de-la-parole-intelligence-artificielle-cdd-18-mois intelligence-artificielle-cdd-18-mois Mihai ANDRIES
Faculty member (enseignant-chercheur), RAMBO team, IMT Atlantique, Brest, France
6-11 | (2022-04-10) PhD thesis positions, LaBRI, Bordeaux, France
Vocal biomarkers collected through conversational agents for diagnosis assistance and follow-up of sleep and mental disorders
https://emploi.cnrs.fr/Offres/Doctorant/UMR5800-MAGHIN-017/Default.aspx
The SANPSY and LaBRI teams have demonstrated their ability to identify new vocal biomarkers to measure excessive daytime sleepiness both subjectively and objectively in patients suffering from sleep disorders [1]-[5]. SANPSY demonstrated the validity of autonomous digital solutions (i.e. smartphone-based virtual agents) to diagnose sleep/mental disorders in the general population [6]-[10]. We now plan to develop new virtual agents collecting biomarkers (i.e. from speech) in our cohorts of healthy subjects and patients for diagnosis, treatment and follow-up (ADDICTAQUI, KANOPEE and AUTONOMHEALTH (PEPR) projects).
The PhD thesis project “Vocal biomarkers collected through conversational agents for diagnosis assistance and follow-up of sleep and mental disorders” relies on four stages:
1) Developing new virtual agents to collect vocal markers. The objective is to design new scenarios targeting behavioral interventions to improve fatigue, mood and excessive daytime sleepiness. Moreover, the scenarios will be designed so that the agent interacts with the subject in order to engage a discussion (spontaneous speech). This will lead to more ecological conditions, which should increase acceptability.
2) Switching from high-quality controlled recordings made at the hospital to in-the-field unsupervised recordings using smartphones. Our current vocal biomarkers are defined using a reading task and high-quality microphones. The new interaction scenarios from stage 1 will lead us to record spontaneous speech with smartphone microphones. This stage will tackle the differences in recording conditions and their impact on our feature-extraction pipeline.
3) Verifying the relevance of the existing vocal markers when used with the new data, and proposing new features that could be used as high-level biomarkers, such as lexical, syntactic and semantic cues. Our features will have to be adapted to the versatile nature of spontaneous discourse, which is a completely different speaking style from read speech. Spontaneous speech will, however, provide additional cues that could be used as high-level biomarkers, such as lexical, syntactic and semantic markers.
4) Studying the sensitivity and specificity of the selected biomarkers for the diagnosis and follow-up of symptoms and disorders with respect to other medical measures. This final part of the PhD project will be addressed jointly by LaBRI and SANPSY and includes the clinical validation of the proposed approaches.
References:
[1] V. P. Martin, G. Chapouthier, M. Rieant, J.-L. Rouas, and P. Philip, 'Using reading mistakes as features for sleepiness detection in speech', in 10th International Conference on Speech Prosody, Tokyo, Japan, May 2020, pp. 985-989. [Online]. Available: https://hal.archives-ouvertes.fr/hal-02495149
[2] V. P. Martin, J.-L. Rouas, J.-A. Micoulaud-Franchi, and P. Philip, 'The objective and subjective sleepiness voice corpora', in 12th Language Resources and Evaluation Conference, Marseille, France, May 2020, pp. 6525-6533. [Online]. Available: https://hal.archives-ouvertes.fr/hal-02489433
[3] V. P. Martin, J.-L. Rouas, and P. Philip, 'Détection de la somnolence dans la voix : nouveaux marqueurs et nouvelles stratégies', Traitement Automatique des Langues, vol. 61, no. 2, p. 24, 2020.
[4] V. P. Martin, J.-L. Rouas, F. Boyer, and P. Philip, 'Automatic Speech Recognition Systems Errors for Objective Sleepiness Detection Through Voice', in Interspeech 2021, Aug. 2021, pp. 2476-2480. doi: 10.21437/Interspeech.2021-291.
[5] V. P. Martin, J.-L. Rouas, J.-A. Micoulaud-Franchi, P. Philip, and J. Krajewski, 'How to Design a Relevant Corpus for Sleepiness Detection Through Voice?', Frontiers in Digital Health, vol. 3, p. 124, 2021. doi: 10.3389/fdgth.2021.686068.
[6] L. Dupuy, J.-A. Micoulaud-Franchi, and P. Philip, 'Acceptance of virtual agents in a homecare context: Evaluation of excessive daytime sleepiness in apneic patients during interventions by continuous positive airway pressure (CPAP) providers', Journal of Sleep Research, p. e13094, 2020. doi: 10.1111/jsr.13094.
[7] L. Dupuy et al., 'Smartphone-based virtual agents and insomnia management: A proof-of-concept study for new methods of autonomous screening and management of insomnia symptoms in the general population', Journal of Sleep Research, p. e13489, Sep. 2021. doi: 10.1111/jsr.13489.
[8] P. Philip et al., 'Trust and acceptance of a virtual psychiatric interview between embodied conversational agents and outpatients', npj Digital Medicine, vol. 3, no. 1, Jan. 2020. doi: 10.1038/s41746-019-0213-y.
[9] P. Philip et al., 'Virtual human as a new diagnostic tool, a proof of concept study in the field of major depressive disorders', Scientific Reports, vol. 7, 2017.
[10] P. Philip, S. Bioulac, A. Sauteraud, C. Chaufton, and J. Olive, 'Could a virtual human be used to explore excessive daytime sleepiness in patients?', Presence: Teleoperators and Virtual Environments, vol. 23, no. 4, pp. 369-376, 2014.
Work environment:
The PhD student will be hosted at LaBRI in the Image and Sound (I&S) department, with frequent visits to SANPSY, where he/she will interact with the clinicians and the designers of the virtual agents. The I&S department conducts research in the acquisition, processing, analysis, modeling, synthesis and interaction of audiovisual media. It works on the entire acquisition chain, from data collection to information extraction or restitution of digital data, with the user at the center of the chain. The spectrum of manipulated data is very wide: 2D and 3D images, video, speech, music, 3D data, EEG, physiological data, etc. The different steps of the processing chain integrate modeling phases for analysis or synthesis. The targeted application domains are health, medicine, education, gaming, etc.
The SANPSY unit has recognized expertise in sleep-restriction studies and in the evaluation of countermeasures to sleep deprivation. The team also specializes in sleep disorders, especially the diagnosis and treatment of obstructive sleep apnea. The SANPSY unit is located on the neuro-psychopharmacological research platform (PRNPP). This platform is recognized nationally and internationally for its expertise in clinical research, simulation and virtual reality; it was labeled IBISA in 2015. In 2011, SANPSY obtained an EquipEx project (PHENOVIRT) that aimed to improve phenotyping using simulation and virtual-reality technologies. As part of this project, SANPSY has initiated, in particular, the development of embodied conversational agents (virtual doctors and patients). Several scenarios for the diagnosis of drowsiness, depression and addiction to tobacco and alcohol have already been developed and tested with patients.
Jean-Luc ROUAS
CNRS Researcher
Bordeaux Computer Science Research Laboratory (LaBRI)
351 Cours de la libération - 33405 Talence Cedex - France
T. +33 (0) 5 40 00 35 28
www.labri.fr/~rouas
6-12 | (2022-04-15) PhD or postdoc position at LIG, Grenoble, France Application deadline: 30 April 2022. This entry repeats the Popcorn PhD/postdoc offer described in entry 6-3 above (same topic, title, supervisors, candidate profile, practical details and application contacts).
| |||
6-13 | (2022-04-18) PhD studentship at the University of Edinburgh, UK
Here is an offer for a PhD studentship in modelling the articulation of spoken utterances at the University of Edinburgh
The PhD work will be to implement computational models of human speech articulation planning. It will involve the development of software for testing theoretical assumptions, along with tests of software output. The work combines phonetic and phonology aspects, speech technology and motor control theory with programming and software development.
The deadline to apply is 15th May 2022.
Information about eligibility and the application process may be found at:
https://www.ed.ac.uk/ppls/linguistics-and-english-language/prospective/postgraduate/funding-research-students/erc-phd-studentship-articulation-spoken-utterances
Contacts:
- Alice Turk: a.turk@ed.ac.uk
- Benjamin Elie: benjamin.elie@ed.ac.uk
| |||
6-14 | (2022-04-15) PhD at Orange, France Orange is recruiting a PhD student on the topic 'Deep learning for the joint processing of natural language and knowledge'. The goal of the thesis is to propose solutions for sharing the processing of natural language understanding and generation tasks. This will involve studying the progressive fusion of various tasks combining natural language and formal language(s) for knowledge representation or manipulation. The application context will initially be isolated utterances, then human-machine dialogues in which the discussion history must be taken into account.
Details and application via Orange Jobs: https://orange.jobs/jobs/offer.do?joid=111967&lang=FR
| |||
6-15 | (2022-04-17) PhD at University of Zurich, Switzerland Human knowledge is inherently multi-modal, and it is more than just a collection of isolated pieces of information, irrespective of the form of expression. Instead, it emerges from the interconnectedness of all of these information fragments. Knowledge graphs are a powerful way of capturing such interconnected knowledge. Such graphs are effective for storing and relating information that can easily be expressed in textual form, by assigning a simple text label to every node in a graph or relating them to literals represented using strings or blobs. However, they so far fail to capture the richness of information that is not easily expressed as a short piece of text.
| |||
6-16 | (2022-04-19) Postdoctoral Position at Columbia University in The City of New York, NY, USA Postdoctoral Position - Machine Learning and Digital Twins, Columbia University in The City of New York.
| |||
6-17 | (2022-04-20) PhD grant at the University of Edinburgh, UK The University of Edinburgh is looking for a PhD candidate to work on modelling the articulation of spoken utterances as part of Alice Turk's Advanced ERC grant. The 4-year PhD studentship at the University of Edinburgh will be fully funded by the grant. We are looking for candidates with good programming skills and an interest in speech analysis and modelling.
| |||
6-18 | (2022-04-23) Doctoral position: Acoustic to Articulatory Inversion by using dynamic MRI images, INRIA, Nancy, France Doctoral position: Acoustic-to-articulatory inversion using dynamic MRI images. Loria, the “Lorraine Research Laboratory in Computer Science and its Applications”, is a research unit common to CNRS, the Université de Lorraine and INRIA. Loria gathers 450 scientists and its missions mainly deal with fundamental and applied research in computer science, in particular in the MultiSpeech team, which focuses on automatic speech processing, audiovisual speech and speech production. IADI is a research unit common to Inserm and the Université de Lorraine, whose specialty is developing techniques and methods to improve the imaging of moving organs via the acquisition of MR images.
This PhD project, funded by LUE (Lorraine Université d’Excellence), associates the Multispeech team and the IADI laboratory.
Start date is (expected to be) 1st October 2022 or as soon as possible thereafter.
Supervisors Yves Laprie, email yves.laprie@loria.fr Pierre-André Vuissoz, email pa.vuissoz@chru-nancy.fr
The project
Articulatory synthesis mimics the speech production process by first generating the shape of the vocal tract from the sequence of phonemes to be pronounced, then the acoustic signal by solving the aeroacoustic equations. Compared to other approaches to speech synthesis, which offer a very high level of quality, its main interest is to control the whole production process, beyond the acoustic signal alone. The objective of this PhD is to achieve the inverse transformation, called acoustic-to-articulatory inversion, in order to recover the geometric shape of the vocal tract from the acoustic signal. A simple voice recording will then allow the dynamics of the different articulators to be followed during the production of a sentence. Beyond its interest as a scientific challenge, acoustic-to-articulatory inversion has many potential applications. On its own, it can be used as a diagnostic tool to evaluate articulatory gestures in an educational or medical context.
Description of work
The objective is to invert the acoustic signal so as to recover the temporal evolution of the medio-sagittal slice. Dynamic MRI provides two-dimensional images of very good quality in the medio-sagittal plane at 50 Hz, and the speech signal, acquired with an optical microphone, can be processed very efficiently with the algorithms developed in the MultiSpeech team (examples available on https://artspeech.loria.fr/resources/). We plan to use corpora already acquired or in the process of being acquired. These corpora represent a very large volume of data (several hundred thousand images), and an approach for tracking the contours of articulators in MRI images, which gives very good results, was developed to process them. The automatically tracked contours can therefore be used to train the inversion. The goal is to perform the inversion using an LSTM-based approach on data from the small number of speakers for which sufficient data exists. This approach will have to be adapted to the nature of the data and be able to identify the contribution of each articulator. In itself, a successful inversion recovering the shape of the vocal tract in the medio-sagittal plane will be a remarkable achievement, since current results only cover a very small part of the vocal tract (a few points on its front part). However, it is important to be able to transfer this result to any subject, which raises the question of speaker adaptation, the second objective of the PhD.
Application: Your application, including all attachments, must be in English and submitted electronically via Inria’s recruitment system (https://jobs.inria.fr/public/classic/en/offres/2022-04654).
| |||
6-19 | (2022-04-26) Position of University Assistants (prae doc), University of Vienna, Austria
| |||
6-20 | (2022-04-30) 3 PhD fellowships at the University of Copenhagen, Denmark 3 PhD fellowships in applied Machine Learning, Information Retrieval and Natural Language Processing The Information Retrieval Lab of the Department of Computer Science at the University of Copenhagen (DIKU) is offering 3 fully funded PhD Fellowships in applied Machine Learning, Information Retrieval, and Natural Language Processing, commencing 1 September 2022 or as soon as possible thereafter.
The fellows will conduct research, having as starting point the following broad research areas:
The deadline for applications is 19 May 2022, 23:59 GMT +2.
| |||
6-21 | (2022-05-06) PhD students, Postdoctoral Researchers and R&D Engineers at Telecom Paris, Palaiseau, France We have multiple openings for PhD students, Postdoctoral Researchers and R&D Engineers at Télécom Paris, Institut Polytechnique de Paris, in the “Signal, Statistics and Learning” (S2A) team.
All positions are located at Telecom Paris, 19 place Marguerite Perey, 91120 Palaiseau, France.
Start of the positions: October/November 2022 (for PhDs/Engineer), January 2023 for PostDoc
Subject: The positions will be a part of the ERC Advanced (2022) – HI-Audio (Hybrid and Interpretable Deep neural audio machines) project, which aims at building hybrid deep approaches combining parameter-efficient and interpretable models with modern resource-efficient deep neural architectures with applications in speech/audio scene analysis, music information retrieval and sound transformation and synthesis.
The potential topics include (but are not limited to): - Deep generative models, adversarial learning - Attention-based models and curriculum learning - Statistical/deterministic audio models (signal models, sound propagation models, …) - Music Information Retrieval software platform development (R&D Engineer position)
Candidate Profile: - For the PhD positions: a master's degree in applied mathematics, data science/computer science or speech/audio/music processing is required. - For the Postdoc position: a PhD degree and publications in the theory or applications of machine learning, generative modelling, discrete optimal transport or signal processing, ideally with applications to speech/audio/music signals. - Master internship positions will also be opened in early 2023.
Télécom Paris, and the S2A team:
The S2A team gathers 18 permanent faculty covering a wide variety of research topics including statistics, probabilistic modeling, machine learning, data science, and audio and social signal processing. Overall, Télécom Paris' research counts 19 research teams and covers various domains in computer science and networks, applied mathematics, electronics, image, data, signals, and economic and social sciences. Télécom Paris (https://www.telecom-paris.fr/en/home) is a member of IMT (Institut Mines-Télécom) and a founding member of the Institut Polytechnique de Paris (IP Paris, https://www.ip-paris.fr/en), a world-class scientific and technological institution built as a partnership between five prestigious French engineering schools, with HEC as a key partner.
Application: - Please send a resume, a motivation letter (and full transcripts for the PhD/Engineer positions) to Gaël Richard, firstname.lastname@telecom-paris.fr. At least one reference letter will be requested in a second step.
| |||
6-22 | (2022-05-15) PhD position, Loria, Nancy Multichannel Speech Enhancement for Patients with Auditory Neuropathy Spectrum Disorders
Location: LORIA, MULTISPEECH team, Nancy
Supervisors: Romain Serizel (Maître de Conférences, Université de Lorraine), Paul Magron (Chargé de Recherche, INRIA).
This PhD fits within the scope of the ANR project 'REFINED', involving the Multispeech research team (LORIA) in Nancy, the Laboratory of Embedded Artificial Intelligence at CEA (List) in Paris, and the Hearing Institute in Paris.
Context: Worldwide, around 466 million people currently suffer from a hearing loss. To remedy the loss of hearing sensitivity, portable hearing aids have been designed for almost a century. Despite the recent advances in audio signal processing integrated in current hearing aid models, people suffering from Auditory Neuropathy Spectrum Disorders derive little or no benefit from current hearing aids [1]. Contrary to regular hearing losses, Auditory Neuropathy Spectrum Disorders impair the processing of temporal information without necessarily affecting auditory sensitivity. This can have a particularly dramatic impact in scenarios where the speech of interest is present together with background noise or with one or several concurrent speakers. Current speech enhancement systems are usually trained on generic corpora and designed to optimize some cost between the (known) target speech and the output of the system, estimated from the mixture, such as the mean squared error [2] or the speech-to-distortion ratio [3]. The trained system is then evaluated using a criterion designed to reflect speech perception by people without hearing losses [4].
Yet, the main need of subjects with Auditory Neuropathy Spectrum Disorders, shared with ageing subjects who experience central auditory processing difficulties, is not to restore audibility but to improve speech intelligibility, particularly in noisy environments, by compensating for the deterioration of acoustic cues that rely on temporal precision [5].
Objectives: Based on clinical studies performed at the Hearing Institute within the project, the main goal of this PhD is to define new cost functions, to be optimized by speech processing algorithms, that are more relevant for subjects with Auditory Neuropathy Spectrum Disorders than the generic losses used in current algorithms. We will pay particular attention to the algorithms' ability to help volunteers in scenarios with multiple potential target sources that are spatially distributed in a room. We will derive speech enhancement filters aiming to extract not only speech, but also additional cues such as speech contour or timbre. In a later step, the model will be adapted under light human supervision in order to reduce the burden of the usual iterative 'handcrafted' adjustments and repeated visits to a specialist clinician to fit the hearing aid to individual needs.
Profile: • Strong background in audio signal processing or machine learning • Excellent programming skills • Excellent English writing and speaking skills
Application: Upload your application on ADUM (https://www.adum.fr/as/ed/voirproposition.pl?site=adumR&matricule_prop=43498#version) with the following: • CV • Cover letter • Recommendation letter • M1-M2 transcripts • Master's thesis, if available
References
[1] Berlin, C. I. et al. Multi-site diagnosis and management of 260 patients with auditory neuropathy/dys-synchrony (auditory neuropathy spectrum disorder). Int J Audiol 49, 30-43 (2010).
[2] Doclo, S., Spriet, A., Wouters, J. & Moonen, M. Frequency-domain criterion for the speech distortion weighted multichannel Wiener filter for robust noise reduction. Speech Communication 49, 636-656 (2007).
[3] Luo, Y., et al. FaSNet: Low-latency adaptive beamforming for multi-microphone audio processing. 2019 IEEE Automatic Speech Recognition and Understanding Workshop (2019).
[4] Vincent, E., Gribonval, R. & Févotte, C. Performance measurement in blind audio source separation. IEEE Transactions on Audio, Speech, and Language Processing 14.4, 1462-1469 (2006).
[5] https://claritychallenge.github.io/clarity_CC_doc/docs/cpc1/cpc1_intro
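For concreteness, the two generic training criteria mentioned in the posting, the mean squared error [2] and a scale-invariant form of the signal/speech-to-distortion ratio in the spirit of [3, 4], can be sketched as follows. This is a minimal NumPy illustration on synthetic signals, not code from the project.

```python
import numpy as np

def mse_loss(est, ref):
    """Mean squared error between estimated and clean reference speech."""
    return np.mean((est - ref) ** 2)

def si_sdr(est, ref, eps=1e-8):
    """Scale-invariant signal-to-distortion ratio in dB (higher is better)."""
    ref = ref - ref.mean()
    est = est - est.mean()
    # Project the estimate onto the reference to isolate the target part
    target = (np.dot(est, ref) / (np.dot(ref, ref) + eps)) * ref
    noise = est - target
    return 10 * np.log10((np.dot(target, target) + eps)
                         / (np.dot(noise, noise) + eps))

rng = np.random.default_rng(0)
clean = rng.standard_normal(16000)            # 1 s of "speech" at 16 kHz
noisy = clean + 0.1 * rng.standard_normal(16000)
print(mse_loss(clean, clean), mse_loss(noisy, clean) > 0)  # 0.0 True
print(si_sdr(clean, clean) > si_sdr(noisy, clean))         # True
```

The project's premise is precisely that such generic criteria say nothing about the temporal cues that matter for Auditory Neuropathy Spectrum Disorders, hence the search for more perceptually relevant cost functions.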
| |||
6-23 | (2022-05-20) R&D PhD in Computer Science, NLP - M/F (fixed-term contract), Lunii, Paris LUNII PARIS · HYBRID REMOTE WORK
R&D PhD in Computer Science, NLP - M/F (fixed-term contract)
| |||
6-26 | (2022-06-01) Research positions @ L3i Lab, La Rochelle, France Cross-lingual and cross-domain terminology alignment
Interested in joining a young NLP group of 10+ people located in a historical town by the Atlantic Ocean, a 10-minute walk from the lab to the beach? We have open positions in the context of the recent Horizon 2020 projects Embeddia and NewsEye, as well as related local projects. In the last 2 years, we have, among others, published long papers in CORE A* and A conferences such as ACL, JCDL, CoNLL, ICDAR, COLING, ICADL, etc. Location: L3i laboratory, La Rochelle, France. Duration: 2 years (1+1), with possible further extension. Net salary range: €2100-€2300 monthly. Context: H2020 Embeddia project and regional project Termitrad. Start: September 2022 (tentatively).
To address this very project, the project team will consist of senior staff, 2 post-doctoral researchers and 2-3 PhD students, one of which is jointly supervised in the Józef Stefan Institute in Ljubljana, coordinator of H2020 Embeddia. In this context, you will first be in charge of building a state of the art of existing related approaches, tools and resources, then to conduct further research and experiments, as well as participate in the supervision of PhD students.
- PhD in statistical NLP, IR, or ML, ideally with further postdoctoral experience - proven record of high-level publications in one or more of those fields - fluency in written and spoken English (French language skills are welcome but not required)
Applications including a CV and a one-page research statement discussing how the candidate's background fits requirements and topic are to be sent to by email to antoine.doucet@univ-lr.fr, strictly with the subject 'Embeddia/Termitrad postdoc application'. Application deadline: 14 June 2022.
| |||
6-27 | (2022-06-01) Post-doc @L3i, La Rochelle, France Post-doctoral research position - L3i - La Rochelle, France. Title: Emotion detection by semantic analysis of the text in comics speech balloons
The L3i laboratory has one open post-doc position in computer science, in the specific field of natural language processing in the context of digitised documents.
Duration: 12 months (an extension of 12 months will be possible) Position available from: As soon as possible Salary: approximately 2150 € / month (net) Place: L3i lab, University of La Rochelle, France Specialty: Computer Science/ Document Analysis/ Natural Language Processing Contact: Jean-Christophe BURIE (jcburie [at] univ-lr.fr) / Antoine Doucet (antoine.doucet [at] univ-lr.fr)
Position Description The L3i is a research lab of the University of La Rochelle. La Rochelle is a city in the south-west of France on the Atlantic coast and is one of the most attractive and dynamic cities in France. The L3i has worked for several years on document analysis and has developed a well-known expertise in “bande dessinée”, manga and comics analysis, indexing and understanding. The work done by the post-doc will take place in the context of SAiL (Sequential Art Image Laboratory), a joint laboratory involving L3i and a private company. The objective is to create innovative tools to index and interact with digitised comics. The work will be done in a team of 10 researchers and engineers. The team has developed different methods to extract and recognise the text of the speech balloons. The specific task of the recruited researcher will be to use Natural Language Processing strategies to analyse this text in order to identify emotions expressed by a character (reacting to the utterance of another speaking character) or caused by it (talking to another character). The datasets will be collections of comics in French and English.
Qualifications Candidates must hold a completed PhD and have research experience in natural language processing. Some knowledge and experience in deep learning is also recommended.
General Qualifications • Good programming skills mastering at least one programming language like Python, Java, C/C++ • Good teamwork skills • Good writing skills and proficiency in written and spoken English or French
Applications Candidates should send a CV and a motivation letter to jcburie [at] univ-lr.fr and antoine.doucet [at] univ-lr.fr. Applications will be considered from 9 June onwards, and until a candidate is hired.
| |||
6-28 | (2022-06-07) Postdoctoral fellowship, Northwestern University, USA Postdoctoral fellowship in corpus phonetics / data science for speech, Northwestern University
Northwestern University requires all staff and faculty to be vaccinated against COVID-19, subject to limited exceptions. For more information, please visit our COVID-19 and Campus Updates website.
The Northwestern University campus sits on the traditional homelands of the people of the Council of Three Fires, the Ojibwe, Potawatomi, and Odawa as well as the Menominee, Miami and Ho-Chunk nations. We acknowledge and honor the original people of the land upon which Northwestern University stands, and the Native people who remain on this land today.
Northwestern University is an Equal Opportunity, Affirmative Action Employer of all protected classes, including veterans and individuals with disabilities. Women, racial and ethnic minorities, individuals with disabilities, and veterans are encouraged to apply. Click for information on EEO is the Law.
| |||
6-29 | (2022-06-07) PhD grant @ INRIA, France Inria is opening a fully funded PhD position on multimodal speech
| |||
6-30 | (2022-06-15) Data science and corpus engineer (Ingénieur.e science des données et corpus), Laboratoire d’Informatique de Grenoble, France Data science and corpus engineer – Laboratoire d’Informatique de Grenoble
Analysis, design, formatting and dissemination of the LIG and LIDILEM speech and multimodal corpora
Position: engineer, fixed-term contract (CDD) Duration: 1 year (extension possible) Start: from 1 September 2022 Application deadline: 30 June 2022 Location: Laboratoire d’Informatique de Grenoble – Getalp team Field: Natural Language and Speech Processing
Context The position is supported by the Artificial Intelligence & Language Chair of the MIAI Grenoble Alpes Institute. MIAI is a centre of excellence in artificial intelligence which aims to conduct research at the highest level, to offer attractive courses for students and professionals of all levels, to support innovation in large companies, SMEs and startups, and to inform and interact with citizens on all aspects of AI. The recruited person will be hosted within the GETALP team of the Laboratoire d’Informatique de Grenoble (LIG), which offers a dynamic, international and stimulating environment for high-level multidisciplinary research. The GETALP team is housed in a modern building (IMAG) located on a 175-hectare landscaped campus that was ranked the eighth most beautiful campus in Europe by the Times Higher Education magazine in 2018.
Missions
You will work in close collaboration with PhD students, interns and researchers of the MIAI institute in the Grenoble area. You will also benefit from the expertise and research environment of two research units: the LIG (https://www.liglab.fr) and the LIDILEM (https://lidilem.univ-grenoble-alpes.fr/).
Skills
How to apply Applications are accepted until 30 June 2022. Please send your CV + a cover letter/message + transcripts from your previous studies + references for one or more potential recommendation letters to:
| |||
6-31 | (2022-06-10) Associate Teaching Professor @ University of Cambridge, Department of Engineering, Cambridge, UK Job opportunity: Associate Teaching Professor at the University of Cambridge, Department of Engineering
We're advertising for an Associate Teaching Professor who will be the Course Director of the Machine Learning and Machine Intelligence (MLMI) MPhil. The post will involve teaching, and the post-holder can be research-active, e.g. they can start and run their own research group. The main expertise could be in any field related to the MPhil, including: machine learning, machine intelligence, speech and language processing, signal processing, control, robotics, human-computer interaction, computer vision, and high-performance computing.
Advert: https://www.jobs.cam.ac.uk/job/35215/
| |||
6-33 | (2022-06-11) Junior professor chair ('Chaire de Professeur Junior') at CNRS, France The CNRS is opening an attractive position in Machine Learning for Natural Language Processing. This is a 'Chaire de Professeur Junior', a type of position created this year, which offers direct access to a permanent position as a CNRS Research Director (Directrice or Directeur de Recherche) after 3 to 6 years.
| |||
6-34 | (2022-06-12) Postdocs at the Speech Prosody Special Interest Group To the Speech Prosody Special Interest Group,
We are looking for two postdocs (2 years each in the first instance) to work on intonation and intonation pragmatics as part of SPRINT (sprintproject.io). We would appreciate it if you could share the information with your networks, especially because we have a short deadline for applications (19 June).
The job descriptions and other details can be found at the links below, but potential applicants can get in touch with me as well (using my Radboud address, amalia.arvaniti@ru.nl)
Please note that the previous links we sent don’t work; these are the new links, which should work.
regards to all,
Amalia Arvaniti
| |||
6-35 | (2022-06-14) Multiple faculty positions at National Yang Ming Chiao Tung University (NYCU), Taiwan National Yang Ming Chiao Tung University (NYCU), one of the top-ranked universities in Taiwan, invites applications for multiple faculty positions (assistant, associate, full, and chair professors) in the Institute of Artificial Intelligence Innovation (IAII). IAII is part of the newly established Industry Academia Innovation School (IAIS) at NYCU, with a major focus on innovations in artificial intelligence (https://iais.nycu.edu.tw/). IAII/NYCU is located in Hsinchu Science Park, Taiwan’s “Silicon Valley”, where over two-thirds of the CEOs and managers are NYCU graduates.
With the determination to facilitate industry–government–academia–research collaboration to drive the next-generation industry development, we are looking for strong candidates in the broader area of artificial intelligence, data science, security, information engineering, broadband communication, and Internet of Things. Applicants are expected to conduct outstanding research and be committed to teaching, in collaboration with world-class ICT industry partners.
Applicants should submit the following items: ● Cover letter ● Curriculum Vitae ● Research statement ● Teaching statement ● Publication list ● Three or more reference letters ● Any other PDF-formatted supporting materials (optional). Please address all inquiries and nominations to Prof. Wen-Huang Cheng, Director of the IAII via email (whcheng@nycu.edu.tw). -- Wen-Huang Cheng (鄭文皇)
Distinguished Professor,
Department of Electronics Engineering | Institute of Electronics
College of Electrical and Computer Engineering,
National Chiao Tung University (NCTU), Taiwan
Director,
NCTU Artificial Intelligence Graduate Program
Email: whcheng@nctu.edu.tw
Phone: +886-(0)3-5712121 ext 54289
| |||
6-36 | (2022-06-22) Doctoral Researcher, Institute of Linguistics, JWGoethe University, Frankfurt/Main, Germany The Johann Wolfgang Goethe University Frankfurt am Main is one of the largest universities in Germany, with around 48,000 students and about 5,000 employees. Founded in 1914 by Frankfurt citizens, and since 2008 once again proud of its foundation status, Goethe University possesses a high degree of autonomy, modernity and professional diversity. As a comprehensive university, Goethe University offers a total of 16 departments on five campuses and more than 100 degree programs, along with an outstanding research reputation.
The Institute of Linguistics at the Department of Modern Languages of Goethe University Frankfurt am Main offers a position in cotutelle with the Department of Translation and Language Sciences at Universitat Pompeu Fabra, Barcelona, in the project “Co-Speech Gestures and Prosody as Multimodal Markers of Information Structure” as a
Doctoral Researcher (E13 TV-GU, 65% part-time) starting October 1st 2022
funded for 3 years by the German Science Foundation (Deutsche Forschungsgemeinschaft, DFG). The salary grade includes social benefits and is based on the job characteristics of the collective agreement applicable to Goethe University (TV-G-U). We offer a 3-year doctoral co-tutelle position within a collaborative team. The main focus of the doctoral research will be to assess the multimodal markers of IS (information structure) in Catalan. The project will be run in close collaboration with the team assessing the multimodal markers of IS in German. The two teams involved are the Prosodic Studies Group at UPF (PI: Dr. Pilar Prieto) and the Phonology Lab at the Institute of Linguistics in Frankfurt (PI: Dr. Frank Kügler). The project is part of the DFG Priority Programme 2329 “Visual Communication” (https://vicom.info). The ideal candidate has a strong background in linguistics and linguistic experimentation, is highly motivated, and is interested in the assessment of prosodic and gestural markers in language. Knowledge of Catalan will be required to run the experiments. To qualify for a doctoral position in Linguistics, the candidate should hold a master’s degree in Linguistics, Philology, Psychology, or equivalent.
All required documents should be emailed as a pdf file (preferably as one document) to Frank Kügler (Kuegler@em.uni-frankfurt.de) and Pilar Prieto (pilar.prieto@upf.edu) up to July 15th, 2022. For more information, please contact Frank Kügler and Pilar Prieto.
The Goethe University is committed to a policy of providing equal employment opportunities for both men and women alike, and therefore encourages particularly women to apply for the position/s offered. Individuals with severe disability will be prioritized in case of equal qualification.
Prof. Dr. Frank Kügler
| |||
6-37 | (2022-06-22) 2 postdocs @Radboud University, Nijmegen, The Netherlands We are looking for two postdocs (2 years each in the first instance) to work on intonation and intonation pragmatics as part of SPRINT (sprintproject.io). We would appreciate it if you could share the information with your networks, especially because we have a short deadline for applications (19 June).
The job descriptions and other details can be found at the links below, but potential applicants can get in touch with me as well (using my Radboud address, amalia.arvaniti@ru.nl)
Please note that the previous links we sent don’t work; these are the new links, which should work.
| |||
6-38 | (2022-06-23) Post-Doctoral/PhD position at Telecom-Paris Post-Doctoral/PhD position at Telecom-Paris on Deep learning approaches for social computing
*Place of work* Telecom Paris, Palaiseau (Paris outskirt)
*Starting date* From September 2022 (but can start later)
*Context* The PhD student/post-doctoral fellow will take part in the REVITALISE project, funded by ANR (viRtual bEhaVioral skIlls TrAining for pubLIc SpEaking). The research activity will bring together the research topics of Prof. Chloé Clavel [Clavel] of the S2A [SSA] team at Telecom-Paris (social computing [SocComp]), Dr. Mathieu Chollet [Chollet] from the University of Glasgow (multimodal systems for social skills training), and Dr. Beatrice Biancardi [Biancardi] (social behaviour modelling) from CESI Engineering School, Nanterre.
*Candidate profile* As a minimum requirement, the successful candidate should have: • A master's degree in one or more of the following areas: human-agent interaction, deep learning, computational linguistics, affective computing, reinforcement learning, natural language processing, speech processing • Excellent programming skills (preferably in Python) • Excellent command of English
*How to apply* The application should be formatted as **a single pdf file** and should include: • A complete and detailed curriculum vitae • A cover letter • The contact of two referees
For the post-doctoral fellow position, additional documents are required:
• The PhD and defense reports • The contact of two referees
The pdf file should be sent to the three supervisors: Chloé Clavel, Beatrice Biancardi and Mathieu Chollet: chloe.clavel@telecom-paris.fr, bbiancardi@cesi.fr, mathieu.chollet@glasgow.ac.uk
Multimodal attention models for assessing and providing feedback on users’ public speaking ability
*Keywords* human-machine interaction, attention models, recurrent neural networks, Social Computing, natural language processing, speech processing, non-verbal behavior processing, multimodality, soft skills, public speaking
*Supervision* Chloé Clavel, Mathieu Chollet, Beatrice Biancardi
*Description* Oral communication skills are essential in many situations and have been identified as core skills of the 21st century. Technological innovations have enabled social skills training applications which hold great training potential: speakers' behaviors can be automatically measured, and machine learning models can be trained to predict public speaking performance from these measurements and subsequently generate personalized feedback to trainees. The REVITALISE project proposes to study explainable machine learning models for the automatic assessment of public speaking and for the automatic production of feedback to public speaking trainees. In particular, the recruited candidate will address the following points: - identify relevant public speaking datasets and prepare them for model training; - propose and implement multimodal machine learning models for public speaking assessment and compare them to existing approaches in terms of predictive performance; - integrate the assessment models into a public speaking training interface to produce feedback, and evaluate the usefulness and acceptability of the produced feedback in a user study. The results of the project will help to advance the state of the art in social signal processing, and will further our understanding of the performance/explainability trade-off of these models.
The compared models will include traditional machine learning models proposed in previous work [Wortwein] and sequential neural approaches (recurrent networks) that integrate attention models, as a continuation of the work done in [Hemamou_a], [Hemamou_b], [Ben-Youssef]. The feedback production interface will extend a system developed in previous work [Chollet21].
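As a rough illustration of the attention-based pooling such sequential models rely on (a minimal NumPy sketch, not the project's actual architecture; all names and shapes are illustrative), additive attention over per-frame behavioral features can be written as:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(frames, w, v):
    """Additive attention pooling over per-frame features.

    frames: (T, d) array of behavioral features (one row per video frame).
    w: (d, h) projection, v: (h,) scoring vector -- both learned in practice.
    Returns the attention-weighted summary vector and the weights.
    """
    scores = np.tanh(frames @ w) @ v   # (T,) unnormalized attention scores
    alpha = softmax(scores)            # attention weights, sum to 1
    return alpha @ frames, alpha       # (d,) summary, (T,) weights

rng = np.random.default_rng(0)
T, d, h = 5, 4, 3
frames = rng.normal(size=(T, d))
summary, alpha = attention_pool(frames, rng.normal(size=(d, h)), rng.normal(size=h))
```

Inspecting the weights `alpha` is one route to the explainability the project targets: high-weight frames indicate which moments of the talk drove the assessment.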
Selected references of the team: [Hemamou_a] L. Hemamou, G. Felhi, V. Vandenbussche, J.-C. Martin, C. Clavel. HireNet: A Hierarchical Attention Model for the Automatic Analysis of Asynchronous Video Job Interviews. In AAAI 2019. [Hemamou_b] L. Hemamou, A. Guillon, J.-C. Martin, C. Clavel. Multimodal Hierarchical Attention Neural Network: Looking for Candidates Behaviour Which Impact Recruiter's Decision. IEEE Transactions on Affective Computing, Sept. 2021. [Ben-Youssef] A. Ben-Youssef, C. Clavel, S. Essid, M. Bilac, M. Chamoux, A. Lim. UE-HRI: A New Dataset for the Study of User Engagement in Spontaneous Human-Robot Interactions. In Proceedings of the 19th ACM International Conference on Multimodal Interaction, pages 464–472. ACM, 2017. [Wortwein] T. Wörtwein, M. Chollet, B. Schauerte, L.-P. Morency, R. Stiefelhagen, S. Scherer. Multimodal Public Speaking Performance Assessment. In Proceedings of the 2015 ACM International Conference on Multimodal Interaction (ICMI '15), pages 43–50. ACM, 2015. [Chollet21] M. Chollet, S. Marsella, S. Scherer. Training Public Speaking with Virtual Social Interactions: Effectiveness of Real-Time Feedback and Delayed Feedback. Journal on Multimodal User Interfaces, 1–13, 2021.
Other references: [TPT] https://www.telecom-paristech.fr/eng/ [IMTA] https://www.imt-atlantique.fr/fr [SSA] http://www.tsi.telecom-paristech.fr/ssa/# [PACCE] https://www.ls2n.fr/equipe/pacce/ [Clavel] https://clavel.wp.imt.fr/publications/ [Chollet] https://matchollet.github.io/ [Biancardi] https://sites.google.com/view/beatricebiancardi -Rasipuram, Sowmya, and Dinesh Babu Jayagopi. 'Automatic multimodal assessment of soft skills in social interactions: a review.' Multimedia Tools and Applications (2020): 1-24. -Sharma, Rahul, Tanaya Guha, and Gaurav Sharma. 'Multichannel attention network for analyzing visual behavior in public speaking.' 2018 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 2018. -Acharyya, R., Das, S., Chattoraj, A., & Tanveer, M. I. (2020, April). FairyTED: A Fair Rating Predictor for TED Talk Data. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 34, No. 01, pp. 338-345).
| |||
6-39 | (2022-06-27) PhD grant @ LIS, Marseille We are seeking a candidate for a PhD thesis in computer science on a collaborative project expected to be funded by the DGA. The thesis will be carried out within the R2I team (Information Retrieval and Interaction) of the Data Science division of the LIS (Marseille).
PhD thesis topic
Title: Automatic generation of fluent summaries of French texts using deep learning
Supervision: Prof. Patrice BELLOT (https://cv.archives-ouvertes.fr/patrice-bellot ; Université d'Aix-Marseille CNRS, LIS), Adrian CHIFU (https://adrianchifu.com ; Université d'Aix-Marseille CNRS, LIS)
Period: October 2022 - September 2025
Keywords: automatic summarization, text fluency, information retrieval, natural language processing, machine learning, neural networks
Context: Collaborative project expected to be supported by the DGA between:
Topic description. Project context: Faced with the exponential growth of data volumes, and particularly of text documentation (manuals, publications, websites, etc.), one solution is to provide easy access to the essential elements through summaries of the texts most relevant to the user's context. To date, however, automatic summaries remain imperfect, in terms of information coverage, of their tendency to create false information, and of their reading fluency, the last of which is the primary target of this thesis. The goal of the RAFFAL project is to improve automatic (AI-based) technologies for summarizing French documents, from the angle of the metrics that govern them, both as objective functions (for machine learning of models) and as measures of human evaluation. Moreover, the new generation of algorithms, models and datasets based on the most recent deep learning technologies (notably Transformer and sequence-to-sequence models) are almost exclusively in English and must be tested and adapted to French. The field of automatic summarization has long faced a lack of sufficiently reliable automatic metrics for evaluating the quality of the summaries produced; this lack of evaluation metrics is a major obstacle to the industrialization and deployment of automatic summarization technologies, for which trust and steering criteria are indispensable. Work plan: The work plan comprises two major parts. The first corresponds to a study of the properties and limits of existing metrics and to their adaptation to French.
The second corresponds to the modification of the objective functions used for training models according to the adapted metrics and to new metrics. The proposed thesis will first tackle the definition of fluency. Existing measures of the fluency and quality of a summary, generally designed for English, will be studied and adapted to French. This involves, for example, revisiting the link between existing measures, the different qualitative dimensions of a summary, and their implementation within a neural architecture, in particular of the sequence-to-sequence type (depth of representations and abstraction levels, attention mechanisms, etc.). The useful linguistic resources and text corpora will have to be identified. Human evaluators may be involved, and we must both study inter-annotator agreement measures and analyze the evaluators' profiles, for example according to their level of knowledge of the summary's topic. An online evaluation could make it possible to identify the points that make reading harder and lead to new metrics, which will in turn influence the dynamic creation of a summary (reinforcement approach, alternative rewriting, information completion via information extraction or semantic annotation). Fluency will be studied as an objective function for optimizing the trade-off between information loss and hallucination phenomena (in collaboration with another thesis carried out in parallel at the ISIR laboratory of Sorbonne Université, Paris). We will study the balance between fluency, on the one hand, and informational quality and completeness, on the other (e.g., the trade-off between precision and recall for search engine results).
This phase will require identifying the essential information and the central textual elements of the texts to be summarized, and may be approached via question-answering systems. Finally, since the fluency of a summary depends on its context, its subjective nature must be studied, in particular by taking into account the types of text (news, position statements, interviews with dialogue, scientific articles, etc.) and the priorities of the summary (coverage of viewpoints and opinions on a subject without losing source identification, factual synthesis around an event, etc.). Each stage will be the subject of experiments on real data and problems, in collaboration with the industrial partner of the project. The thesis's contributions will follow open science principles (publications, data and models where possible, source code).
Candidate profile:
Background: Master 2 in Computer Science, research-oriented, in AI or NLP, or equivalent
Language: French (minimum level C1)
Programming language: Python
Desired knowledge and skills:
- statistical machine learning, neural architectures, transformers
- automatic document classification
- corpus annotation
- natural language processing tools and resources
- language models and textual representations
- automatic summarization, text generation, text simplification
- information retrieval and question answering
| |||
6-40 | (2022-07-07) Two internship positions @ Naver Labs Europe Naver Labs Europe (https://europe.naverlabs.com/) is currently offering 2 internship positions related to Speech Processing.
More details on both job offers can be found here:
| |||
6-41 | (2022-07-17) Two PhD positions at Quality and Usability Lab, Technical University of Berlin, Germany
We are looking to recruit two doctoral researchers to join the Quality and Usability Lab at Technical University of Berlin, Germany. Both positions are research assistant positions (TV-L E13) and, depending on follow-up funding, may be continued until the doctoral thesis is finished. The Quality and Usability Lab is part of TU Berlin's Faculty IV and deals with the design and evaluation of human-machine interaction, in which aspects of human perception, technical systems and the design of interaction are the subject of our research. We focus on self-determined work in an interdisciplinary and international team; for this we offer open and flexible working conditions that promote scientific and personal exchange and are a prerequisite for excellent results.
The research is in the area of the assessment of the quality of speech services using a crowdsourcing approach. The aim of the research is to analyze how crowdsourcing-based listening-only and conversational speech quality evaluation experiments can be set up in order to provide valid and reliable results, and how the characteristics of the test participants, the test environment and the playback system can be assessed in online tests. It will be assessed which differences are to be expected between crowdsourcing and laboratory-based speech quality evaluation, and how these differences influence the development of instrumental speech quality prediction models. The results are expected to influence methods for speech quality assessment in crowdsourcing, as summarized in ITU-T Recommendation P.808. This project is funded by the Deutsche Forschungsgemeinschaft (DFG) and is limited to a duration until January 31, 2024 (compensation TV-L E13). Subsequent continued employment is supported if the PhD cannot be finished within the running time of the project.
The position is open for research in the field of speech signal analysis and the assessment of speech quality in different (mobile and fixed) networks. Speech signals are to be analyzed in listening-only as well as conversational situations in order to get indications of the perceived quality. Based on these analyses, signal-based and parametric models for the estimation of speech quality can be extended and integrated. One focus of the present research may be the evaluation of new speech codecs in different network scenarios. The models are to be validated based on subjective listening and conversation tests.
The initial funding is available from September 1st, 2022 and is limited until April 30th, 2023; however, the outcomes of the research should be used to support the preparation of a new project application, and may also become the foundation for a later PhD thesis. A subsequent position as a research assistant from project funds would be possible if the funds are approved.
2.1 Tasks
2.2. Requirements
Application For both positions, please send the following documents, bundled in a single PDF file, to Prof. Dr.-Ing. Sebastian Möller bewerbung@qu.tu-berlin.de: Letter of application, curriculum vitae, copies of certificates, job references. Please also specify for which position you are applying. To ensure equal opportunities between women and men, applications by women with the required qualifications are explicitly desired. Qualified individuals with disabilities will be favored.
| |||
6-42 | (2022-07-28) PhD position: Naver Labs Europe (France) and FBK Trento (Italy), start Nov 2022 Have you recently completed, or do you expect to complete very soon, an MSc or equivalent degree in computer science, artificial intelligence, computational linguistics, engineering, or a related area? Are you interested in carrying out research on speech-to-speech translation over the next few years? Are you excited to spend part of your life in two pleasant alpine cities in France (Grenoble) and Italy (Trento)?
WE ARE LOOKING FOR YOU!!!
The Machine Translation (MT) group at Fondazione Bruno Kessler (Trento, Italy), in conjunction with Naver Labs Europe (Grenoble, France), is pleased to announce the availability of the following fully-funded PhD position in the Doctorate Program in Industrial Innovation of the University of Trento and Fondazione Bruno Kessler.
PhD topic: Unified Foundation models for Speech-to-Speech Translation
Application deadline: August 23rd.
More details here: http://tinyurl.com/PhD-FBK-NLE
| |||
6-43 | (2022-07-18) PhD in ML/NLP @IMAG, Grenoble, France PhD in ML/NLP – Efficient, fair, robust and knowledge-informed self-supervised learning for speech processing Starting date: November 1st, 2022 (flexible) Application deadline: September 5th, 2022 Interviews (tentative): September 19th, 2022 Salary: ~2000€ gross/month (social security included) Mission: research oriented (teaching possible but not mandatory)
Keywords: speech processing, natural language processing, self-supervised learning, knowledge-informed learning, robustness, fairness
CONTEXT The ANR project E-SSL (Efficient Self-Supervised Learning for Inclusive and Innovative Speech Technologies) will start on November 1st, 2022. Self-supervised learning (SSL) has recently emerged as one of the most promising artificial intelligence (AI) methods, as it is now feasible to take advantage of the colossal amounts of existing unlabeled data to significantly improve the performance of various speech processing tasks.
PROJECT OBJECTIVES Recent SSL models for speech, such as HuBERT or wav2vec 2.0, have shown an impressive impact on downstream task performance. This is mainly due to their ability to benefit from large amounts of data, at the cost of a tremendous carbon footprint, rather than to any improvement in the efficiency of the learning itself. Another issue with SSL models is that their results can be unpredictable once applied to realistic scenarios, which exhibits their lack of robustness. Furthermore, as for any pre-trained models applied in society, it is important to be able to measure the bias of such models, since they can amplify social unfairness.
The goals of this PhD position are threefold: - to design new evaluation metrics for SSL speech models; - to develop knowledge-driven SSL algorithms; - to propose methods for learning robust and unbiased representations.
SSL models are evaluated with downstream task-dependent metrics, e.g., word error rate for speech recognition. This couples the evaluation of the universality of SSL representations to a potentially biased and costly fine-tuning, which also hides the efficiency information related to the pre-training cost. In practice, we will seek to measure training efficiency as the ratio between the amount of data, computation and memory needed to observe a certain gain in performance on a metric of interest, whether downstream-dependent or not. The first step will be to document standard markers that can be used to assess these quantities robustly at training time. Potential candidates are, for instance, floating point operations for computational intensity, number of neural parameters coupled with precision for storage, online measurement of memory consumption for training, and cumulative input sequence length for data.
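A minimal sketch of how such markers might be logged during pre-training and combined into an efficiency ratio (the class, marker names and ratio definition are assumptions for illustration, not the project's final metric):

```python
from dataclasses import dataclass

@dataclass
class EfficiencyLog:
    """Tracks the cost markers named above: floating point operations,
    parameter count, peak memory, and cumulative input sequence length."""
    flops: float = 0.0
    peak_memory_bytes: int = 0
    cum_seq_len: int = 0
    n_params: int = 0

    def update(self, step_flops, mem_bytes, seq_len):
        self.flops += step_flops
        self.peak_memory_bytes = max(self.peak_memory_bytes, mem_bytes)
        self.cum_seq_len += seq_len

def efficiency_ratio(metric_gain, log, per=1e12):
    """Gain on a metric of interest per `per` FLOPs of pre-training."""
    return metric_gain / (log.flops / per)

log = EfficiencyLog(n_params=95_000_000)   # roughly a wav2vec 2.0 BASE-sized model
for _ in range(1000):                       # dummy training loop
    log.update(step_flops=2e9, mem_bytes=8_000_000_000, seq_len=160_000)

# e.g., a 5-point absolute improvement observed after this much compute
ratio = efficiency_ratio(5.0, log)
```

The same ratio can be computed against any of the other markers (memory, cumulative sequence length) to compare pre-training recipes along several cost axes at once.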
Most state-of-the-art SSL models for speech rely on masked prediction, e.g., HuBERT and WavLM, or on contrastive losses, e.g., wav2vec 2.0. Such prevalence in the literature is mostly linked to the size, amount of data and computational resources injected by the companies producing these models. In fact, vanilla masking approaches and contrastive losses may be identified as uninformed solutions, as they do not benefit from in-domain expertise. For instance, it has been demonstrated that blindly masking frames in the input signal, as in HuBERT and WavLM, results in much worse downstream performance than applying unsupervised phonetic boundaries [Yue2021] to generate informed masks. Recently, some studies have demonstrated the superiority of an informed multitask learning strategy, carefully selecting self-supervised pretext tasks with respect to a set of downstream tasks, over the vanilla wav2vec 2.0 contrastive learning loss [Zaiem2022]. In this PhD project, our objectives are: 1. to continue developing knowledge-driven SSL algorithms, reaching higher efficiency ratios at the convergence, data-consumption and downstream-performance levels; and 2. to scale these novel approaches to a point enabling comparison with current state-of-the-art systems, thereby motivating a paradigm change in SSL for the wider speech community.
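To make the contrast concrete, here is a minimal sketch (a hypothetical interface, not the actual HuBERT/WavLM code) of blind span masking versus masks aligned on unsupervised phonetic boundaries, in the spirit of [Yue2021]:

```python
import random

def random_mask(n_frames, span=10, p=0.065, seed=0):
    """Blind masking: pick random start frames, mask fixed-length spans."""
    rng = random.Random(seed)
    masked = set()
    for t in range(n_frames):
        if rng.random() < p:
            masked.update(range(t, min(t + span, n_frames)))
    return masked

def boundary_mask(boundaries, n_frames, p=0.3, seed=0):
    """Informed masking: mask whole phone-like segments delimited by
    (unsupervised) phonetic boundaries, so each masked region covers a
    linguistically coherent unit rather than an arbitrary span."""
    rng = random.Random(seed)
    masked = set()
    segments = list(zip([0] + boundaries, boundaries + [n_frames]))
    for start, end in segments:
        if rng.random() < p:
            masked.update(range(start, end))
    return masked

blind = random_mask(n_frames=100)
# boundaries as might be produced by an unsupervised segmentation model
informed = boundary_mask(boundaries=[12, 30, 47, 80], n_frames=100)
```

In the informed variant, the model must reconstruct entire phone-like units from context, which is the kind of in-domain knowledge the vanilla masking strategies ignore.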
Despite remarkable performance on academic benchmarks, SSL-powered technologies, e.g., speech and speaker recognition, speech synthesis and many others, may exhibit highly unpredictable results once applied to realistic scenarios. This can translate into a global accuracy drop due to a lack of robustness to adversarial acoustic conditions, or into biased and discriminatory behaviors with respect to different pools of end users. Documenting and facilitating the control of such aspects prior to the deployment of SSL models in real life is necessary for the industrial market. To evaluate such aspects within the project, we will create novel robustness regularization and debiasing techniques along two axes: 1. debiasing and regularizing speech representations at the SSL level; 2. debiasing and regularizing downstream-adapted models (e.g., using a pre-trained model).
To ensure the creation of fair and robust SSL pre-trained models, we propose to act both at the optimization and data levels, following some of our previous work on adversarial protected-attribute disentanglement and the NLP literature on data sampling and augmentation [Noé2021]. Here, we wish to extend this technique to more complex SSL architectures and more realistic conditions by increasing the disentanglement complexity, as the sex attribute studied in [Noé2021] is particularly discriminatory. Then, to benefit from the expert knowledge induced by the scope of the task of interest, we will build on a recently introduced task-dependent counterfactual equal-odds criterion [Sari2021] to minimize the downstream performance gap observed between individuals with certain protected attributes and to maximize the overall accuracy. Following this multi-objective optimization scheme, we will then inject further identified constraints, as inspired by previous NLP work [Zhao2017]. Intuitively, constraints are injected so that the predictions are calibrated towards a desired, i.e., unbiased, distribution.
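As a simple illustration of the performance gap such criteria target (a hypothetical helper assuming per-utterance error rates annotated with a protected attribute; not the criterion of [Sari2021] itself):

```python
from collections import defaultdict

def group_performance_gap(errors, groups):
    """Mean error per protected-attribute group and the worst-case gap.

    errors: per-utterance error rates (e.g., WER); groups: the protected
    attribute value of each utterance's speaker. The gap is what an
    equal-odds style constraint would push towards zero, while a second
    objective maximizes overall accuracy.
    """
    per_group = defaultdict(list)
    for e, g in zip(errors, groups):
        per_group[g].append(e)
    means = {g: sum(v) / len(v) for g, v in per_group.items()}
    return means, max(means.values()) - min(means.values())

means, gap = group_performance_gap(
    errors=[0.10, 0.12, 0.20, 0.18],
    groups=["A", "A", "B", "B"],
)
```

In a multi-objective setup, this gap would enter the loss alongside the task metric, so reducing the gap cannot come for free at the expense of overall accuracy.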
SKILLS
SCIENTIFIC ENVIRONMENT
The thesis will be conducted within the Getalp team of the LIG laboratory (https://lig-getalp.imag.fr/) and the LIA laboratory (https://lia.univ-avignon.fr/). The GETALP team and the LIA have strong expertise and track records in natural language processing and speech processing. The recruited person will be welcomed within teams offering a stimulating, multinational and pleasant working environment. The means to carry out the PhD will be provided, both in terms of missions in France and abroad and in terms of equipment. The candidate will have access to the GPU clusters of both the LIG and the LIA. Furthermore, access to the national supercomputer Jean Zay will enable large-scale experiments to be run. The PhD position will be co-supervised by Mickael Rouvier (LIA, Avignon) and by Benjamin Lecouteux and François Portet (Université Grenoble Alpes). Joint meetings are planned on a regular basis, and the student is expected to spend time in both places. Moreover, the PhD student will collaborate with several team members involved in the project, in particular the two other PhD candidates who will be recruited, and with the partners from LIA, LIG and Dauphine Université PSL, Paris. Furthermore, the project will involve one of the founders of SpeechBrain, Titouan Parcollet, with whom the candidate will interact closely.
INSTRUCTIONS FOR APPLYING Applications must contain: CV + letter/message of motivation + master's transcripts + readiness to provide letter(s) of recommendation; and should be addressed to Mickael Rouvier (mickael.rouvier@univ-avignon.fr), Benjamin Lecouteux (benjamin.lecouteux@univ-grenoble-alpes.fr) and François Portet (francois.portet@imag.fr). We celebrate diversity and are committed to creating an inclusive environment for all employees.
REFERENCES: [Noé2021] Noé, P.- G., Mohammadamini, M., Matrouf, D., Parcollet, T., Nautsch, A. & Bonastre, J.- F. Adversarial Disentanglement of Speaker Representation for Attribute-Driven Privacy Preservation in Proc. Interspeech 2021 (2021), 1902–1906. [Sari2021] Sarı, L., Hasegawa-Johnson, M. & Yoo, C. D. Counterfactually Fair Automatic Speech Recognition. IEEE/ACM Transactions on Audio, Speech, and Language Processing 29, 3515–3525 (2021) [Yue2021] Yue, X. & Li, H. Phonetically Motivated Self-Supervised Speech Representation Learning in Proc. Interspeech 2021 (2021), 746–750. [Zaiem2022] Zaiem, S., Parcollet, T. & Essid, S. Pretext Tasks Selection for Multitask Self-Supervised Speech Representation in AAAI, The 2nd Workshop on Self-supervised Learning for Audio and Speech Processing, 2023 (2022). [Zhao2017] Zhao, J., Wang, T., Yatskar, M., Ordonez, V. & Chang, K. - W. Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints in Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing (2017), 2979–2989.
| |||
6-44 | (2022-07-21) Research Opportunity at INESC TEC / LIAAD, Porto, Portugal Funded PhD position, fees covered during the period of the grant
Application deadline: 2 August 2022
|