ISCApad #287 |
Monday, May 09, 2022 by Chris Wellekens |
6-1 | (2021-12-18) Master or PhD internships
Hi, you are in a Master's or PhD programme (in NLP or speech processing) and want to do an internship in 2022 co-supervised by … and …?
This offer is for you! https://tinyurl.com/intern-nle-sb (you can apply online from the web link)
—————
Joint ASR and Repunctuation for Better Machine and Human Readable Transcripts
| |||
6-2 | (2021-12-19) 2 research engineer positions, ALAIA, IRIT, Toulouse, France
To strengthen its team, the joint laboratory ALAIA, dedicated to Artificial-Intelligence-Assisted Language Learning, is offering two research engineer positions (12 months). ALAIA focuses on oral expression and comprehension in a target foreign language (L2). In collaboration with its two partners, academic (IRIT) and industrial (Archean Technologie), as well as experts in language didactics, the missions will consist in designing, developing and integrating innovative services based on the analysis of L2 learners' productions and on the detection and characterisation of errors ranging from the phonetic to the linguistic level. The missions will be refined according to the profiles of the recruited persons. The expected skills concern automatic speech and language processing as well as machine learning methods.
Applications should be sent to Isabelle Ferrané (isabelle.ferrane@irit.fr) and Lionel Fontan (lfontan@archean.tech). Do not hesitate to contact us for further information.
| |||
6-3 | (2021-12-26) Research associate and postdoc at Heriot-Watt University, Edinburgh, UK
1) Research Associate in Safe Conversational AI (re-advertising) Closing date: 9th January 2022
We seek a candidate with experience in neural approaches to natural language generation, or closely related fields, including Vision + Language tasks.
Applicants interested in social computing tasks, such as online abuse detection and mitigation, as well as interdisciplinary candidates with a wider interest in ethical and social implications of NLP are also encouraged to apply.
The opportunity:
This is an exciting opportunity to work with a team developing safer AI methods, bringing together AI researchers, researchers working on formal verification methods, and researchers working on computational law. You will contribute your insight and experience to researching and developing deep learning methods for Conversational AI and closely related areas.
The project is led by Heriot-Watt University in cooperation with the Universities of Edinburgh and Strathclyde, see https://gow.epsrc.ukri.org/NGBOViewGrant.aspx?GrantRef=EP/T026952/1
2) Postdoctoral Research Assistant in children's perceptions of technology
Closing date: 6th January 2022
We are looking for a creative and self-motivated researcher to investigate children's knowledge and perceptions of conversational agents such as Alexa. The position is located at the University of Edinburgh's Moray House School of Education.
The opportunity:
This is an exciting opportunity to work with an interdisciplinary team of computer scientists and social psychologists at three Scottish universities on a project to address gender bias in conversational agents. You will contribute your insight and experience in researching technology with and for children.
The project is led by Heriot-Watt University in cooperation with the Universities of Edinburgh and Strathclyde, see https://gow.epsrc.ukri.org/NGBOViewGrant.aspx?GrantRef=EP/T024771/1
For any enquiries, please get in touch!
Prof. Verena Rieser
Heriot-Watt University, Edinburgh
https://sites.google.com/site/verenateresarieser/
| |||
6-4 | (2022-01-10) Conference coordinator, ISCA. The International Speech Communication Association (ISCA) (www.isca-speech.org) seeks applications for a:
Conference Coordinator (f/m/d)
This is a limited-term contract of 18 hours/week, with the prospect of extension to an unlimited-term contract.
The International Speech Communication Association (ISCA) is a scientific non-profit organization according to the French law 1901. The purpose of the association is to promote, in an international world-wide context, activities and exchanges in all fields related to speech communication science and technology. The association is aimed at all persons and institutions interested in fundamental research and technological development that aims at describing, explaining and reproducing the various aspects of human communication by speech, e.g. phonetics, linguistics, computer speech recognition and synthesis, speech compression, speaker recognition, aids to medical diagnosis of voice pathologies.
One of the core activities of ISCA is to ensure the continuous organization of its flagship conference, Interspeech. The conference is organized each year in a different country by a different team; it typically attracts 1500 or more participants from all over the world. The role of this newly-created position of conference coordinator is to ensure a smooth organization of the conference over the years, according to well-established standards, and taking into account the aims ISCA has with the conference.
The role requires – among other things – taking the lead on the following activities:
Required competences:
We are looking for a self-motivated person who is enthusiastic about the organization of international scientific events, and has excellent organizational and communication skills (mostly in English). The person does not need to have a scientific background in speech communication and technology, but should be able to understand the scientific background, as well as the aims ISCA has with the organization of Interspeech conferences. A proven expertise in the organization of large-scale events is a must, and of scientific events is a plus.
The job can be carried out remotely from any location. A flexible allocation of time over the year is required, depending on the status of preparations (the conference is typically organized in September, and there is an expected increase in activity from March to September). The willingness to physically attend preparatory meetings and the conference is required.
Deadline : 15 March 2022.
| |||
6-5 | (2022-01-28) ASSISTANT OR ASSOCIATE PROFESSOR IN SPEECH AND LANGUAGE TECHNOLOGY (tenure track) at Aalto University, Finland Aalto opens a call for an assistant or associate professor in speech and language technology.
| |||
6-6 | (2022-02-01) Two post-docs at ADAPT, Dublin, Ireland
| |||
6-7 | (2022-02-02) Professor position, Sorbonne Université, Paris, France
A Full Professor (Professeure / Professeur des Universités) position in Artificial Intelligence: theory and applications is open at Sorbonne Université, with a research assignment in one of the following laboratories: ISIR, LIB, LIMICS or LIP6.
Full Professor (Professeure / Professeur des Universités)
Section 27 – Computer Science. Profile: Artificial intelligence: theory and applications. Application deadline: 4 March 2022, 4 pm.
Teaching: The recruited professor will contribute significantly to the Bachelor's programme in computer science, whose needs cover the whole discipline (algorithmics; programming, notably object-oriented, concurrent, functional and web; discrete mathematics; data structures; operating systems; architecture; networks; compilation; databases; etc.), as well as to the Master's programme in computer science, in particular the ANDROIDE, BIM and DAC tracks.
Research: The position is open to all areas of AI and its applications. The successful candidate will join one of the laboratories ISIR, LIB, LIMICS or LIP6, according to their research topics, and/or projects involving several host laboratories within SCAI (Sorbonne Center for Artificial Intelligence). The professor must be able to coordinate national and international collaborative programmes. Past participation of the candidate in multidisciplinary projects will be appreciated.
Link to the position description: https://www.galaxie.enseignementsup-recherche.gouv.fr/ensup/ListesPostesPublies/FIDIS/0755890V/FOPC_0755890V_391.pdf
Link to the recruitment site: https://recrutement.sorbonne-universite.fr/fr/personnels-enseignants-chercheurs-enseignants-chercheurs/enseignants-chercheurs/recrutement-2022-des-enseignantes-chercheuses-et-enseignants-chercheurs.html
Contact at ISIR: Guillaume MOREL, director of ISIR: guillaume.morel(at)sorbonne-universite.fr
Thank you in advance for your help in sharing this offer.
Best regards,
| |||
6-8 | (2022-02-11) 2 research fellowships for collaboration in research activities, Kore University of Enna - Enna (Sicily), Italy
A public selection procedure, based on qualifications and an interview, is open for the award of 2 research fellowships for collaboration in research activities.
Project main aim: Multidisciplinary Research on AI for Health. Location: Kore University of Enna - Enna (Sicily), Italy Funding Programme: Research Projects of National Relevance - PRIN 2017
Description: The project focuses on idiopathic Parkinson’s disease dysarthric speech, produced by speakers of two varieties of Italian that show different segmental (consonantal, vocalic) and prosodic characteristics. The project as a whole aims at: identifying phonetic features that impact on speech intelligibility and accuracy, separating variability due to dysarthria from features due to sociolinguistic variation, and developing perspectives and tools for clinical practice that take variation into account.
Duration: 12 months
Link to apply: https://unikore.it/index.php/it/contratti-di-ricerca/item/41282-d-p-n-33-2022-2-assegni-di-ricerca-presso-l-universita-degli-studi-di-enna-kore
For further information, contact Prof. Sabato Marco Siniscalchi, e-mail: marco.siniscalchi-at-unikore.it
| |||
6-9 | (2022-02-08) PhD position at Delft University, The Netherlands
Job description
One of the most pressing issues holding back robots from taking on more tasks and reaching widespread deployment in society is their limited ability to understand human communication and take situation-appropriate actions. This PhD position is dedicated to addressing this gap by developing the underlying data-driven models that enable a robot to engage with humans in a socially aware manner. The position is specifically targeted at the development of an argumentative dialogue system for human-robot interaction. The PhD candidate will explore how to fuse multimodal behaviour to infer a person's perspective. The candidate will use, and further develop, reinforcement learning techniques in order to drive the robot's argumentative strategy for deliberating topics of current social importance such as global warming or vaccination. The ideal candidate will have a keen interest in speech technology and reinforcement learning, a strong interactive-systems background, and will design and run the experiments to evaluate the created hybrid-AI models through human-robot interaction.
Topics of interest: 1) long-term human-robot interaction, 2) affective computing, 3) NLP & argument mining.
Requirements
| |||
6-10 | (2022-02-17) Postdoctoral position at INRIA, Bordeaux, France Postdoctoral position in Speech Processing at INRIA, Bordeaux, France
Title: Glottal source inverse filtering for the analysis and classification of pathological speech
Keywords: Pathological speech processing, Glottal source estimation, Inverse filtering, Machine learning, Parkinsonian disorders, Respiratory diseases
Contact and supervisor: Khalid Daoudi (khalid.daoudi@inria.fr)
INRIA team: GEOSTAT (geostat.bordeaux.inria.fr)
Duration: 13 months (could be extended)
Starting date: between 01/04/2022 and 01/06/2022 (depending on the candidate's availability)
Application: via https://recrutement.inria.fr/public/classic/en/offres/2022-04481
Salary: 2653€/month (before taxes; net salary 2132€)
Profile: PhD in signal/speech processing (or solid post-thesis experience in the field)
Required knowledge and background: solid knowledge of speech/signal processing; basics of machine learning; programming in Matlab and Python.

Scientific research context
During this century, there has been an ever-increasing interest in the development of objective vocal biomarkers to assist in the diagnosis and monitoring of neurodegenerative diseases and, recently, of respiratory diseases because of the Covid-19 pandemic. The literature is now relatively rich in methods for the objective analysis of dysarthria, a class of motor speech disorders [1], where most of the effort has been devoted to speech impaired by Parkinson's disease. However, relatively few studies have addressed the challenging problem of discriminating between subgroups of Parkinsonian disorders which share similar clinical symptoms, particularly in early disease stages [2]. As for the analysis of speech impaired by respiratory diseases, the field is relatively new (with existing developments in very specialized areas) but has been attracting great attention since the beginning of the pandemic.
The speech production mechanism is essentially governed by five subsystems: respiratory, phonatory, articulatory, nasal and prosodic. In the framework of pathological speech, the phonatory subsystem is the most studied one, usually using sustained phonation (prolonged vowels). Phonatory measurements are generally based on perturbation and/or cepstral features. Though these features are widely used and accepted, they are limited by the fact that the produced speech can be a product of some or all of the other subsystems, which all contribute to the phonatory performance. An appealing way to bypass this problem is to extract the glottal source from speech in order to isolate the phonatory contribution. This framework is known as glottal source inverse filtering (GSIF) [3].
The primary objective of this proposal is to investigate GSIF methods in pathological speech impaired by dysarthria and respiratory deficit. The second objective is to use the resulting glottal parameterizations as inputs to basic machine learning algorithms in order to assist in the discrimination between subgroups of Parkinsonian disorders (Parkinson's disease, Multiple-System Atrophy, Progressive Supranuclear Palsy) and in the monitoring of respiratory diseases (Covid-19, Asthma, COPD). Both objectives benefit from a rich dataset of speech and other biosignals recently collected in the framework of two clinical studies in partnership with university hospitals in Bordeaux and Toulouse (for Parkinsonian disorders) and in Paris (for respiratory diseases).

Work description
GSIF consists in building a model to filter out the effect of the vocal tract and lip radiation from the recorded speech signal. This difficult problem, already hard for healthy speech, becomes more challenging in the case of pathological speech. We will first investigate time-domain methods for the parameterization of the glottal excitation using glottal opening and closure instants. This implies the development of a robust technique to estimate these critical time instants from dysarthric speech. We will then explore the alternative approach of learning a parametric model of the entire glottal flow. Finally, we will investigate frequency-domain methods to determine relationships between different spectral measures and the glottal source. These algorithmic developments will be evaluated and validated using a rich set of biosignals obtained from patients with Parkinsonian disorders and from healthy controls. The biosignals are electroglottography and aerodynamic measurements of oral and nasal airflow as well as intra-oral and sub-glottic pressure. After the GSIF analysis of dysarthric speech, we will study the adaptation/generalization to speech impaired by respiratory deficits. These developments will be evaluated using manual annotations, by an expert phonetician, of speech signals obtained from patients with respiratory deficit and from healthy controls. (A toy inverse-filtering illustration is sketched after this listing.)
The second aspect of the work consists in manipulating machine learning algorithms (LDA, logistic regression, decision trees, SVM, ...) using standard tools (such as scikit-learn). The goal here will be to study the discriminative power of the resulting speech features/measures and their complementarity with other features related to different speech subsystems. The ultimate goal is to conceive robust algorithms to assist, first, in the discrimination between Parkinsonian disorders and, second, in the monitoring of respiratory deficit.

Work synergy
- The postdoc will interact closely with an engineer who is developing an open-source software architecture dedicated to pathological speech processing. The validated algorithms will be implemented in this architecture by the engineer, under the co-supervision of the postdoc.
- Given the multidisciplinary nature of the proposal, the postdoc will interact with the clinicians participating in the two clinical studies.

References:
[1] J. Duffy. Motor Speech Disorders: Substrates, Differential Diagnosis, and Management. Elsevier, 2013.
[2] J. Rusz et al. Speech disorders reflect differing pathophysiology in Parkinson's disease, progressive supranuclear palsy and multiple system atrophy. Journal of Neurology, 262(4), 2015.
[3] P. Alku. Glottal inverse filtering analysis of human voice production – A review of estimation and parameterization methods of the glottal excitation and their applications. Sadhana – Academy Proceedings in Engineering Sciences, vol. 36, part 5, pp. 623-650, 2011.
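For readers curious about the basic idea, here is a minimal sketch (my own illustration, not part of the posting and not the project's method) of crude inverse filtering: estimate an all-pole vocal-tract model by linear prediction and filter the speech through its inverse to obtain a rough glottal excitation. The file name, frame position and LPC order are arbitrary assumptions for the example.

import librosa
import numpy as np
from scipy.signal import lfilter

# Hypothetical mono recording of a sustained vowel, at least ~0.2 s long.
y, sr = librosa.load("voiced_frame.wav", sr=16000)
frame = y[2048:2048 + 512] * np.hanning(512)        # one windowed voiced frame

order = 2 + sr // 1000                              # common rule of thumb for the LPC order
a = librosa.lpc(frame, order=order)                 # all-pole vocal-tract model A(z)
residual = lfilter(a, [1.0], frame)                 # inverse filtering -> rough glottal excitation

# A leaky integration step roughly compensates lip radiation; real GSIF methods
# (e.g., IAIF) iterate and refine these vocal-tract and glottal estimates.
glottal_flow = lfilter([1.0], [1.0, -0.99], residual)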
| |||
6-11 | (2022-02-15) CIFRE PhD thesis, L3i La Rochelle – EasyChain
| |||
6-12 | (2022-04-05) Junior professor position at Université du Mans, France
Université du Mans is opening a Junior Professor Chair (Chaire Professeur Junior) in multimodal language processing.
Applications are open on Galaxie and must be submitted before 2 May 2022.
Research project
The main objective is to develop a multimodal and multilingual language-processing AI that relies on a common representation space for the speech and text modalities in different languages. The candidate will develop their research activities so as to strengthen the cross-cutting nature of these representations through a relevant combination of modalities (e.g., video and text, or text and speech), of tasks (e.g., speaker characterisation and speech synthesis, speech understanding and machine translation, speech recognition and automatic summarisation) and of languages. Their research will aim to develop automatic systems that place the human at the heart of processing, using active-learning approaches and exploring explainability and interpretability issues, so that a naive user can teach the automatic system or extract understandable elements from it. The project will also aim to strengthen existing collaborations (Facebook, Orange, Airbus) or to create new partnerships (Oracle, HuggingFace, ...). The research project should fit the goals of the LST team, which aims at developing a multimodal and multilingual representation space for speech and text; the Junior Professor is expected to develop their own research directions between the topics already existing in the team, to build hybrid approaches (for instance mixing speaker characterisation and speech synthesis, or speech translation and speech understanding), to follow the team strategy of involving the human in the loop for deep learning systems, and to work towards better explainability/interpretability of speech processing algorithms.
Teaching project
The candidate will join the teaching team of the Master in Artificial Intelligence of the Computer Science department of the Faculty of Science and Technology (UFR Sciences et Techniques) of Université du Mans. Their involvement will reinforce skills in deep learning (self-supervised learning, GANs, Transformers, methodologies and protocols for AI, ...) but also in infrastructures dedicated to machine learning and data science (distributed computing, SLURM, MPI), in the use of a compute cluster (ssh, tmux, jupyter-lab, conda) and in cloud computing. Building on recognised expertise in natural language and speech processing, the teaching team wishes to broaden its programme by adapting course content to other types of data (images, time series generated by various kinds of sensors, graphs, ...) in order to meet the specific machine-learning needs of local and regional industry. In the medium term, the candidate will contribute to the teaching team's aim of developing apprenticeship and continuing education in artificial intelligence, in partnership with industry but also for an academic audience of researchers and faculty who are not computer scientists and wish to develop machine-learning skills.
Application requirements
Hold a PhD. For candidates exercising, or having ceased to exercise for less than eighteen months, a teacher-researcher function of a level equivalent to that of the position to be filled, in a higher-education institution of a State other than France: titles, works and any element allowing the level of that function to be assessed, in order to grant a doctorate waiver.
Contact
Antoine LAURENT Antoine.laurent@univ-lemans.fr
Anthony LARCHER Anthony.larcher@univ-lemans.fr
| |||
6-13 | (2022-03-17) PhD position on multimodal deep fake detection, IRISA, Lannion, France
The EXPRESSION team of IRISA is calling for applications for a PhD position in the field of multimodal deep fake detection.
The details of the offer are available under the title: MUDEEFA - MUltimodal DeEEp Fake detection using Text-To-Speech Synthesis, Voice Conversion and Lips Reading. The position requires a Master's degree in computer science or an engineering degree conferring the equivalent of a Master's in computer science. The thesis will take place in Lannion (Côtes d'Armor), within the EXPRESSION team. Please send a detailed CV, a cover letter, one or more reference letters, and the academic transcripts of the previous degree (Master or equivalent engineering degree) to all the contacts indicated in the topic description before Friday 8 April 2022 (strict deadline).
Best regards,
Arnaud Delhay-Lorrain
Arnaud Delhay-Lorrain - Associate Professor
IRISA - Université de Rennes 1
IUT de Lannion - Département Informatique
Rue Edouard Branly - BP 30219
F-22 302 LANNION Cedex
| |||
6-14 | (2022-03-18) 3 speech-to-speech translation positions available at Meta/Facebook FAIR
We are seeking research scientists, research engineers and postdoctoral researchers with expertise in speech translation and related fields to join our team.
FAIR's mission is to advance the state of the art in artificial intelligence through open research for the benefit of all. As part of this mission, our goal is to provide real-time, natural-sounding translations at near-human quality. The technology we develop will enable multilingual live communication. We aim for our technology to be inclusive: it should support both written and unwritten languages. Finally, in order to preserve the authenticity of the original content, especially for more creative content, we aim to preserve non-lexical elements in the generated audio translations. Ideal candidates will have expertise in speech translation or related fields such as speech recognition, machine translation or speech synthesis. Please send an email with a CV to juancarabina@fb.com if you are interested in applying.
| |||
6-15 | (2022-03-21) PhD or postdoc position at Laboratoire d'Informatique de Grenoble, France
PhD or postdoctoral topic within the Popcorn project (a collaborative project with two companies),
supervised by Benjamin Lecouteux, Gilles Sérasset and Didier Schwab (Laboratoire d'Informatique de Grenoble, Groupe d'Étude en Traduction Automatique/Traitement Automatisé des Langues et de la Parole)
Title: Peuplement OPérationnel de bases de COnnaissances et Réseaux Neuronaux (operational population of knowledge bases with neural networks)
The project addresses the semi-automated enrichment of a knowledge base through the automatic analysis of texts. In order to achieve breakthrough innovation in Natural Language Processing (NLP) for security and defence customers, the project focuses on processing French (although the chosen approaches will later be generalisable to other languages). The work will cover different aspects:
● automatic annotation of text documents by detecting mentions of entities present in the knowledge base and their semantic disambiguation (polysemy, homonymy);
● discovery of new entities (people, organisations, equipment, events, places), of their attributes (a person's age, an equipment reference number, etc.), and of relations between entities (a person works for an organisation, people are involved in an event, ...). Particular attention will be paid to adapting flexibly to evolutions of the ontology and to the role of the user and the analyst in validating and capitalising on the extractions.
The project is organised around the following three research axes (a minimal illustration of the entity-recognition step is sketched after this list):
● generation of synthetic textual data from reference texts;
● recognition of entities of interest, of the associated attributes and of the relations between entities;
● semantic disambiguation of entities (e.g., in the case of homonymy).
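Purely as an illustration of the entity-recognition building block mentioned above (my own sketch, not the project's system), an off-the-shelf French NER model can be run with spaCy; the model name is a standard spaCy distribution and the example sentence is invented.

import spacy

# Requires: pip install spacy && python -m spacy download fr_core_news_md
nlp = spacy.load("fr_core_news_md")
doc = nlp("Jean Dupont travaille pour Thales à Toulouse depuis 2019.")

for ent in doc.ents:
    # Prints detected mentions with coarse labels such as PER, ORG, LOC.
    print(ent.text, ent.label_)

# Linking these mentions to entries of an existing knowledge base (disambiguation)
# and extracting relations between them are the open problems targeted by the project.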
Desired profile:
- solid experience in programming and machine learning for Natural Language Processing, notably deep learning;
- Master's/PhD in machine learning or computer science; an NLP or computational-linguistics component will be appreciated;
- good knowledge of French.
Practical details:
- the thesis starts in autumn 2022;
- full-time doctoral contract at LIG (Getalp team) for 3 years (salary: min. 1768€ gross per month);
- or full-time postdoctoral contract at LIG (Getalp team) for 20 months (salary: min. 2395€ gross per month).
Scientific environment:
How to apply?
Applications must include: CV + cover letter + Master's transcripts + reference letter(s), and should be sent to Benjamin Lecouteux (benjamin.lecouteux@univ-grenoble-alpes.fr), Gilles Sérasset (gilles.serasset@univ-grenoble-alpes.fr) and Didier Schwab (Didier.Schwab@univ-grenoble-alpes.fr)
| |||
6-16 | (2022-04-04) PhD position at INRIA-LORIA, Nancy, France
2022-04676 - PhD Position F/M: Non-Gaussian models for deep learning based audio signal processing
Level of qualifications required: Graduate degree or equivalent
Function: PhD Position

Context
The PhD student will join the Multispeech team of Inria, the largest French research group in the field of speech processing. He/she will benefit from the research environment and the expertise in audio signal processing and machine learning of the team, which includes many researchers, PhD students, post-docs, and software engineers working in this field. He/she will be supervised by Emmanuel Vincent (Senior Researcher, Inria) and Paul Magron (Researcher, Inria).

Assignment
Audio signal processing and machine listening systems have achieved considerable progress over the past years, notably thanks to the advent of deep learning. Such systems usually process a time-frequency representation of the data, such as a magnitude spectrogram, and model its structure using a deep neural network (DNN). Generally speaking, these systems implicitly rely on the local Gaussian model [1], an elementary statistical model for the data. Even though it is convenient to manipulate, this model builds upon several hypotheses which are limiting in practice: (i) circular symmetry, which boils down to discarding the phase information (i.e., the argument of the complex-valued time-frequency coefficients); (ii) independence of the coefficients, which ignores the inherent structure of audio signals (temporal dynamics, frequency dependencies); and (iii) Gaussian density, which is not observed in practice. (A toy numerical illustration of the local Gaussian model is sketched after this listing.) Statistical audio signal modeling is an active research field. However, recent advances in this field are usually not leveraged in deep learning-based approaches, so their potential is currently underexploited. Besides, some of these advances are not mature enough to be fully deployed yet. Therefore, the objective of this PhD is to design advanced statistical signal models for audio which overcome the limitations of the local Gaussian model, while combining them with DNN-based spectrogram modeling. The developed approaches will be applied to audio source separation and speech enhancement.

Main activities
The main objectives of the PhD student will be:
1. To develop structured statistical models for audio signals which alleviate the limitations of the local Gaussian model. In particular, the PhD student will focus on designing models that leverage properties originating from signal analysis, such as temporal continuity [2] or consistency of the representation [3], in order to favor interpretability and meaningfulness of the models. For instance, alpha-stable distributions have been exploited in audio for their robustness [4]. Anisotropic models are an interesting research direction since they overcome the circular-symmetry assumption while enabling an interpretable parametrization of the statistical moments [5]. Finally, a careful design of the covariance matrix allows for explicitly incorporating time and frequency dependencies [6].
2. To combine these statistical models with DNNs. This raises several technical difficulties regarding the design of, e.g., the neural architecture, the loss function, and the inference algorithm. The student will exploit and adapt the formalism developed in Bayesian deep learning, notably the variational autoencoding framework [7], as well as the inference procedures developed in DNN-free non-Gaussian models [8].
3. To validate these methods experimentally on realistic sound datasets. To that end, the PhD student will use public datasets such as LibriMix (speech) and MUSDB (music), which are reference datasets for source separation and speech enhancement.
The PhD student will disseminate his/her research results in international peer-reviewed journals and conferences. In order to promote reproducible research, these publications will be self-archived at each step of the publication lifecycle and made accessible through open access repositories (e.g., arXiv, HAL). The code will be integrated into Asteroid, the reference software for source separation and speech enhancement developed by Multispeech.

Bibliography
[1] E. Vincent, M. Jafari, S. Abdallah, M. Plumbley, M. Davies, Probabilistic modeling paradigms for audio source separation, Machine Audition: Principles, Algorithms and Systems, pp. 162-185, 2010.
[2] T. Virtanen, Monaural sound source separation by nonnegative matrix factorization with temporal continuity and sparseness criteria, IEEE/ACM Transactions on Audio, Speech and Language Processing, vol. 15, no. 3, pp. 1066-1074, 2007.
[3] J. Le Roux, N. Ono, S. Sagayama, Explicit consistency constraints for STFT spectrograms and their application to phase reconstruction, Proc. SAPA, 2008.
[4] S. Leglaive, U. Şimşekli, A. Liutkus, R. Badeau and G. Richard, Alpha-stable multichannel audio source separation, Proc. IEEE ICASSP, 2017.
[5] P. Magron, R. Badeau, B. David, Phase-dependent anisotropic Gaussian model for audio source separation, Proc. IEEE ICASSP, 2017.
[6] M. Pariente, Implicit and explicit phase modeling in deep learning-based source separation, PhD thesis, Université de Lorraine, 2021.
[7] L. Girin, S. Leglaive, X. Bie, J. Diard, T. Hueber, X. Alameda-Pineda, Dynamical variational autoencoders: A comprehensive review, Foundations and Trends in Machine Learning, vol. 15, no. 1-2, 2021.
[8] P. Magron, T. Virtanen, Complex ISNMF: a phase-aware model for monaural audio source separation, IEEE/ACM Transactions on Audio, Speech and Language Processing, vol. 27, no. 1, pp. 20-31, 2019.

Skills
Master or engineering degree in computer science, data science, signal processing, or machine learning. Professional capacity in English (spoken, read, and written). Some programming experience in Python and in some deep learning framework (e.g., PyTorch). Previous experience and/or interest in speech and audio processing is a plus.

General information
Theme/Domain: Language, Speech and Audio
Town/city: Villers-lès-Nancy
Inria Center: CRI Nancy - Grand Est
Starting date: 2022-10-01
Duration of contract: 3 years
Deadline to apply: 2022-05-02

Contacts
Inria Team: MULTISPEECH
PhD Supervisor: Paul Magron / paul.magron@inria.fr

Remuneration
Salary: 1982€ gross/month for the 1st and 2nd years, 2085€ gross/month for the 3rd year. Monthly salary after taxes: around 1594€ for the 1st and 2nd years, 1677€ for the 3rd year.

Benefits package
Subsidised meals; partial reimbursement of public transport; 7 weeks of annual leave + 10 days of RTT (full-time basis) + possibility of exceptional leave (e.g., sick children, moving house); possibility of teleworking (after 6 months of employment) and flexible working hours; professional equipment available (videoconferencing, loan of computer equipment, etc.); social, cultural and sports benefits (Association de gestion des œuvres sociales d'Inria); access to vocational training; social security.

About Inria
Inria is the French national research institute dedicated to digital science and technology. It employs 2,600 people. Its 200 agile project teams, generally run jointly with academic partners, include more than 3,500 scientists and engineers working to meet the challenges of digital technology, often at the interface with other disciplines. The Institute also employs numerous talents in over forty different professions. 900 research support staff contribute to the preparation and development of scientific and entrepreneurial projects that have a worldwide impact.

The keys to success
Upload your complete application data. Applications will be assessed on a rolling basis, so it is advised to apply as soon as possible.

Instruction to apply
Defence Security: This position is likely to be situated in a restricted area (ZRR), as defined in Decree No. 2011-1425 relating to the protection of national scientific and technical potential (PPST). Authorisation to enter such an area is granted by the director of the unit, following a favourable Ministerial decision, as defined in the decree of 3 July 2012 relating to the PPST. An unfavourable Ministerial decision in respect of a position situated in a ZRR would result in the cancellation of the appointment.
Recruitment Policy: As part of its diversity policy, all Inria positions are accessible to people with disabilities.
Warning: you must enter your e-mail address in order to save your application to Inria. Applications must be submitted online on the Inria website. Processing of applications sent through other channels is not guaranteed.
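As a toy numerical illustration of the local Gaussian model mentioned in the assignment (my own sketch, not part of the posting): under circularly-symmetric, independent complex Gaussian time-frequency coefficients, the MMSE source estimate is the Wiener filter built from the source variances. The file names are placeholders and oracle variances are used for simplicity.

import numpy as np
import librosa

s1, sr = librosa.load("speech.wav", sr=16000)   # hypothetical source 1
s2, _  = librosa.load("noise.wav",  sr=16000)   # hypothetical source 2
n = min(len(s1), len(s2))
mix = s1[:n] + s2[:n]

X  = librosa.stft(mix, n_fft=1024, hop_length=256)
V1 = np.abs(librosa.stft(s1[:n], n_fft=1024, hop_length=256)) ** 2  # oracle variance of source 1
V2 = np.abs(librosa.stft(s2[:n], n_fft=1024, hop_length=256)) ** 2  # oracle variance of source 2

mask = V1 / (V1 + V2 + 1e-12)                    # Wiener gain: real-valued, phase left untouched
s1_hat = librosa.istft(mask * X, hop_length=256, length=n)
# The limitations listed in the assignment are visible here: the mask ignores the
# phase and any dependency between neighbouring time-frequency bins.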
| |||
6-17 | (2022-04-06) Ph.D. thesis position and post-doc position at Loria-INRIA, Nancy, France
Ph.D. thesis position and post-doc position at Loria-INRIA, Multispeech team, Nancy (France)
Multimodal automatic hate speech detection
https://jobs.inria.fr/public/classic/fr/offres/2022-04660
https://team.inria.fr/multispeech/fr/category/job-offers/
Hate speech expresses antisocial behavior. In many countries, online hate speech is punishable by law. Manual analysis and moderation of such content are impossible at scale, so an effective solution to this problem is the automatic detection of hateful comments. Until now, only text documents have been used for hate speech detection. We would like to advance knowledge about hate speech detection by exploring a new type of document: audio documents. We would like to develop a new methodology to automatically detect hate speech, based on machine learning and deep neural networks using both text and audio (a minimal multimodal-fusion sketch is given below).
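As a minimal sketch of the kind of text + audio modelling involved (my own illustration, not the project's model), the snippet below fuses a text embedding and an audio embedding for binary hate-speech classification; the embedding extractors are left abstract and the dimensions are arbitrary assumptions.

import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    def __init__(self, text_dim=768, audio_dim=1024, hidden=256):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden)
        self.audio_proj = nn.Linear(audio_dim, hidden)
        self.head = nn.Sequential(
            nn.ReLU(), nn.Linear(2 * hidden, 64), nn.ReLU(), nn.Linear(64, 2)
        )

    def forward(self, text_emb, audio_emb):
        # Concatenate the two projected modalities, then classify (hate / not hate).
        fused = torch.cat([self.text_proj(text_emb), self.audio_proj(audio_emb)], dim=-1)
        return self.head(fused)

# Usage with random stand-ins for, e.g., a BERT sentence embedding and a
# wav2vec-style utterance embedding:
model = LateFusionClassifier()
logits = model(torch.randn(4, 768), torch.randn(4, 1024))   # (batch, 2) class scores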
Required skills: The candidate should have theoretical knowledge and moderate practical experience of deep learning, including good Python skills and an understanding of deep learning libraries such as PyTorch. Knowledge of NLP or signal processing will be helpful.
Supervisors:
Irina Illina, Associate Professor, HDR, Université de Lorraine
Dominique Fohr, Senior Researcher, CNRS
https://members.loria.fr/IIllina/ illina@loria.fr
https://members.loria.fr/DFohr/ dominique.fohr@loria.fr
MULTISPEECH is a joint research team between the Université de Lorraine, Inria, and CNRS. It is part of department D4 “Natural language and knowledge processing” of LORIA. Its research focuses on speech processing, with particular emphasis on multisource (source separation, robust speech recognition), multilingual (computer-assisted language learning), and multimodal aspects.
| |||
6-18 | (2022-04-07) Fula-French translator positions (M/F), ELDA, Paris, France
| |||
6-19 | (2022-04-07) Fula-French transcriber positions (M/F), ELDA, Paris, France
| |||
6-20 | (2022-04-07) Doctoral contract at the Collegium Musicæ, Sorbonne Université, Paris, France
The Collegium Musicæ of Sorbonne Université offers a doctoral contract on vocal style: Analysis of vocal style by performative synthesis - Project leaders: Christophe d'Alessandro and Céline Chabot-Canet. The purpose of this thesis is the study of vocal style through the analysis-by-synthesis paradigm. Details can be found here (go to the Collegium Musicæ tab of the page): https://www.sorbonne-universite.fr/projets-proposes-en-2022-programme-instituts-et-initiatives Contact: Christophe d'Alessandro: christophe.dalessandro@sorbonne-universite.fr
| |||
6-21 | (2022-04-08) Postdocs at IMT Atlantique, Brest, France
The RAMBO team of IMT Atlantique, in collaboration with Smart Macadam, is looking for candidates for two 18-24 month postdocs based in Nantes on the following topics: automatic speech processing / artificial intelligence (18-month contract) and artificial intelligence (18-month contract).
Mihai ANDRIES
Lecturer-researcher, RAMBO team, IMT Atlantique, Brest, France
| |||
6-22 | (2022-04-10) PhD thesis position, LaBRI, Bordeaux, France
Vocal biomarkers collected through conversational agents for diagnosis assistance and follow-up of sleep and mental disorders
https://emploi.cnrs.fr/Offres/Doctorant/UMR5800-MAGHIN-017/Default.aspx
The SANPSY and LaBRI teams have demonstrated their ability to identify new vocal biomarkers to measure excessive daytime sleepiness, both subjectively and objectively, in patients suffering from sleep disorders [1]-[5]. SANPSY demonstrated the validity of autonomous numeric solutions (i.e. smartphone-based virtual agents) to diagnose sleep/mental disorders in the general population [6]-[10]. We now plan to develop new virtual agents collecting biomarkers (i.e. from speech) in our cohorts of healthy subjects and patients for diagnosis, treatment and follow-up (ADDICTAQUI, KANOPEE and AUTONOMHEALTH (PEPR) projects).
The PhD thesis project “Vocal biomarkers collected through conversational agents for diagnosis assistance and follow-up of sleep and mental disorders” relies on 4 stages:
1) Developing new virtual agents to collect vocal markers. The objective is to design new scenarios targeting behavioral interventions to improve fatigue, mood and excessive daytime sleepiness. Moreover, the scenarios will be designed so that the agent interacts with the subject in order to engage a discussion (spontaneous speech). This will lead to more ecological conditions that should increase acceptability.
2) Switching from high-quality controlled recordings made at the hospital to in-the-field unsupervised recordings using smartphones. Our current vocal biomarkers are defined using a reading task and high-quality microphones. The new interaction scenarios from stage 1 will lead us to record spontaneous speech with smartphone microphones. This stage will tackle the differences in recording conditions and their impact on our feature extraction pipeline.
3) Verifying the relevance of the existing vocal markers when used with the new data and proposing new features that could be used as high-level biomarkers, such as lexical, syntactic and semantic cues. Our features will have to be adapted to the versatile nature of spontaneous discourse, which is a completely different speaking style from read speech. Spontaneous speech will however provide additional cues that can serve as high-level biomarkers.
4) Studying the sensitivity and specificity of the selected biomarkers for the diagnosis and follow-up of symptoms and disorders with respect to other medical measures. This final part of the PhD project will be addressed jointly by LaBRI and SANPSY and includes the clinical validation of the proposed approaches. (A toy cross-validation sketch for this kind of biomarker screening appears at the end of this listing.)
References:
[1] V. P. Martin, G. Chapouthier, M. Rieant, J.-L. Rouas, and P. Philip, 'Using reading mistakes as features for sleepiness detection in speech', in 10th International Conference on Speech Prosody 2020, Tokyo, Japan, May 2020, pp. 985-989. [Online]. Available: https://hal.archives-ouvertes.fr/hal-02495149
[2] V. P. Martin, J.-L. Rouas, J.-A. Micoulaud-Franchi, and P. Philip, 'The objective and subjective sleepiness voice corpora', in 12th Language Resources and Evaluation Conference, Marseille, France, May 2020, pp. 6525-6533. [Online]. Available: https://hal.archives-ouvertes.fr/hal-02489433
[3] V. P. Martin, J.-L. Rouas, and P. Philip, 'Détection de la somnolence dans la voix : nouveaux marqueurs et nouvelles stratégies', Trait. Autom. Lang., vol. 61, no. 2, p. 24, 2020.
[4] V. P. Martin, J.-L. Rouas, F. Boyer, and P. Philip, 'Automatic Speech Recognition Systems Errors for Objective Sleepiness Detection Through Voice', in Interspeech 2021, Aug. 2021, pp. 2476-2480. doi: 10.21437/Interspeech.2021-291.
[5] V. P. Martin, J.-L. Rouas, J.-A. Micoulaud-Franchi, P. Philip, and J. Krajewski, 'How to Design a Relevant Corpus for Sleepiness Detection Through Voice?', Front. Digit. Health, vol. 3, p. 124, 2021, doi: 10.3389/fdgth.2021.686068.
[6] L. Dupuy, J.-A. Micoulaud-Franchi, and P. Philip, 'Acceptance of virtual agents in a homecare context: Evaluation of excessive daytime sleepiness in apneic patients during interventions by continuous positive airway pressure (CPAP) providers', J. Sleep Res., p. e13094, 2020, doi: 10.1111/jsr.13094.
[7] L. Dupuy et al., 'Smartphone-based virtual agents and insomnia management: A proof-of-concept study for new methods of autonomous screening and management of insomnia symptoms in the general population', J. Sleep Res., p. e13489, Sep. 2021, doi: 10.1111/jsr.13489.
[8] P. Philip et al., 'Trust and acceptance of a virtual psychiatric interview between embodied conversational agents and outpatients', Npj Digit. Med., vol. 3, no. 1, Jan. 2020, doi: 10.1038/s41746-019-0213-y.
[9] P. Philip et al., 'Virtual human as a new diagnostic tool, a proof of concept study in the field of major depressive disorders', Sci. Rep., vol. 7, 2017.
[10] P. Philip, S. Bioulac, A. Sauteraud, C. Chaufton, and J. Olive, 'Could a virtual human be used to explore excessive daytime sleepiness in patients?', Presence Teleoperators Virtual Environ., vol. 23, no. 4, pp. 369-376, 2014.
Work environment:
The PhD student will be hosted at LaBRI in the Image and Sound (I&S) department, with frequent visits to SANPSY, where he/she will interact with the clinicians and the designers of the virtual agents. The I&S department conducts research in acquisition, processing, analysis, modeling, synthesis and interaction of audiovisual media. It works on the entire acquisition chain, from data collection to information extraction or restitution of digital data, with the user at the center of the chain. The spectrum of manipulated data is very wide: 2D and 3D images, video, speech, music, 3D data, EEG, physiological data, etc. The different steps of the processing chain integrate modeling phases for analysis or synthesis. The targeted application domains are health, medicine, education, gaming, etc.
The SANPSY unit has recognized expertise in sleep restriction studies and in the evaluation of countermeasures to sleep deprivation. The team also specializes in sleep disorders, especially the diagnosis and treatment of obstructive sleep apnea. The SANPSY unit is located on the neuro-psychopharmacological research platform (PRNPP). This platform is recognized nationally and internationally for its expertise in clinical research, simulation and virtual reality; it was labeled IBISA in 2015. In 2011, SANPSY obtained an EquipEx project (PHENOVIRT) aimed at improving phenotyping using simulation and virtual-reality technologies. As part of this project, SANPSY has notably initiated the development of Embodied Conversational Agents (virtual doctors and patients). Several scenarios for the diagnosis of drowsiness, depression and addiction to tobacco and alcohol have already been developed and tested in patients.
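For illustration only (my own sketch, not the project's pipeline): stage 4's sensitivity/specificity question is often approached first through cross-validated classification of candidate features, here with scikit-learn and ROC-AUC as a summary score. The feature matrix and labels are simulated placeholders.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 20))                 # 120 recordings x 20 voice features (simulated)
y = rng.integers(0, 2, size=120)               # 0 = control, 1 = patient (simulated)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"mean ROC-AUC: {auc.mean():.2f} (+/- {auc.std():.2f})")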
Jean-Luc ROUAS CNRS Researcher Bordeaux Computer Science Research Laboratory (LaBRI) 351 Cours de la libération - 33405 Talence Cedex - France T. +33 (0) 5 40 00 35 28 www.labri.fr/~rouas
| |||
6-23 | (2022-04-15) PhD or postdoc position at LIG, Grenoble, France
Application deadline: 30 April 2022
PhD or postdoctoral topic within the Popcorn project (a collaborative project with two companies),
supervised by Benjamin Lecouteux, Gilles Sérasset and Didier Schwab (Laboratoire d'Informatique de Grenoble, Groupe d'Étude en Traduction Automatique/Traitement Automatisé des Langues et de la Parole)
Title: Peuplement OPérationnel de bases de COnnaissances et Réseaux Neuronaux (operational population of knowledge bases with neural networks)
The project addresses the semi-automated enrichment of a knowledge base through the automatic analysis of texts. In order to achieve breakthrough innovation in Natural Language Processing (NLP) for security and defence customers, the project focuses on processing French (although the chosen approaches will later be generalisable to other languages). The work will cover different aspects:
● automatic annotation of text documents by detecting mentions of entities present in the knowledge base and their semantic disambiguation (polysemy, homonymy);
● discovery of new entities (people, organisations, equipment, events, places), of their attributes (a person's age, an equipment reference number, etc.), and of relations between entities (a person works for an organisation, people are involved in an event, ...). Particular attention will be paid to adapting flexibly to evolutions of the ontology and to the role of the user and the analyst in validating and capitalising on the extractions.
The project is organised around the following three research axes:
● generation of synthetic textual data from reference texts;
● recognition of entities of interest, of the associated attributes and of the relations between entities;
● semantic disambiguation of entities (e.g., in the case of homonymy).
Desired profile:
- solid experience in programming and machine learning for Natural Language Processing, notably deep learning;
- Master's/PhD in machine learning or computer science; an NLP or computational-linguistics component will be appreciated;
- good knowledge of French.
Practical details:
- the thesis starts in autumn 2022;
- full-time doctoral contract at LIG (Getalp team) for 3 years (salary: min. 1768€ gross per month);
- or full-time postdoctoral contract at LIG (Getalp team) for 20 months (salary: min. 2395€ gross per month).
Scientific environment:
How to apply?
Applications must include: CV + cover letter + Master's transcripts + reference letter(s), and should be sent to Benjamin Lecouteux (benjamin.lecouteux@univ-grenoble-alpes.fr), Gilles Sérasset (gilles.serasset@univ-grenoble-alpes.fr) and Didier Schwab (Didier.Schwab@univ-grenoble-alpes.fr)
| |||
6-24 | (2022-04-18) PhD studentship at the University of Edinburgh,UK Hi all,
Here is an offer for a PhD studentship in modelling the articulation of spoken utterances at the University of Edinburgh.
The PhD work will be to implement computational models of human speech articulation planning. It will involve the development of software for testing theoretical assumptions, along with tests of the software's output. The work combines phonetic and phonological aspects, speech technology and motor control theory with programming and software development.
The deadline to apply is 15th May 2022.
Information about eligibility and the application process may be found at:
https://www.ed.ac.uk/ppls/linguistics-and-english-language/prospective/postgraduate/funding-research-students/erc-phd-studentship-articulation-spoken-utterances
Contacts:
- Alice Turk: a.turk@ed.ac.uk
- Benjamin Elie: benjamin.elie@ed.ac.uk
| |||
6-25 | (2022-04-15) PhD at Orange, France
Orange is recruiting a PhD student on the topic 'Deep learning for joint processing of natural language and knowledge'. The objective of the thesis is to propose solutions for sharing the processing of natural-language understanding and generation tasks. It will study the progressive fusion of various tasks mixing natural language and formal language(s) for the representation or manipulation of knowledge. The application context will first be isolated utterances, then human-machine dialogues where the dialogue history must be taken into account.
Details and applications via Orange Jobs: https://orange.jobs/jobs/offer.do?joid=111967&lang=FR
| |||
6-26 | (2022-04-17) PhD at University of Zurich, Switzerland Human knowledge is inherently multi-modal, and it is more than just a collection of isolated pieces of information, irrespective of the form of expression. Instead, it emerges from the interconnectedness of all of these information fragments. Knowledge graphs are a powerful way of capturing such interconnected knowledge. Such graphs are effective for storing and relating information that can easily be expressed in textual form, by assigning a simple text label to every node in a graph or relating them to literals represented using strings or blobs. However, they so far fail to capture the richness of information that is not easily expressed as a short piece of text.
| |||
6-27 | (2022-04-19) Postdoctoral Position at Columbia University in The City of New York, NY, USA Postdoctoral Position - Machine Learning and Digital Twins, Columbia University in The City of New York.
| |||
6-28 | (2022-04-20) PhD grant at the University of Edinburgh, UK
The University of Edinburgh is looking for a PhD candidate to work on modelling the articulation of spoken utterances as part of Alice Turk's Advanced ERC grant. The 4-year PhD studentship at the University of Edinburgh will be fully funded by the grant. We are looking for candidates with good programming skills and an interest in speech analysis and modelling:
| |||
6-29 | (2022-04-23) Doctoral position: Acoustic-to-articulatory inversion using dynamic MRI images, INRIA, Nancy, France
Doctoral position: Acoustic-to-articulatory inversion using dynamic MRI images.
Loria, the “Lorraine Research Laboratory in Computer Science and its Applications”, is a research unit common to CNRS, the Université de Lorraine and INRIA. Loria gathers 450 scientists and its missions mainly deal with fundamental and applied research in computer science, especially in the MultiSpeech team, which focuses on automatic speech processing, audiovisual speech and speech production. IADI is a research unit common to Inserm and the Université de Lorraine whose specialty is developing various techniques and methods to improve imaging of moving organs via the acquisition of MR images.
This PhD project, funded by LUE (Lorraine Université d'Excellence), associates the Multispeech team and the IADI laboratory.
Start date is (expected to be) 1st October 2022 or as soon as possible thereafter.
Supervisors Yves Laprie, email yves.laprie@loria.fr Pierre-André Vuissoz, email pa.vuissoz@chru-nancy.fr
The project
Articulatory synthesis mimics the speech production process by first generating the shape of the vocal tract from the sequence of phonemes to be pronounced, then the acoustic signal by solving the aeroacoustic equations. Compared to other approaches to speech synthesis, which offer a very high level of quality, its main interest is to control the whole production process, beyond the acoustic signal alone. The objective of this PhD is to achieve the inverse transformation, called acoustic-to-articulatory inversion, in order to recover the geometric shape of the vocal tract from the acoustic signal. A simple voice recording will allow the dynamics of the different articulators to be followed during the production of a sentence. Beyond its interest as a scientific challenge, acoustic-to-articulatory inversion has many potential applications. On its own, it can be used as a diagnostic tool to evaluate articulatory gestures in an educational or medical context.
Description of work
The objective is the inversion of the acoustic signal to recover the temporal evolution of the medio-sagittal slice. Dynamic MRI provides two-dimensional images in the medio-sagittal plane at 50 Hz with very good quality, and the speech signal acquired with an optical microphone can be very efficiently processed with the algorithms developed in the MultiSpeech team (examples available on https://artspeech.loria.fr/resources/). We plan to use corpora already acquired or in the process of being acquired. These corpora represent a very large volume of data (several hundreds of thousands of images), and an approach for tracking the contours of articulators in MRI images, which gives very good results, was developed to process them. The automatically tracked contours can therefore be used to train the inversion. The goal is to perform the inversion using an LSTM approach on data from a small number of speakers for which sufficient data exist (a minimal model sketch is given below). This approach will have to be adapted to the nature of the data and be able to identify the contribution of each articulator. In itself, a successful inversion recovering the shape of the vocal tract in the medio-sagittal plane will be a remarkable result, since current results only cover a very small part of the vocal tract (a few points on its front part). However, it is important to be able to transpose this result to any subject, which raises the question of speaker adaptation, the second objective of the PhD.
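As a minimal sketch of the kind of LSTM-based inversion mentioned above (my own illustration, not the thesis code), the model below maps a sequence of acoustic frames to frame-synchronous articulatory contour coordinates; the feature and contour dimensions are invented for the example.

import torch
import torch.nn as nn

class AcousticToArticulatory(nn.Module):
    def __init__(self, n_acoustic=80, n_contour_points=200, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(n_acoustic, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, 2 * n_contour_points)  # (x, y) per contour point

    def forward(self, acoustic):                 # acoustic: (batch, frames, n_acoustic)
        h, _ = self.lstm(acoustic)
        return self.out(h)                       # (batch, frames, 2 * n_contour_points)

model = AcousticToArticulatory()
dummy = torch.randn(2, 150, 80)                  # 2 utterances, 150 acoustic frames each
pred = model(dummy)                              # predicted contours, one set per frame
loss = nn.functional.mse_loss(pred, torch.randn_like(pred))  # regression loss on contours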
What we offer
Supervisors Yves Laprie, email yves.laprie@loria.fr Pierre-André Vuissoz, email pa.vuissoz@chru-nancy.fr
Application: Your application, including all attachments, must be in English and submitted electronically via Inria's recruitment system. Please include:
log into Inria’s recruitment system (https://jobs.inria.fr/public/classic/en/offres/2022-04654) in order to apply to this position.
| |||
6-30 | (2022-04-26) Position of University Assistants (prae doc), University of Vienna, Austria
| |||
6-31 | (2022-04-30) 3 PhD fellowships at the University of Copenhagen, Denmark
3 PhD fellowships in applied Machine Learning, Information Retrieval and Natural Language Processing. The Information Retrieval Lab of the Department of Computer Science at the University of Copenhagen (DIKU) is offering 3 fully funded PhD fellowships in applied machine learning, information retrieval, and natural language processing, commencing 1 September 2022 or as soon as possible thereafter.
The fellows will conduct research, having as starting point the following broad research areas:
The deadline for applications is 19 May 2022, 23:59 GMT +2.
| |||
6-32 | (2022-05-06) PhD students, postdoctoral researchers and R&D engineers at Télécom Paris, Palaiseau, France
We have multiple openings for PhD students, postdoctoral researchers and R&D engineers at Télécom Paris, Institut Polytechnique de Paris, in the “Signal, Statistics and Learning” (S2A) team.
All positions are located at Telecom Paris, 19 place Marguerite Perey, 91120 Palaiseau, France.
Start of the positions: October/November 2022 (for PhDs/Engineer), January 2023 for PostDoc
Subject: The positions will be a part of the ERC Advanced (2022) – HI-Audio (Hybrid and Interpretable Deep neural audio machines) project, which aims at building hybrid deep approaches combining parameter-efficient and interpretable models with modern resource-efficient deep neural architectures with applications in speech/audio scene analysis, music information retrieval and sound transformation and synthesis.
The potential topics include (but are not limited to):
- deep generative models, adversarial learning
- attention-based models and curriculum learning
- statistical/deterministic audio models (signal models, sound propagation models, …)
- Music Information Retrieval software platform development (R&D Engineer position)
Candidate profile:
- For the PhD positions: a master's degree in applied mathematics, data science/computer science or speech/audio/music processing is required.
- For the postdoc position: a PhD degree and publications in the theory or applications of machine learning, generative modelling, discrete optimal transport or signal processing, ideally with applications to speech/audio/music signals.
- Master internship positions will also be open in early 2023.
Télécom Paris, and the S2A team:
The S2A team gathers 18 permanent faculties covering a wide variety of research topics including Statistics, Probabilistic modeling, Machine learning, Data science, Audio and social signal processing. On the overall, Télécom Paris’ research counts 19 research teams and covers various domains in computer science and networks, applied mathematics, electronics, image, data, signals and economic and social sciences. Télécom Paris (https://www.telecom-paris.fr/en/home) is a member of IMT (Institut Mines-Télécom), and is a founding member of the Institut Polytechnique de Paris (IP Paris, https://www.ip-paris.fr/en), a world-class scientific and technological institution which is a partnership between five prestigious French engineering schools with HEC as a key partner.
Application: In your application, please send a resume and a motivation letter (plus full transcripts of grades for PhD/Engineer positions) to Gaël Richard, firstname.lastname@telecom-paris.fr. At least one reference letter will be requested in a second step.
|