ISCApad #313
Saturday, July 06, 2024 by Chris Wellekens
6-1 | (2024-01-06) PhD Research Assistant for Multimodal Fake-News and Disinformation Detection @ DFKI Berlin, Germany
The German Research Center for Artificial Intelligence (DFKI) has operated as a non-profit public-private partnership (PPP) since 1988. DFKI combines scientific excellence and commercially oriented value creation with social awareness and is recognized as a major 'Center of Excellence' by the international scientific community. As Germany's biggest public and independent organisation dedicated to AI research and development, DFKI has pursued the goal of human-centric AI for more than 30 years. Its research is committed to essential, future-oriented areas of application and socially relevant topics.
We are looking for a highly motivated research assistant to work on a project focused on fake-news and disinformation detection from speech and multimedia data. Content authenticity verification of speech, combined with other modalities such as text, visuals or metadata, will be a central part of the work; explainable AI (xAI) and bias analysis are also highly relevant to the position. The successful candidate will work closely with high-impact partners in this field, e.g. Technical University of Berlin, RBB (Berlin TV and news broadcaster), Deutsche Welle (Germany's international broadcaster), and 5 other partners. Responsibilities will include developing and testing different AI/NLP techniques, analyzing the performance of machine learning models in the context of practical fake-news and disinformation detection for journalists, and communicating project progress and results to relevant stakeholders. The position offers opportunities for pursuing a doctorate and publishing research results in scientific journals and at conferences.
Qualified candidates will have a completed university degree in (technical) computer science or computational linguistics, excellent programming skills in Python, and a strong background in machine learning/AI and signal processing or NLP.
Previous experience in fake-news detection or in spoofing/authenticity detection of multimedia data is an advantage. DFKI offers an agile, lively, international and interdisciplinary environment for self-determined work. If you are interested in contributing to cutting-edge research and working with a dynamic team, please apply! More details and link: https://jobs.dfki.de/en/vacancy/researcher-m-f-d-547585.html
Application deadline: Jan 23, 2024.
For questions, please don't hesitate to contact tim.polzehl@dfki.de
6-2 | (2024-01-07) PhD position @ Laboratoire Bordelais de Recherche en Informatique (LaBRI), Talence, France
In the framework of the PEPR Santé numérique 'Autonom-Health' project (Health, behaviors and autonomous digital technologies), the speech and language research group at the Computer Science Lab in Bordeaux, France (LaBRI) and the LORIA (Nancy, France) are looking for candidates for a fully funded PhD position (36 months). The 'Autonom-Health' project is a collaborative project on digital health between SANPSY, LaBRI, LORIA, ISIR and LIRIS. The abstract of the 'Autonom-Health' project can be found at the end of this announcement.
The missions addressed by the successful candidate will be drawn from the following tasks, according to their profile:
- Data collection tasks:
  - Definition of scenarios for collecting spontaneous speech using Social Interactive Agents (SIAs)
  - Collection of patient/doctor interactions during clinical interviews
- ASR-related tasks:
  - Evaluate and improve the performance of our end2end ESPNET-based ASR system on French real-world spontaneous data recorded from healthy subjects and patients
  - Adaptation of the ASR system to the clinical-interview domain
  - Automatic phonetic transcription / alignment using end2end architectures
  - Adapting ASR transcripts for use with the semantic analysis tools developed at LORIA
- Speech analysis tasks:
  - Analysis of vocal biomarkers for different diseases: adaptation of our biomarkers defined for sleepiness, research on new biomarkers targeted at specific diseases
The position is hosted at LaBRI but, depending on the candidate's profile, close collaboration is expected with the LORIA teams 'Multispeech' (contact: Emmanuel Vincent emmanuel.vincent@inria.fr) and/or 'Sémagramme' (contact: Maxime Amblard maxime.amblard@loria.fr).
Gross salary: approx. 2044 €/month
Starting date: October 2024
Required qualifications: Master in Signal processing / speech analysis / computer science Skills: Python programming, statistical learning (machine learning, deep learning), automatic signal/speech processing, excellent command of French (interactions with French patients and clinicians), good level of scientific English.
Know-how: Familiarity with the ESPNET toolbox and/or deep learning frameworks, knowledge of automatic speech processing system design.
Social skills: good ability to integrate into multi-disciplinary teams, ability to communicate with non-experts.
Applications: To apply, please send by email to jean-luc.rouas@labri.fr a single PDF file containing a full CV, a cover letter (describing your personal qualifications, research interests and motivation for applying), contact information for two referees, and academic certificates (Master's and Bachelor's).
Abstract of the 'Autonom-Health' project: Western populations face an increase in longevity, which mechanically increases the number of chronic-disease patients to manage. Current healthcare strategies will not make it possible to maintain a high level of care at a controlled cost in the future, and e-health can optimize the management and costs of our healthcare systems. Healthy behaviors contribute to the prevention and optimized management of chronic diseases, but their implementation is still a major challenge. Digital technologies could help their implementation through numeric behavioral medicine programs developed in complement to (and not substitution of) existing care, in order to focus human interventions on the most severe cases demanding medical attention. To do so, however, we need to develop digital technologies that are: i) Ecological (related to real-life, real-time behavior of individuals and to social/environmental constraints); ii) Preventive (from healthy subjects to patients); iii) Personalized (at initiation and adapted over the course of treatment); iv) Longitudinal (implemented over long periods of time); v) Interoperable (multiscale, multimodal and high-frequency); vi) Highly acceptable (protecting users' privacy and generating trust).
The above-mentioned challenges will be addressed through the following specific goals:
Goal 1: Implement large-scale diagnostic evaluations (clinical and biomarkers) and behavioral interventions (physical activity, sleep hygiene, nutrition, therapeutic education, cognitive behavioral therapies...) on healthy subjects and chronic-disease patients. This will require new autonomous digital technologies (i.e. virtual Socially Interactive Agents (SIAs), smartphones, wearable sensors).
Goal 2: Optimize clinical phenotyping by collecting and analyzing non-intrusive data (i.e. voice, geolocation, body motion, smartphone footprints...) which will potentially complement clinical data and biomarker data from patient cohorts.
Goal 3: Better understand the psychological, economic and socio-cultural factors driving acceptance of and engagement with the autonomous digital technologies and the proposed numeric behavioral interventions.
Goal 4: Improve the interaction modalities of digital technologies to personalize and optimize the long-term engagement of users.
Goal 5: Organize large-scale data collection, storage and interoperability with existing and new data sets (i.e. biobanks, hospital patient cohorts and epidemiological cohorts) to generate future multidimensional predictive models for diagnosis and treatment.
Each goal will be addressed by expert teams through complementary work packages developed sequentially or in parallel. A first modeling phase (based on development and experimental testing) will be performed through this project. A second phase, funded via ANR calls, will allow the recruitment of new teams for a large-scale testing phase. This project will rely on population-based interventions in existing numeric cohorts (i.e. KANOPEE) where virtual agents interact with patients at home on a regular basis.
Pilot hospital departments will also be involved in data management, supervised by information and decision systems coordinating autonomous digital cognitive-behavioral interventions based on our virtual agents. The global solution, based on empathic human-computer interactions, will help target, diagnose and treat subjects suffering from dysfunctional behaviors (e.g. sleep deprivation, substance use...) as well as sleep and mental disorders. The expected benefits of such a solution are increased adherence to treatment, strong self-empowerment to improve autonomy and, finally, a reduction of long-term risks for the subjects and patients using the system. Our program should massively improve healthcare systems and allow strong technological transfer to information systems / digital health companies and the pharma industry.
6-3 | (2024-01-11) PhD position in Language Sciences @ LPL, Aix-Marseille University, France
We are looking for a PhD student to work on the PROSOLANG project (“using PROSOdy to improve foreign LANGuage learning”) in the fields of phonetics, language teaching and psycholinguistics. The position is at the Laboratoire Parole et Langage of Aix Marseille Université in Aix-en-Provence. The project aims to provide computer-based cognitive training programs in the context of English phonetics classes to help francophone learners to overcome the difficulties they have with the perception of non-native melodic cues. The person working on this project will be responsible for designing, carrying out and testing the training programs as well as disseminating the results.
This 3-year position is fully funded by the A*midex foundation. The PhD student will be co-supervised by Amandine Michelas and Sophie Herment and will closely interact with Sophie Dufour as well as the S2S research team and the prosody group of the Laboratoire Parole et Langage. The planned start date is 1 September 2024.
Applicants must hold a master’s degree in English, language sciences, cognitive science or psychology or a related discipline at the beginning of the PhD. The candidate must have a strong and documented interest in phonetics and language teaching. Previous experience with English teaching is advantageous but is not a prerequisite. Previous experience in scientific research in a laboratory is also a plus. Fluency in oral and written English and French is required.
Interested candidates are requested to submit, by March 31, 2024 at the latest, an application including: a letter of motivation, a curriculum vitae (including e-mail address and contact telephone number), and any other relevant documents, all in a single PDF file sent to the following address: amandine.michelas@univ-amu.fr.
After an initial assessment of applications based on the application file, a sub-set of candidates will be selected for a second phase involving a selection interview. Interviews will take place during the month of April 2024. Successful candidates will be notified by e-mail and/or telephone. For any questions or further information, please email amandine.michelas@univ-amu.fr.
6-4 | (2024-01-15) Full-time (100%) Research Assistant / Ph.D. Student position, Bielefeld University, Germany The Social Cognitive Systems Group at Bielefeld University is seeking applications for a
** Full-time (100%) Research Assistant / Ph.D. Student position **
to work in a newly established project on multimodal creativity in AI-based co-speech gesture
generation. The project is part of a newly established Collaborative Research Center (CRC
1646) on “Linguistic Creativity in Communication” funded by the German Research Foundation
(DFG) for 4 years. The goal of the project is to investigate how co-speech gestures are employed
to support both speaker and listener when new linguistic constructions are invented to solve
a challenging situation in communication (e.g. referring to an entity for which no conventionalized
term is available and ordinary language productivity does not suffice). It will be carried out by the
Social Cognitive Systems Group (Prof. Stefan Kopp) in collaboration with the Psycholinguistics
Group (Prof. Joana Cholin) at Bielefeld University, and will encompass both experimental studies
with human speakers as well as the development of computational models (using machine learning
techniques) of speech-gesture use in such situations.
The announced position will be working for the computational part under supervision of Prof.
Stefan Kopp. The main task will be to extend the currently popular data-based accounts that predict
gestures from (prosodic and textual) information in a given speech input, to models that are able
to generate novel gestures that (1) meet communicative demands that are not met by the given
simultaneous speech or (2) mark and support the use of non-conventionalized creative language.
We will build on the group’s long-standing previous work on cognitive and linguistic models of
speech-gesture generation, as well as deep machine learning-based accounts of speech-driven
gesture synthesis. In addition, the research assistant/PhD student will carry out interdisciplinary
work with the psycholinguistic part of the project.
The duration of the position is about 3.5 years (until the end of 2027). Salary is at 100% of the TV-L E13 scale
(about 4,000 EUR per month before taxes, depending on relevant work experience).
Bielefeld is the vibrant center of the region of East Westphalia and Germany’s greenest big city
with a lot of cultural, entertainment, and recreational opportunities. It is located in the center of
Germany, surrounded by beautiful forests, and connected to Germany’s high-speed rail system.
Bielefeld University is a strongly research-oriented university with more than 20,000 students and
a renowned commitment to interdisciplinary research. It hosts major research centers such as the
Center for Cognitive Interaction Technology (CITEC) or the Center for Interdisciplinary Research
(ZiF).
The application deadline is 25 January, but later applications will be considered until the
position has been filled.
If you are interested in learning more about the position, please get in touch with Stefan Kopp.
For information on how to apply please refer to:
6-5 | (2024-01-20) Internship in ANR Project «REVITALISE», Telecom-Paris, France: Automatic speech analysis of public talks.
Description. Today, important aspects of human activity such as information exchange depend not only on so-called hard skills but also on soft skills. One such important skill is public speaking. Like many forms of interaction between people, the assessment of public speaking depends on many, often subjectively perceived, factors. The goal of our project is to create an automatic system that can take these different factors into account and evaluate the quality of the performance. This requires understanding which elements can be assessed objectively and which vary depending on the listener [Hemamou, Wortwein, Chollet21]. Such an analysis must consider public speaking at various levels: high-level (audio, video, text), intermediate (voice monotony, auto-gestures, speech structure, etc.) and low-level (fundamental frequency, action units, POS tags, etc.) [Barkar].
This internship offers an opportunity to analyze the audio component of a public speech. The student is asked to solve two main problems. The engineering task is to create an automatic speech transcription system that detects speech disfluencies; to this end, the student will collect a bibliography on the topic and design an engineering solution. The second, research task is to use audio cues to automatically analyze the success of a talk. This internship will give you the opportunity to solve an engineering problem as well as to learn about research approaches. By the end you will have expertise in audio processing and in machine learning methods for multimodal analysis. If the internship is successfully completed, an article may be published. PhD funding on Social Computing will be available in the team (at INRIA) at the end of the internship.
Registration & Organisation. Organization: Institut Polytechnique de Paris, Telecom-Paris. Website: https://www.telecom-paris.fr. Department: IDS/LTCI. Address: Palaiseau, France.
Supervision. Supervision will include weekly meetings with the main supervisor and regular meetings (every 2-3 weeks) with co-supervisors. Supervisor: Alisa Barkar. Co-supervisors: Chloé Clavel, Mathieu Chollet, Béatrice Biancardi. Contact: alisa.barkar@telecom-paris.fr.
Duration & Planning. The internship is planned as a 5-6 month full-time internship for the spring semester 2024. The 6 months correspond to 24 weeks covering the following activities:
● ACTIVITY 1 (A1): Problem description and integration into the working environment
● ACTIVITY 2 (A2): Bibliography overview
● ACTIVITY 3 (A3): Implementation of automatic transcription with disfluency detection
● ACTIVITY 4 (A4): Evaluation of the automatic transcription
● ACTIVITY 5 (A5): Application of the developed methods to the existing data
● ACTIVITY 6 (A6): Analysis of the importance of para-verbal features for performance perception
● ACTIVITY 7 (A7): Writing the report
Selected references of the team.
1. [Hemamou] L. Hemamou, G. Felhi, V. Vandenbussche, J.-C. Martin, C. Clavel. HireNet: a Hierarchical Attention Model for the Automatic Analysis of Asynchronous Video Job Interviews. In AAAI 2019.
2. [Ben-Youssef] Atef Ben-Youssef, Chloé Clavel, Slim Essid, Miriam Bilac, Marine Chamoux, and Angelica Lim. UE-HRI: a new dataset for the study of user engagement in spontaneous human-robot interactions. In Proceedings of the 19th ACM International Conference on Multimodal Interaction, pages 464–472. ACM, 2017.
3. [Wortwein] Torsten Wörtwein, Mathieu Chollet, Boris Schauerte, Louis-Philippe Morency, Rainer Stiefelhagen, and Stefan Scherer. 2015. Multimodal Public Speaking Performance Assessment. In Proceedings of the 2015 ACM International Conference on Multimodal Interaction (ICMI '15). ACM, New York, NY, USA, 43–50.
4. [Chollet21] Chollet, M., Marsella, S., & Scherer, S. (2021). Training public speaking with virtual social interactions: effectiveness of real-time feedback and delayed feedback. Journal on Multimodal User Interfaces, 1-13.
5. [Barkar] Alisa Barkar, Mathieu Chollet, Beatrice Biancardi, and Chloe Clavel. 2023. Insights Into the Importance of Linguistic Textual Features on the Persuasiveness of Public Speaking. In Companion Publication of the 25th International Conference on Multimodal Interaction (ICMI '23 Companion). ACM, New York, NY, USA, 51–55. https://doi.org/10.1145/3610661.3617161
Other references.
1. Dinkar, T., Vasilescu, I., Pelachaud, C. and Clavel, C. 2020. How confident are you? Exploring the role of fillers in the automatic prediction of a speaker's confidence. In ICASSP 2020, pp. 8104-8108. IEEE.
2. Radford, A. et al. Whisper: Robust Speech Recognition via Large-Scale Weak Supervision. 2022. https://arxiv.org/abs/2212.04356
3. Romana, Amrit and Kazuhito Koishida. “Toward A Multimodal Approach for Disfluency Detection and Categorization.” ICASSP 2023: 1-5.
4. Radhakrishnan, Srijith et al. “Whispering LLaMA: A Cross-Modal Generative Error Correction Framework for Speech Recognition.” arXiv abs/2310.06434 (2023).
5. Wu, Xiao-lan et al. “Explanations for Automatic Speech Recognition.” ICASSP 2023: 1-5.
6. Min, Zeping and Jinbo Wang. “Exploring the Integration of Large Language Models into Automatic Speech Recognition Systems: An Empirical Study.” arXiv abs/2307.06530 (2023).
7. Ouhnini, Ahmed et al. “Towards an Automatic Speech-to-Text Transcription System: Amazigh Language.” International Journal of Advanced Computer Science and Applications (2023).
8. Bigi, Brigitte. “SPPAS: a tool for the phonetic segmentation of Speech.” (2023).
9. Rekesh, Dima et al. “Fast Conformer with Linearly Scalable Attention for Efficient Speech Recognition.” arXiv abs/2305.05084 (2023).
10. Arisoy, Ebru et al. “Bidirectional recurrent neural network language models for automatic speech recognition.” ICASSP 2015: 5421-5425.
11. Padmanabhan, Jayashree and Melvin Johnson. “Machine Learning in Automatic Speech Recognition: A Survey.” IETE Technical Review 32 (2015): 240-251.
12. Berard, Alexandre et al. “End-to-End Automatic Speech Translation of Audiobooks.” ICASSP 2018: 6224-6228.
13. Kheir, Yassine El et al. “Automatic Pronunciation Assessment - A Review.” arXiv abs/2310.13974 (2023).
6-6 | (2024-01-20) Professor of Computer Science, GETALP, Université de Grenoble, France. A full professor (PR) position in computer science (section 27) will open in 2024 at the Université Grenoble
6-7 | (2024-01-24) Investigator (M/F) specialized in audio analysis, with expertise in automatic speech and audio processing, BEA, Paris, France
Bureau d'Enquêtes et d'Analyses pour la sécurité de l'aviation civile (BEA). Position to be filled: Investigator (M/F) specialized in audio analysis, with expertise in automatic speech and audio processing. Job title: Specialized 'AUDIO' investigator. Level sought: PhD / M2 / engineering degree. Location: BEA, Le Bourget Airport
In France, the BEA is the authority responsible for safety investigations in civil aviation. It also takes part in many investigations conducted abroad. The sole objective of a safety investigation is to prevent accidents and incidents in civil aviation. It includes the collection and analysis of information, the statement of conclusions, including the determination of causes and/or contributing factors and, where appropriate, the issuing of safety recommendations. Created in 1946, the BEA reports to the ministry in charge of transport.
Description of the proposed mission. As part of the exploitation of factual information, we are looking for an investigator specialized in audio analysis with strong expertise in automatic speech and audio processing. Your mission will combine participation in safety investigations and the development of tools for the BEA's audio laboratory. Within the Technical Department, as a specialized investigator, you will take part in the BEA's safety investigations following accidents and incidents involving civil aircraft in France or abroad, specializing in the exploitation of data from audio and video recordings. Drawing on your complementary skills, you will also be in charge of maintaining the IT resources and of leading existing and new development projects for the specific needs of the audio analysis laboratory. These needs involve maintaining and establishing collaborations with academic and industrial partners on topics such as automatic speech transcription (French and English), audio segmentation, objective evaluation of audio quality, low-level speech analysis and automatic identification of transients.
Your main activities will be:
- Readout of flight recorders ('black boxes') and leading listening sessions,
- Acoustic analysis of audio recordings, including the identification of acoustic signatures and comparison with sound samples,
- Transcription of communications for safety investigations,
- Writing technical documents and contributing to investigation reports,
- Enriching and promoting the laboratory's audio database,
- Contributing to the maintenance and evolution of tools for detecting and locating underwater acoustic beacons,
- Maintaining and improving the IT tools of the audio analysis laboratory,
- Leading the audio laboratory's scientific network,
- Writing and managing research projects in collaboration with academic partners.
You will take part in the analysis of CVR (Cockpit Voice Recorder) recordings in major investigations in France and abroad. As such, you must be rigorous and discreet, and strictly apply the BEA's confidentiality rules. You will be expected to become autonomous quickly, so as to become one of the department's technical referents in audio data analysis. You will also take part in industrial consultations on the development of audio-video analysis tools dedicated to investigation bureaus. You may occasionally have to travel at short notice, in France and abroad, in the event of an accident.
Ideal profile: Graduate of a Master's (Bac+5) or doctoral (Bac+8) program, with a specialization and experience in modern automatic speech and audio processing techniques.
You have the following skills:
- Speech processing and voice analysis,
- Management of projects connected with academic research,
- Programming (Python, object-oriented languages or similar) and machine learning/AI,
- Signal processing, data filtering,
- Spectral analysis using dedicated software,
- Basic knowledge of digital and analog electronics,
- Writing skills in English and French,
- Hearing acuity suited to the position,
- Fluent English (read, written and spoken).
You will be supported in acquiring some of these skills through training. You are dynamic, rigorous, curious and creative. You want to contribute to developing the skills and technical resources of the BEA's audio analysis laboratory. You can work in a team, with good interpersonal and adaptation skills enabling you to lead activities in an international context. Your analytical and synthesis skills allow you to communicate your results effectively.
Applications: Please send your application, consisting of your CV, a cover letter and any supporting documents (recommendations), by email to: recrutement-tec@bea.aero
6-8 | (2024-01-21) Lecturer (maître de conférences) at IRIT, Université Toulouse 3 - Paul Sabatier, France. An MCF position (CNU section 27) will be published on Galaxie in the coming days.
The IRIS research team at IRIT encourages applications and invites interested candidates to present their work at a seminar early this year.
The profile is close to:
- Research: large language models (LLMs), information production and access
- Teaching: algorithmics, programming, databases (see the national curriculum)
This MCF27 position is attached to Université Toulouse 3 - Paul Sabatier:
- Research: IRIT, Data Management department, IRIS team (other teams are also associated)
- Teaching: IUT, computer science department (where I teach)
IRIT and the IUT are a 10-minute walk apart, on a green campus bordered by the Canal du Midi and connected to the city by metro. For more information, contact Guillaume Cabanac.
6-9 | (2024-02-01) Several academic positions @ LORIA, Nancy, France. Seven MCF and two PR positions in section 27 are open at the Université de Lorraine, attached to the Loria. Among the targeted topics, automatic speech and language processing plays a major role: in research via the teams of the Loria's D4 department, and in teaching via the IDMC Master's in NLP and the future Bachelor-level NLP programme at IDMC, opening at the start of the 2024 academic year. Candidates are strongly encouraged to contact the laboratory and the teaching departments.
6-10 | (2024-02-02) Four-year funded PhD studentships at the University of Edinburgh, UK Four-year funded PhD studentships in Designing Responsible Natural Language Processing at the University of Edinburgh
The UKRI AI Centre for Doctoral Training (CDT) in Responsible and Trustworthy in-the-world Natural Language Processing (NLP) is inviting applications for fully-funded PhD studentships starting in September 2024 for our new Designing Responsible NLP integrated PhD training programme.
Natural Language Processing (NLP) is an area of AI operating at the intersections of computer science, linguistics, and interaction design that has rapidly jumped from the research lab to routine deployment in-the-world. Mature NLP systems offer powerful capabilities to create new products, services, and interactive experiences grounded in natural language, and underpin much of the current excitement around generative AI. However, they also bring significant challenges to responsible and trustworthy design, adoption and deployment.
Our students will gain the skills, knowledge and experience to study and design real-world applications of NLP that are responsible and trustworthy by design, in a highly interdisciplinary training environment hosted by the new Edinburgh Futures Institute. The training programme brings together world leading researchers at the University of Edinburgh in informatics, design, linguistics, speech science, psychology, law, philosophy, information science, and digital humanities, who will supervise students and guide them in their training and learning.
The CDT will be seeking to fund up to 12 studentships to start next academic year. We are looking for applicants with a background in or related to: • Computer science, informatics and artificial intelligence • Design, human computer interaction and human centred computing • Language, linguistics and speech sciences • Law, governance and regulation • Digital Humanities and Information Science
These are just indicative, and we are interested in applicants who come from any background or discipline with relevant skills and expertise that connect to our five Training Areas. Our ambition is to recruit a diverse cohort of students coming from different disciplines and backgrounds, who are excited by the prospect of working with each other and on real-world applications of NLP.
The deadline for applications is midnight (GMT) 11th March 2024.
To find out more information on the programme, funding available and its benefits take a look at the CDT website here: https://www.responsiblenlp.org/
Detail on how to apply can be found here: https://www.responsiblenlp.org/application-documents/
You can also register for our applicant webinars on the 12th and 13th of February here: https://www.responsiblenlp.org/applicant-webinars/ - more dates will likely be added for later in February as well.
6-11 | (2024-02-05) Four postdoctoral positions @ Inria, Paris. Within the speech team of Inria Défense & Sécurité, we are offering four postdoctoral/junior-researcher positions in speech processing. Links are given at the end of this message.
The core of the team is located at the Inria Paris premises (near the Gare de Lyon, due to relocate shortly to Place d'Italie).
It is nevertheless possible to be attached to one of the various Inria centres across the country where the team is already present.
The work is carried out in collaboration with the French Ministry of the Armed Forces, around the theme of information processing for intelligence.
It is based on public data, in an open-science spirit that allows easy publication of results.
Do not hesitate to contact me for more information on Inria Défense & Sécurité, on the speech team and/or on the positions offered.
Best regards,
JF Bonastre
PS1: each offer is posted in two versions to match candidates' levels of seniority. Further offers are planned between now and September.
PS2: Several PhD offers will be announced shortly, either in collaboration with other academic teams or directly within Inria Défense & Sécurité (do not hesitate to contact me now).
PS3: Master's internship placements are also possible.
--
Jean-François BONASTRE, Research Director, Inria Défense & Sécurité; associate member of the LIA and University Professor, Avignon Université; honorary member of the IUF. Tel: +33/0 490843514 @jfbonastre
6-12 | (2024-02-10) Lecturer in AI (maître de conférences), LIUM, Le Mans University, France. A maître de conférences position at Le Mans University. **Teaching** --
6-13 | (2024-02-15) 2 Postdoctoral Researchers in Multimodal Interaction @ Trinity College Dublin, Ireland. 2 Postdoctoral Researchers in Multimodal Interaction wanted at Trinity College Dublin, Ireland.
Two postdoc positions are available in the lab of Prof. Naomi Harte at Trinity College Dublin in Ireland. The positions are both in multimodal interaction and are part of a larger project, a multidisciplinary exploration of speech-based interaction. One postdoc is focussed on understanding the nature of multimodal interaction (https://www.adaptcentre.ie/careers/postdoctoral-researcher-in-understanding-multimodal-interaction-eenh_rf01/). That person will have a background in an area such as psycholinguistics, cognitive science, linguistics or similar. The second post is aimed more at an engineer with experience in ASR or conversational analysis, who will develop better neural architectures to exploit a deeper understanding of multimodality in speech (https://www.adaptcentre.ie/careers/postdoctoral-researcher-in-audio-visual-neural-architecture-eenh_rf02-2/). Both are 3-year posts. Ideally both posts would begin in June 2024, but some flexibility may be possible.
Prof. Harte can be contacted by email (nharte@tcd.ie) if you want additional details about the post after reading the above links, but the formal application is via the links above.
6-14 | (2024-02-21) Academic positions at Avignon University, Avignon, France. Below are the profiles of the positions open at Avignon Université in section 27 for the start of the 2024 academic year:
- 1 MCF position in section 27, attached to the IUT's data science department for teaching and to the LIA for research (possible integration into the 'Speech and Language Group' team)
- 3 ATER positions: teaching in the Bachelor's and Master's programmes in computer science, research at the LIA (possible integration into the 'Speech and Language Group' team)
Do not hesitate to contact the people concerned (contact information is in the position descriptions).
6-15 | (2024-02-25) Full professor in computer science (speech, AI), Université de Grenoble, France. A full professor (PR) position in computer science (section 27) will open in 2024 at the Université Grenoble
6-16 | (2024-03-10) PhD position @ CEREMA, Strasbourg, France. PhD proposal, 2024-2027
Diagnosing the acoustics of a room using signal processing and machine learning
Keywords: Acoustics – Buildings – Machine Learning – Inverse methods. Context: Noise nuisance is cited by the population as their primary source of annoyance and constitutes a major health and social issue, contributing in particular to stress, attention deficits in the classroom, and tinnitus. The annoyance is often linked to the poor acoustic quality of a room due to excessive reverberation (canteens, swimming pools, nurseries, ...). In the context of acoustic renovation of rooms, proposing a solution requires good knowledge of the geometric and acoustic characteristics of the existing space (room dimensions, absorption and diffusion of its various surface coverings). To estimate these unknown parameters, field acousticians rely on measurements of the sound field combined with a priori geometric and acoustic knowledge of the site and of the equipment used (sources and microphones). Estimation is typically performed by manually and iteratively tuning the input parameters of analytical or numerical acoustic models against the measurements. The complete diagnostic process is therefore long, costly and sometimes imprecise, depending on the models used. Given this situation, the development of so-called inverse methods that automatically recover the acoustic parameters of interest from audio measurements alone would constitute a major breakthrough for building acoustics, opening the way to simpler, faster and more reliable tools for acousticians.
Objective: The goal of the thesis is to develop a system which, from a small number of acoustic measurements (e.g. room impulse responses, 'RIRs') and known room characteristics (e.g. its approximate dimensions), can automatically recover from the measurements the remaining unknown characteristics that shaped the sound field (e.g. wall absorption and diffusion, source power, ...). The thesis aims at methodological breakthroughs on these open and difficult inverse problems by combining novel approaches from the fields of signal processing and machine learning. It will address three key challenges. Challenge 1: Our first studies produced inverse optimisation methods that, under idealised conditions, estimate the wall absorption of a room whose geometry is assumed known [1] or, conversely, estimate the geometry of a room with ideal walls [2]. The major remaining challenge is generalisation to more realistic cases, integrating fine modelling of the equipment response and of the wall properties (dependence on frequency and angle of incidence, acoustic diffusion, ...) as well as uncertainty on the geometry. Challenge 2: Our current work splits into two approaches: data-driven approaches that train a neural network on simulated annotated data (e.g. [3]), and physics-driven approaches that solve an inverse optimisation problem based on an idealised acoustic model (e.g. [1,2]). An important challenge is to hybridise them. This will involve strengthening the realism of RIR simulators and of theoretical acoustic models, the possible use of self-supervised techniques on unannotated data [4], and of unrolling techniques that correct the underlying physical models through learning [5].
Challenge 3: The final challenge is the move from simulated RIRs to actually measured RIRs, which will require adapting the learning and optimisation methods resulting from the two previous challenges. [1] S. Dilungana, A. Deleforge, C. Foy, S. Faisan, 'Geometry-informed estimation of surface absorption profiles from impulse responses', EUSIPCO, 30th European Signal Processing Conference, Belgrade, Serbia, 2022. [2] T. Sprunck, Y. Privat, C. Foy, A. Deleforge, 'Gridless 3D Recovery of Image Sources from Room Impulse Responses', preprint, 2022. [3] S. Dilungana, A. Deleforge, C. Foy, and S. Faisan, 'Learning-based estimation of individual absorption profiles from a single room impulse response with known positions of source, sensor and surfaces', INTER-NOISE and NOISE-CON Congress and Conference Proceedings, vol. 263, no. 1, pp. 5623-5630. [4] A. Jaiswal, A. R. Babu, M. Z. Zadeh, D. Banerjee, F. Makedon, 'A survey on contrastive self-supervised learning', Technologies, 2020, 9(1):2. [5] V. Monga, Y. Li and Y. C. Eldar, 'Algorithm unrolling: Interpretable, efficient deep learning for signal and image processing', IEEE Signal Processing Magazine, 2021, 38(2):18-44. Practical aspects: The PhD student will be supervised by Antoine Deleforge (TONUS team*, Inria Strasbourg), Sylvain Faisan (ICube**, Télécom Physique Strasbourg) and Cédric Foy (UMRAE*** - Cerema Strasbourg). They will be physically based at Cerema Strasbourg (11 rue Jean Mentelin) but may occasionally travel to the other two laboratories. * https://www.inria.fr/fr/tonus, ** https://icube.unistra.fr/, *** https://www.umrae.fr/ Contacts: antoine.deleforge@inria.fr or cedric.foy@cerema.fr
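The physics-driven side of this problem can be illustrated with a classic building block: Schroeder backward integration of a room impulse response to estimate reverberation time. The sketch below is purely illustrative and not part of the thesis work; the synthetic exponentially decaying RIR and the T30-style fitting range (-5 dB to -35 dB, extrapolated to 60 dB) are assumptions of the example.

```python
import numpy as np

def schroeder_decay_db(rir):
    """Backward-integrated energy decay curve (Schroeder integration), in dB."""
    energy = np.cumsum(rir[::-1] ** 2)[::-1]
    return 10.0 * np.log10(energy / energy[0])

def estimate_rt60(rir, fs, start_db=-5.0, end_db=-35.0):
    """Estimate RT60 by a linear fit of the decay curve between start_db
    and end_db, extrapolated to a 60 dB decay (T30-style evaluation)."""
    edc = schroeder_decay_db(rir)
    i0 = np.argmax(edc <= start_db)   # first sample below -5 dB
    i1 = np.argmax(edc <= end_db)     # first sample below -35 dB
    t = np.arange(len(edc)) / fs
    slope, _ = np.polyfit(t[i0:i1], edc[i0:i1], 1)  # decay slope in dB/s
    return -60.0 / slope

# Synthetic RIR: exponentially decaying noise constructed for RT60 = 0.5 s
# (amplitude drops by 60 dB at t = 0.5 s).
fs, rt60_true = 16000, 0.5
t = np.arange(int(fs * rt60_true * 2)) / fs
rng = np.random.default_rng(0)
rir = rng.standard_normal(t.size) * 10 ** (-3.0 * t / rt60_true)
print(round(estimate_rt60(rir, fs), 2))
```

On this idealised signal the estimate lands close to the 0.5 s target; on real measured RIRs, noise floors and truncation make the choice of fitting range delicate, which is precisely the kind of gap between idealised models and measurements that Challenge 3 addresses.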
6-17 | (2024-03-19) Research position In Speech Synthesis Massive generation of TTS for Deepfake detection @IRISA, Lannion, France
CONTEXT: The Expression team at IRISA is hiring a Computer Science engineer on a full-time 12-month contract (may be extended). The Expression research team is at the heart of the AI revolution, as it studies and generates human language in different modalities, i.e. text, speech and sign. In particular, the team participates in a project targeting the development and evaluation of deepfake speech detection systems. To this end, we have to implement a large variety of speech synthesis systems, including voice cloning and voice conversion systems. The engineer will work on the massive generation of synthesized speech in the context of deepfake detection. Team webpage: https://www-expression.irisa.fr/fr/ JOB DESCRIPTION Mission: Development of speech synthesis systems covering a large variety of technologies, including voice cloning and voice conversion: • Data preparation for different languages; • Setting up a global framework for Text-To-Speech synthesis (TTS); • Implementing different TTS systems for different languages; • Contributing to the development of deepfake detection systems. Environment: The recruited person will join the research team and will collaborate with the partner company. Required diploma: PhD in Computer Science, Master's in Machine Learning, or Master's in Speech and Language Processing. Required skills: Software engineering (C++, Python); machine learning methods and tools (TensorFlow, PyTorch, Keras); automatic speech and language processing; CI/CD. GENERAL INFO Where: IRISA Lab in Lannion, France. When: As soon as possible (May 2024). Duration: 12 months (may be extended). Salary: Depending on experience. Contacts: damien.lolive@irisa.fr, arnaud.delhay@irisa.fr, vincent.barreaud@irisa.fr
6-18 | (2024-03-21) PhD position @ LIUM, University Le Mans, France Title: Optimizing Human Intervention for Synthetic Speech Quality Evaluation: Active Learning for Adaptability Keywords: Active Learning, Synthetic Speech Quality Evaluation, Subjective Quality Modeling, Training Set Design for Domain Adaptation Context: The primary objective of Text-to-Speech (TTS), voice conversion and speech-to-speech translation systems is to synthesize or generate a high-quality speech signal. Typically, the quality of synthetic speech is evaluated subjectively by human listeners. Such a listening test aims to assess the degree of similarity to human speech rather than machine-like speech. The main challenge in assessing synthetic speech quality lies in finding a balance between the cost and the reliability of evaluation. While conducting a human listening test is expensive, an automatic quality evaluation may be less reliable. Additionally, quality can be defined from different perspectives [7]: the quality of TTS output can be described in terms of aspects such as intelligibility, naturalness, expressiveness, and the presence of noise. Furthermore, fine differences between two signals cannot be precisely tracked through Mean Opinion Score (MOS) ratings [1]. Moreover, the evolution of TTS systems has altered the nature of quality evaluation. Significant improvements in synthetic speech quality have been made over the last decade [2], and while the emphasis in speech synthesis used to be on intelligibility, today the focus is more on the expressiveness of synthetic speech. Recent efforts toward the automatic evaluation of synthesized speech [4] have demonstrated the success of objective metrics when the domain, language, and system are limited. In addition to the evolution of TTS quality over time, studies such as [10] and [8] have emphasized the need for new data collection and annotation for domain and language adaptation.
Objective: The main objective of this thesis is to propose an active learning approach [9], in which human intervention is kept to a minimum, for a subjective task such as the automatic evaluation of synthetic speech quality. The core of this framework would be an objective model serving as a synthetic quality predictor, which requires a diverse and efficient set of training samples. The goal is to collect and query data efficiently in order to improve the precision of synthetic quality prediction, or to adapt the quality predictors to new domains and new generations of systems. It is essential to address different aspects of quality, domain-specific requirements, and linguistic variation through the acquisition of new data or the retraining of models with a specific emphasis on targeted sample sets, collecting and querying data so as to minimize information gaps and ensure a comprehensive dataset for adaptation that maximizes the performance improvement. The main adaptations investigated in this project will be language (adapting a trained quality predictor to a new language) and expressive speech synthesis (adapting a trained naturalness predictor into an expressive speech quality predictor). This adaptation could potentially extend to different listeners and system types, e.g. systems with different acoustic models or vocoders. In this context, data collection (synthesizing new samples) is cheap, which allows the focus to rest solely on query optimization to identify the most informative samples. As a secondary objective, we will focus on modeling listeners' disagreements in quality evaluation. This objective aims to address the diverse perspectives on the perception of TTS quality, and will work towards personalized quality prediction for TTS based on listeners' individual definitions of quality. Finally, analysing challenging scripts can reveal remaining challenges in the Text-to-Speech field.
References: [1] Joshua Camp et al. “MOS vs. AB: Evaluating Text-to-Speech Systems Reliably Using Clustered Standard Errors”. In: Interspeech. 2023, pp. 1090–1094. [2] Erica Cooper and Junichi Yamagishi. “How do Voices from Past Speech Synthesis Challenges Compare Today?” In: Proc. 11th ISCA Speech Synthesis Workshop (SSW 11). 2021, pp. 183–188. doi: 10.21437/SSW.2021-32. [3] Erica Cooper et al. “Generalization ability of MOS prediction networks”. In: ICASSP. IEEE. 2022, pp. 8442–8446. [4] Wen Chin Huang et al. “The VoiceMOS Challenge 2022”. In: Interspeech. 2022, pp. 4536–4540. doi: 10.21437/Interspeech.2022-970. [5] Georgia Maniati et al. “SOMOS: The Samsung Open MOS Dataset for the Evaluation of Neural Text-to-Speech Synthesis”. In: Interspeech. 2022, pp. 2388–2392. doi: 10.21437/Interspeech.2022-10922. [6] Felix Saget et al. “LIUM-TTS entry for Blizzard 2023”. In: Blizzard Challenge Workshop. 2023. doi: hal.science/hal-04188761. [7] Fritz Seebauer et al. “Re-examining the quality dimensions of synthetic speech”. In: Proc. 12th ISCA Speech Synthesis Workshop (SSW2023). 2023, pp. 34–40. doi: 10.21437/SSW.2023-6. [8] Thibault Sellam et al. “SQuId: Measuring speech naturalness in many languages”. In: ICASSP. IEEE. 2023, pp. 1–5. [9] Burr Settles. “Active learning literature survey”. 2009. [10] Wei-Cheng Tseng, Wei-Tsung Kao, and Hung-yi Lee. “DDOS: A MOS Prediction Framework utilizing Domain Adaptive Pre-training and Distribution of Opinion Scores”. In: Interspeech. 2022, pp. 4541–4545. Host laboratory: LIUM. Location: Le Mans, France. Supervisors: Anthony Larcher, Meysam Shamsi. Applicant profile: candidates motivated by Artificial Intelligence, with a Master's degree in Computer Science, Signal Processing, Speech Analysis or related fields. Instructions for application: send a CV + letter/message of motivation + Master's transcript to: meysam.shamsi@univ-lemans.fr and anthony.larcher@univ-lemans.fr
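As a toy illustration of the kind of loop the thesis objective describes (and not the method to be developed), the snippet below runs a query-by-committee active-learning round: a ridge regressor on synthetic features stands in for the quality predictor, a simulated "listening test" stands in for human raters, and the samples sent for rating are those on which a small bootstrap ensemble disagrees most. All data, names and the variance-based query criterion are assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def human_rating(x):
    """Simulated listening test: hidden 'true MOS' plus rater noise."""
    return 3.0 + 1.5 * np.tanh(x @ w_true) + rng.normal(0, 0.1, size=len(x))

def fit_ridge(X, y, lam=1e-2):
    """Closed-form ridge regression, standing in for a MOS predictor."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

n, d = 500, 8
w_true = rng.standard_normal(d)
pool = rng.standard_normal((n, d))            # unlabeled synthesized samples
labeled = list(rng.choice(n, 10, replace=False))
ratings = {i: human_rating(pool[[i]])[0] for i in labeled}

for _round in range(5):
    X = pool[labeled]
    y = np.array([ratings[i] for i in labeled])
    # Query by committee: bootstrap ensemble, then pick the pool point
    # with the largest prediction variance (the most "disputed" sample).
    preds = []
    for _ in range(10):
        idx = rng.choice(len(labeled), len(labeled), replace=True)
        w = fit_ridge(X[idx], y[idx])
        preds.append(pool @ w)
    var = np.var(preds, axis=0)
    var[labeled] = -np.inf                    # never re-query labeled samples
    query = int(np.argmax(var))
    ratings[query] = human_rating(pool[[query]])[0]   # one new human rating
    labeled.append(query)

w_final = fit_ridge(pool[labeled], np.array([ratings[i] for i in labeled]))
rmse = np.sqrt(np.mean((pool @ w_final - (3.0 + 1.5 * np.tanh(pool @ w_true))) ** 2))
print(f"RMSE after active learning: {rmse:.2f}")
```

Each round retrains the committee and sends only the single most disputed sample to the (simulated) rater; minimising how many such queries a real listening test needs is exactly the cost-saving behaviour the thesis targets.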
6-19 | (2024-03-23) Post-doc @LIUM, University of Le Mans, France The LIUM is looking to recruit a post-doc to work on the development of interpretable transformer architectures.
6-20 | (2024-03-22) Annotators for text-corpus annotation @ ELDA, Paris, France. As part of its language-resource production activities, ELDA (Evaluations and Language resources Distribution Agency) is looking for full-time native French-speaking annotators (M/F) for the named-entity annotation of text documents. The assignment will take place at ELDA's premises (Paris, 13th arrondissement) and can start immediately. Desired profile: • Native French speaker of French nationality with an excellent command of grammar; • Good command of IT tools; • Ability to assimilate and scrupulously follow annotation guidelines; • Good general knowledge, in order to be able to recognise the entities. Contract terms: • Pay at the minimum wage (SMIC) +20% (€13.98 per hour); • The project is expected to end in September 2024, so availability in July-August is required; • On-site work is mandatory; • 2-month renewable fixed-term contract (CDD). Application: • Send a CV to <dylan@elda.org>
6-21 | (2024-03-27) Two funded PhD positions @Trinity College, Dublin, Ireland Two funded PhD positions to start in Sept 24 are proposed in Trinity College Dublin. Both are focused on multimodal speech-based interaction.
One is in the AI space:
while the other is more in the psycholinguistics space:
The positions are fully funded for 4 years for both EU and non-EU students, and are part of a larger project here focussed on multimodal interaction. Please share with students you know who may be interested. Application is via the above links, with March 29th the closing date.
6-22 | (2024-03-31) PhD theses and internships @ Inria Défense & Sécurité. Two PhD thesis announcements and one internship announcement (for internships, other topics may be available) at Inria Défense & Sécurité:
* Spoken language detection and clustering
* Explainable and frugal automated audio scene description
https://jobs.inria.fr/public/classic/clas/offres/2024-07410
And the corresponding internship: https://jobs.inria.fr/public/classic/fr/offres/2024-07412
6-23 | (2024-04-02) PhD proposal @ Inria Nancy, France. Inria Nancy is offering a PhD thesis on the generation of sign language from speech, at the Université de Lorraine (in Nancy). For more details and to apply, see: https://jobs.inria.fr/public/classic/fr/offres/2024-07443
6-24 | (2024-04-03) Two post-docs @ Afeka Center of Language Processing (ACLP), Tel-Aviv, Israel. Two open postdoctoral positions on the topic of spoofing-robust speaker verification. The aim of the research is to combine time- and frequency-domain information to achieve better generalization and robustness to new attacks. The positions are open at the Afeka Center of Language Processing (ACLP), Tel-Aviv, Israel, to work with Prof. Itshak Lapidot and his colleagues from Ben-Gurion University. Candidates can contact Prof. Lapidot: Prof. Itshak Lapidot | Researcher | ACLP – Afeka Center for Language Processing | Afeka Academic College of Engineering | Mobile: 052-8902471 | Tel: +972-3-7688793 | itshakl@afeka.ac.il
6-25 | (2024-04-15) PhD position, Aix-Marseille Université, France. We are looking for a PhD student in cognitive science to work on one strand of the ANR-JCJC project FRENCHMELO, 'Prosody for understanding language: from native-language processing to foreign-language acquisition'. This strand aims to investigate the sensitivity of native French speakers to the acoustic markers of accentuation. The recruited person will take part in setting up the electrophysiological studies planned within the project, and will be in charge of collecting, analysing and disseminating the data from these studies. The position, for a duration of three years, is based at the Laboratoire Parole et Langage of Aix-Marseille Université in Aix-en-Provence. The PhD student will be supervised by Amandine Michelas and Sophie Dufour and will join the S2S research team and the Prosody Group of the Laboratoire Parole et Langage. The expected start date is October 1, 2024.
Candidates must hold a research Master's degree in cognitive science, cognitive psychology or language sciences, and should have a strong, documented interest in phonetics and psycholinguistics. Prior experience with electrophysiology is an advantage but not a requirement.
Interested candidates should submit, by 30/06/2024 at the latest, an application file comprising: - a cover letter, - a curriculum vitae (including e-mail address and contact telephone number), - any other relevant documents, all in a single PDF file sent to the following addresses: amandine.michelas@univ-amu.fr and sophie.dufour@univ-amu.fr
6-26 | (2024-04-30) PhD offer at IMT-Atlantique, Brest, France Summary • Subject: Language & vision-based mobility assistance for visually impaired people
• Keywords: Assistive Technologies, Visual Impairment, Computer Vision, Image Captioning • Research Unit: Lab-STICC (UMR CNRS 6285) • Team: RAMBO - Robot interaction, Ambient system, Machine learning, Behaviour, Optimization • Location: IMT Atlantique, Brest • Start: September/October 2024 • Duration: 3 years • Supervision: Panagiotis Papadakis, Christophe Lohr. Full subject description: https://www.imt-atlantique.fr/sites/default/files/recherche/Offres%20de%20th%C3%A8ses/2024/2024_Language%20%26%20vision-based%20mobility%20assistance.pdf Application: The candidate must hold (or be about to obtain) a Master's degree in Computer Science, with theoretical and practical skills in AI algorithms and the associated deep-learning tools (e.g. PyTorch), and a solid background in Computer Vision. The candidate should be fluent in English (the main working and publishing language); speaking French is an advantage (meetings with end-user representatives). A detailed application should be addressed to thesis-application-rambo@imt-atlantique.fr, including a cover letter, an up-to-date CV, transcripts of grades (last two years), and a list of referees. Deadline: 17 May 2024
6-27 | (2024-05-03) Post-doc @ INA, Paris, France. Within the ANR project Pantagruel, INA is recruiting a postdoc specialised in NLP for an 18-month fixed-term contract in its research team. The proposed work concerns the analysis of transcriptions of broadcast streams for the evaluation of LLMs and their use in computational social science. The goal is thus to adapt NLP / SLU tasks to the particular context of these contents (spoken language, news, debates, talk shows, ...). The main tasks to be addressed will be chosen among the following: semantic segmentation, media event detection, quotation extraction, named-entity disambiguation, sentiment analysis, categorisation, automatic summarisation, hate-speech detection and RAG. For these tasks, the plan is to carry out the full pipeline: corpus creation (train and eval) with INA's teams, code development, and evaluation on several foundation models, including those produced by the Pantagruel project. Access to our compute cluster and to AdAstra is provided.
6-28 | (2024-05-05) Three PhD offers @ Inria, France. Inria is opening three PhD offers on voice AI:
6-29 | (2024-05-14) Professorship (W3) in Language Technology, Saarland University, Germany Saarland University, Germany, is a campus university with an international focus and a strong research profile. With numerous internationally respected research institutes on campus and dedicated support for collaborative projects, Saarland University is an ideal environment for innovation and technology transfer. The German Research Center for Artificial Intelligence (DFKI) is Germany's leading application‐driven research institute with a core technology transfer mission. DFKI is currently the world's largest research centre for artificial intelligence operated as a public‐private partnership. DFKI maintains close collaborative ties with national and international companies and is firmly rooted in the worldwide scientific AI landscape.
6-30 | (2024-05-30) PhD candidate in speech sciences, University of Mons, Belgium The Metrology and Language Sciences Department (web.umons.ac.be/smsl/) of the University of Mons is looking for candidates to take up a post of PhD candidate (M/F) from August 1, 2024.
CANDIDATE PROFILE (M/F):
- Entry level: at least 'Bac+5' (Master's degree, 300 ECTS credits); - Initial training giving access to the doctoral studies organised by the Faculty of Psychology and Educational Sciences (Psychology, Educational Sciences, Speech Therapy, Linguistics) or by the Faculty of Medicine (in particular ENT and neurology); - Solid skills in the field of speech and language sciences, as well as in statistical data processing and research methodology; - Good command of scientific English (oral and written); sufficient command of French; - Good teamwork skills, creativity, autonomy, rigour, scientific curiosity; - Additional assets: programming skills (knowledge of a language such as Python or R), clinical experience with patients with motor speech disorders, possession of a driving licence and a private vehicle.
JOB PROFILE:
The post holder (M/F) will contribute to the Department's research efforts in the area covered by the ARC EvalDy project described below. They will prepare a doctoral thesis related to this project, and may be asked to play a minor role in the department's teaching supervision activities.
Full-time research grant for a period of three years, renewable in one-year increments, with a starting date of 1st of August, 2024 at the earliest.
RECRUITMENT PROCEDURE:
Interested candidates are requested to submit, by June 26, 2024 at the latest, an application including: a letter of motivation; a curriculum vitae (including e-mail address and contact telephone number); transcripts of each year of higher education; and any other relevant documents, all in a single PDF file sent to the following address: veronique.delvaux@umons.ac.be. After an initial assessment of applications based on the application file, a subset of candidates will be selected for a second phase involving a selection interview. Successful candidates will be notified by e-mail and/or telephone. The interviews will take place on July 4, 2024, in Mons or remotely via Teams.
PROJECT: Evaluation of voice and speech disorders in dysarthria: EvalDy
The general aim of the project is to contribute to the characterisation and assessment of voice and speech disorders in dysarthria. The objective assessment (via acoustic and articulatory measurements) of pathological speech production is a rapidly expanding field of research, particularly in the French-speaking world, and there are many challenges to be met.
In the first phase, the project aims to document the speech production of a large number of French-speaking Belgian dysarthric patients, both men and women, with diverse profiles in terms of the type of dysarthria and associated aetiology (Parkinson's disease, Wilson's disease, Huntington's disease, Friedreich's ataxia, multiple sclerosis, amyotrophic lateral sclerosis, Kennedy's disease, dysarthria after stroke or head trauma) and the degree of severity of the dysarthria (mild, moderate, severe).
The acoustic recordings concern all the participants, who will be asked to produce the 8 modules of the MonPaGe 2.0.s protocol (repetition of pseudowords, intelligibility task, pneumo-phonatory module, reading of text, spontaneous speech, production of verbal diadochokinesis, automatic series and sentences with varied prosodic contours), to which 3 additional modules will be added (specifically targeting nasal phenomena, glides and phonetic flexibility skills). Several sub-groups of participants will be invited to carry out some of the modules in an experimental setting that will enable acoustic measurements to be combined with physiological measurements in order to study certain specific phenomena (acoustics and nasometry for nasality; acoustics, electroglottography and aerodynamics for coordination between the laryngeal and supra-laryngeal systems; acoustics and ultrasound imaging for articulatory precision; acoustics and imaging by nasofibroscopy and stroboscopy for voice quality). Analysis of this large data set, in particular analysis of the relationships between acoustic and articulatory measurements, will aim to reduce the multiple acoustic measurements to a smaller number of reliable, robust indicators that can be used to characterise all the dimensions of dysarthric speech: laryngeal functioning, pneumo-phonatory behaviour (including intensity control), fluency, articulatory precision and gestural coordination, organisation of the vowel system, and aptitude for phonetic flexibility.
In a second phase, the project aims to use the acoustic indicators thus isolated to develop (i.e. design, operationalise, then assess the psychometric qualities and finally adapt) several assessment tools, each of which will be dedicated to meeting a more precise objective, defined either in relation to a research question or to a need identified in clinical practice.
The first objective concerns the sub-clinical signs of dysarthria in Parkinson's disease, and the possibility of using certain acoustic indices as vocal biomarkers to assist clinicians in the early diagnosis of the disease. The second objective is to contribute to differential diagnosis, using a tool for acoustic assessment of speech production to distinguish between different subtypes of dysarthria, as well as between dysarthria and apraxia of speech. The third clinical objective concerns the temporal dynamics of the disease, viewed from an intra-individual perspective. The aim is to propose a tool that is suitable for longitudinal monitoring of dysarthric patients, once the diagnosis has been made. The fourth objective relates to a fundamental research question, that of characterising the evolution of dysarthria as a function of the degree of severity in the context of the retrogenesis hypothesis. The fifth objective concerns intelligibility. The aim is to produce a tool for assessing the intelligibility of dysarthric speech, which can be used in future work on the link between intelligibility, communicative efficiency and quality of life in dysarthric patients.
Prof. Véronique Delvaux, PhD FNRS Qualified Researcher at UMONS Lecturer at UMONS & ULB Service de Métrologie et Sciences du Langage SMSL Institut de Recherche en Sciences et Technologies du Langage IRSTL Local –1.7, Place du Parc, 18, 7000 Mons +3265373140 https://web.umons.ac.be/smsl/veronique_delvaux/ https://trends.levif.be/canal-z/entreprendre/z-science-14-06-23/
| |||||||||||||||||||
6-31 | (2024-05-30) Fully funded PhD position, University of Bordeaux (LaBRI), France In the framework of the PEPR Santé numérique « Autonom-Health » project (Health, behaviors and autonomous digital technologies), the speech group of LaBRI is looking for candidates for a fully funded PhD position (36 months).
Gross salary: approx. 2,044 €/month. Starting date: October 2024. Candidate profile:
Required qualifications: Master in Signal processing / speech analysis / computer science
Skills: Python programming, statistical learning (machine learning, deep learning), automatic signal/speech processing, excellent command of French (interactions with French patients and clinicians), good level of scientific English. Know-how: Familiarity with the ESPnet toolkit and/or deep learning frameworks, knowledge of automatic speech processing system design. Social skills: good ability to integrate into multi-disciplinary teams, ability to communicate with non-experts. Provisional agenda:
The « Autonom-Health » project is a collaborative project on digital health between SANPSY, LaBRI, LORIA, ISIR and LIRIS. The abstract of the « Autonom-Health » project can be found at the end of this email. The missions addressed by the recruited candidate will be chosen among the following tasks, according to their profile:
- Data collection tasks:
  - Definition of scenarios for collecting spontaneous speech using Social Interactive Agents (SIAs)
  - Collection of patient/doctor interactions during clinical interviews
- ASR-related tasks:
  - Evaluation and improvement of the performance of our end-to-end ESPnet-based ASR system on French real-world spontaneous data recorded from healthy subjects and patients
  - Adaptation of the ASR system to the clinical interview domain
  - Automatic phonetic transcription / alignment using end-to-end architectures
  - Adaptation of ASR transcripts for use with the semantic analysis tools developed at LORIA
- Speech analysis tasks:
  - Analysis of vocal biomarkers for different diseases: adaptation of our biomarkers defined for sleepiness, research into new biomarkers targeted at specific diseases.
Location: The position is to be hosted at LaBRI, but depending on the candidate's profile, close collaboration is expected with the LORIA teams « Multispeech » (contact: Emmanuel Vincent emmanuel.vincent@inria.fr) and/or « Sémagramme » (contact: Maxime Amblard maxime.amblard@loria.fr). The Laboratoire Bordelais de Recherche en Informatique (LaBRI) is a renowned research center known for its excellence in various fields of computer science, including algorithms, artificial intelligence, networks, and human-computer interaction. It boasts advanced technological resources and participates in numerous European and international research projects. PhD students benefit from a stimulating academic environment and enriching interdisciplinary collaborations. Located in Bordeaux, LaBRI offers a pleasant and dynamic living environment.
Applications: To apply, please send by email to jean-luc.rouas@labri.fr a single PDF file containing a full CV, a cover letter (describing your personal qualifications, research interests and motivation for applying), contact information for two referees, and academic certificates (Master's and Bachelor's certificates). —— Abstract of the « Autonom-Health » project: Western populations face an increase in longevity which mechanically increases the number of chronic disease patients to manage. Current healthcare strategies will not make it possible to maintain a high level of care at a controlled cost in the future, and e-health can optimize the management and costs of our healthcare systems. Healthy behaviors contribute to the prevention and optimized management of chronic diseases, but their implementation is still a major challenge. Digital technologies could help their implementation through numeric behavioral medicine programs, to be developed as a complement to (not a substitute for) existing care, in order to focus human interventions on the most severe cases demanding medical attention. However, to do so, we need to develop digital technologies which should be: i) Ecological (related to real-life and real-time behavior of individuals and to social/environmental constraints); ii) Preventive (from healthy subjects to patients); iii) Personalized (at initiation and adapted over the course of treatment); iv) Longitudinal (implemented over long periods of time); v) Interoperable (multiscale, multimodal and high-frequency); vi) Highly acceptable (protecting users' privacy and generating trust). The above-mentioned challenges will be disentangled with the following specific goals: Goal 1: Implement large-scale diagnostic evaluations (clinical and biomarkers) and behavioral interventions (physical activities, sleep hygiene, nutrition, therapeutic education, cognitive behavioral therapies...) on healthy subjects and chronic disease patients.
This will require new autonomous digital technologies (i.e. virtual Socially Interactive Agents (SIAs), smartphones, wearable sensors). Goal 2: Optimize clinical phenotyping by collecting and analyzing non-intrusive data (i.e. voice, geolocation, body motion, smartphone footprints, ...) which will potentially complement clinical data and biomarker data from patient cohorts. Goal 3: Better understand the psychological, economic and socio-cultural factors driving acceptance of and engagement with the autonomous digital technologies and the proposed numeric behavioral interventions. Goal 4: Improve the interaction modalities of digital technologies to personalize and optimize the long-term engagement of users. Goal 5: Organize large-scale data collection, storage and interoperability with existing and new data sets (i.e. biobanks, hospital patient cohorts and epidemiological cohorts) to generate future multidimensional predictive models for diagnosis and treatment. Each goal will be addressed by expert teams through complementary work packages developed sequentially or in parallel. A first modeling phase (based on development and experimental testing) will be performed through this project. A second phase, funded via ANR calls, will allow new teams to be recruited for a large-scale testing phase. This project will rely on population-based interventions in existing numeric cohorts (i.e. KANOPEE) where virtual agents interact with patients at home on a regular basis. Pilot hospital departments will also be involved for data management, supervised by information and decision systems coordinating autonomous digital cognitive behavioral interventions based on our virtual agents. The global solution, based on empathic human-computer interactions, will help target, diagnose and treat subjects suffering from dysfunctional behaviors (i.e. sleep deprivation, substance use...) as well as sleep and mental disorders.
The expected benefits of such a solution are increased adherence to treatment, strong self-empowerment to improve autonomy and, finally, a reduction of long-term risks for the subjects and patients using this system. Our program should massively improve healthcare systems and allow strong technology transfer to information systems / digital health companies and the pharma industry. Jean-Luc ROUAS CNRS Researcher Bordeaux Computer Science Research Laboratory (LaBRI) 351 Cours de la libération - 33405 Talence Cedex - France T. +33 (0) 5 40 00 35 28 www.labri.fr/~rouas
| |||||||||||||||||||
6-32 | (2024-05-31) Fixed-term faculty position, Avignon Université / Laboratoire Informatique d'Avignon (LIA), France A full-time fixed-term faculty (teaching and research) position is open at the Centre d'Enseignement et de Recherche en Informatique, Avignon Université / Laboratoire Informatique d'Avignon (LIA), starting with the 2024 academic year.
On the research side, the successful candidate will join one of the two LIA teams, including the Speech and Language Group, to work on its various topics in automatic speech and language processing.
To apply, go to https://univ-avignon.fr/acces-rapide/recrutement-concours/personnels-enseignants/enseignant-contractuel/ or directly to https://recrutement.univ-avignon.fr/poste/LRU_27_EC_2024 to access the recruitment platform. The job description is available at the latter link.
NOTE: the application deadline is extremely short, until the evening of June 10.
| |||||||||||||||||||
6-33 | (2024-06-03) PhD thesis offer, GIPSA-Lab, Grenoble, France
If:
- you are looking for a PhD in speech sciences and technologies;
- you wonder whether voice intonation can be predicted from the lips, the tongue, or the face;
- you wonder what the quality of a spoken interaction with someone using such a system would be;
- you like machine learning, behavioral experiments, and mountains;
then click here: https://www.gipsa-lab.grenoble-inp.fr/~olivier.perrotin/media/others/SilentPitch_PhD.pdf
Please do not hesitate to contact me for more details,
and feel free to forward this to your students who are not yet subscribed to the 'parole' mailing list,
Olivier Perrotin
_________________________________________ Dr. Olivier Perrotin | CNRS Research Scientist CNRS / Grenoble INP / UGA GIPSA-lab, Département Parole et Cognition, équipe CRISSP 11 rue des Mathématiques – BP 46 38402 St Martin d’Hères Bâtiment B - Office B353 Tel: +33 (0)4 76 57 45 36 Web: http://www.gipsa-lab.grenoble-inp.fr/~olivier.perrotin/
| |||||||||||||||||||
6-34 | (2024-06-06) One-year postdoc @ Naver Labs Europe We offer a one-year postdoc position on LLM-based agents, to work with us on the UTTER EU project.
Come work with us on one or several of these topics: (i) managing uncertainty and ambiguity; (ii) improving the use of conversational context; (iii) ensuring the safety and alignment of LLMs.
| |||||||||||||||||||
6-35 | (2024-06-07) Two engineer positions @ INRIA Nancy, France INRIA Nancy is offering two engineer positions. Please forward them to anyone potentially interested. Candidates are invited to apply online as soon as possible; applications will be evaluated on a rolling basis. Context: Through the COLaF project (Corpora and Tools for the Languages of France), Inria aims to contribute to the development of open corpora and tools for French and the other languages of France (Alsatian, Breton, Corsican, Occitan, etc.). The promotion and preservation of these languages depends on the availability of language technologies, but these languages are largely ignored by industry. Position 1: Engineer in Language Processing and Development of Speech Recognition Models The main obstacle to the development of varied language technologies is the lack of data. In particular, audio data needs a transcription for most applications. But manually transcribing audio data is time-consuming, requires the participation of a proficient speaker, and can result in inconsistent data in the absence of a standard orthography. In order to increase the amount of annotated audio data for various languages of France, and to develop the first building block of varied processing pipelines for these languages, we want to develop a pipeline for training automatic speech recognition (ASR) systems. For more information and to apply: https://jobs.inria.fr/public/classic/fr/offres/2024-07719
Position 2: Engineer in Language Processing and Development of Speech Synthesis Models One of the wishes expressed by the community is a text-to-speech (TTS) system that would make it easy to create audio content from text, and thus to enrich existing media in these languages. The system will have to be adapted to the low-resource language context. It will have to be flexible in order to accommodate training data sources varying in quantity and quality, covering varied types of recordings: long interviews, isolated sentences, TV shows, etc.
For more information and to apply: https://jobs.inria.fr/public/classic/fr/offres/2024-07720
| |||||||||||||||||||
6-36 | (2024-05-28) Lecturer, Université de Strasbourg, France The Faculty of Letters of the Université de Strasbourg is recruiting a fixed-term lecturer in French linguistics for the 2024-2025 academic year. The successful candidate will teach 384 hours in the Bachelor's programs in Language Sciences and in Modern and Classical Letters. Pay is aligned with the salary scale for secondary-school teachers, depending on the candidate's profile. The contract will start on September 1, 2024 for a duration of one year. We try as far as possible to assign several groups of the same tutorial in order to limit the preparation workload, but it remains a significant load at the end of a thesis. Given the number of hours to cover, it is not always possible to group the classes, so the successful candidate will probably teach up to five days a week. However, we are used to sharing our course materials and to integrating new colleagues as well as possible. The offer is published at https://www.unistra.fr/universite/travailler-a-luniversite/personnels-enseignants/enseignants-contractuels#c15883854 -- Camille Fauth Associate Professor (Maître de conférences) Vice-President for Orientation and the Secondary/Higher Education Transition Head of the Bachelor's in Language Sciences - Faculty of Letters Internship coordinator for the Master's in Publishing - Faculty of Letters UR 1339 LiLPa - Université de Strasbourg
| |||||||||||||||||||
6-37 | (2024-06-05) PhD student @ KTH, Stockholm, Sweden We are looking for a PhD student interested in Artificial Intelligence, Natural Language Processing and Speech Technology, who will work in a newly funded project at the Department of Speech, Music and Hearing at KTH. The project is financed by the Swedish AI program WASP (Wallenberg AI, Autonomous Systems and Software Program), which offers a graduate school with research visits, partner universities, and visiting lecturers.
The newly started project is titled 'Thinking Fast and Slow: Real-time Speech Generation for Conversational AI'. The aim of the project is to develop AI-models capable of generating spoken responses in an incremental fashion, mirroring the nuanced and dynamic nature of human conversation. Our approach builds upon our previous pioneering efforts in the realm of incremental and predictive models for dialogue, which have laid the groundwork for this project.
The position is mainly a research position, with a small fraction of departmental duties (e.g. teaching).
Supervision: Professor Gabriel Skantze and Assoc. Prof. Gustav Eje Henter
https://www.kth.se/lediga-jobb/735886?l=en
| |||||||||||||||||||
6-38 | (2024-06-20) Research Fellow in Multimodal Neural Architecture, Trinity College Dublin, Ireland
Please note the below is a shortened version of the full job specification. For more details, please refer to the full Job Description document, which can be downloaded by clicking on the ‘Download full job spec’ button above.
The Wider Research Project: This Research Fellow is required to contribute to a new overall project led by Prof. Naomi Harte focused on the development of a unified multimodal framework for modelling and analysing real-world speech-based interaction. This Research Fellow will develop neural architectures for multimodal speech applications. The Research Fellow will rethink the development of sophisticated deep learning architectures that can fully exploit the relevant modalities of speech in an application. They will develop approaches that are agile in deployment and that can change how modalities combine in real time. Applications will be in audio-visual speech recognition and conversational analysis. This work will be interdisciplinary in nature, requiring consideration of theories around conversation not only from a speech science and technology perspective, but also incorporating knowledge from established theories in the fields of psycholinguistics and cognitive science. Other elements of the project will focus on how to model multimodality in deep learning architectures. The overall team in this major project will consist of two Research Fellows (this position is one of those two), 4 PhD students, and one Research Assistant. The position is fully in-person and requires the person to be based in Dublin, Ireland.
Qualifications: Candidates appointed to this role must have completed a PhD in Electrical or Electronic Engineering, or a closely related field that makes them qualified to conduct this research in multimodal interaction. Note: Candidates who do not address the application requirements above will not be considered for interview.
Further Information Informal enquiries about this post should be made to Professor Naomi Harte (nharte@tcd.ie) but applications are only accepted through the procedure outlined in the downloaded job spec document.
| |||||||||||||||||||
6-39 | (2024-06-22) PhD student, LIG, CNRS, Grenoble, France PhD Thesis: Interpretability and Evaluation of LLMs and Agentic Workflows
Starting date: November 1st, 2024 (flexible)
Salary: 2,135€ gross / month (social security included)
Place of work (no remote): Laboratoire d'Informatique de Grenoble, CNRS, Grenoble, France
Description:
Natural language processing (NLP) has undergone a paradigm shift in recent years, owing to the remarkable breakthroughs achieved by large language models (LLMs). These models have completely altered the landscape of NLP by demonstrating impressive results in language modeling, translation, and summarization. Nonetheless, the use of LLMs has also surfaced crucial questions regarding their reliability and transparency. As a result, there is now an urgent need to gain a deeper understanding of the mechanisms governing the behavior of LLMs, to interpret their decisions and outcomes in scientifically grounded ways, and to precisely evaluate their abilities and limitations. Adding to the complexity, LLMs are often involved as only one small component of larger, more ambitious agentic workflows [SemEra]. In an agentic workflow, LLMs collaborate with other LLMs, humans, and tools by exchanging natural language messages to solve complex problems beyond the capabilities of an LLM alone.
Evaluation of LLMs has become particularly challenging as they consume most of the internet during their pre-training, including most of the test splits of evaluation benchmarks [LeakCheatRepeat]. Furthermore, the landscape of available LLMs is changing fast, and they have access to the web via tools as part of agentic workflows. Therefore, new evaluation methodologies beyond assessing models' skills on a fixed test set are needed to account for these novel properties [Flows].
A promising direction for carrying out evaluation and interpretability analysis is to take inspiration from the field of neuroscience, which, over the years, has crafted experimental setups to uncover how the human brain computes and represents information useful for tasks of interest [RepEng]. Additionally, we can draw on causal analysis and causal inference toolkits [CausalAbstraction]. Examining the causal relationships between the inputs, outputs, and hidden states of LLMs can help to build scientific theories about the behavior of these complex systems. Furthermore, causal inference methods can help uncover the underlying causal mechanisms behind the complex computations of LLMs, giving hope of better interpreting their decisions and understanding their limitations [Glitch].
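The causal-intervention idea described above can be illustrated with a toy activation-patching experiment (a minimal NumPy sketch; the two-layer "model", its weights, and all variable names are invented for illustration and are not part of the project):

```python
import numpy as np

# Toy stand-in for a network whose hidden state we want to probe causally.
# Weights and shapes are arbitrary illustrations, not from the project.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))
W2 = rng.normal(size=(8, 3))

def forward(x, patch_hidden=None):
    """Run the toy model; optionally overwrite the hidden state,
    i.e. perform the intervention do(h = patch_hidden)."""
    h = np.tanh(x @ W1)      # hidden representation
    if patch_hidden is not None:
        h = patch_hidden     # causal intervention on the hidden state
    return h, h @ W2         # (hidden state, output)

x_clean = np.ones(4)
x_corrupt = -np.ones(4)

h_clean, y_clean = forward(x_clean)
_, y_corrupt = forward(x_corrupt)

# Activation patching: rerun the corrupted input, but splice in the clean
# hidden state; the output is fully restored, showing that the hidden
# state mediates the input's causal effect on the output here.
_, y_patched = forward(x_corrupt, patch_hidden=h_clean)

print(np.allclose(y_patched, y_clean))  # True
```

In real LLM interpretability work the same intervention is applied to transformer activations rather than a toy layer, and the degree of output restoration quantifies how strongly a component mediates the behavior under study.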
As a Ph.D student working on such a project, you will be expected to develop a strong understanding of the evaluation of complex systems, the principles of causal inference, and their application to machine learning. You will have the opportunity to work on cutting-edge research projects in NLP, contributing to the development of more reliable and interpretable LLMs. It is important to note that the Ph.D. research project should be aligned with your interests and expertise. Therefore, the precise direction of the research can and will be influenced by the personal taste and research goals of the student. It is encouraged that you bring your unique perspective and ideas to the table.
Skills:
Master's degree in natural language processing, computer science, or data science.
Mastery of Python programming and deep learning frameworks.
Experience in causal inference or working with LLMs.
Very good communication skills in English (proficiency in French is not mandatory).
Scientific environment:
The thesis will be conducted within the GETALP team of the LIG laboratory (https://lig-getalp.imag.fr/). The GETALP team has strong expertise and a solid track record in natural language processing. The recruited person will be welcomed within the team, which offers a stimulating, multinational and pleasant working environment.
The means to carry out the PhD will be provided, both in terms of travel in France and abroad and in terms of equipment. The candidate will have access to the LIG GPU cluster. Furthermore, access to the national supercomputer Jean Zay will make it possible to run large-scale experiments.
The Ph.D. position will be co-supervised by Maxime Peyrard and François Portet.
Additionally, the Ph.D. student will also be working with external academic collaborators at EPFL and Idiap (e.g., Robert West and Damien Teney) and external industry partners (Microsoft Research)
[SemEra] Maxime Peyrard, Martin Josifoski, Robert West, 'The Era of Semantic Decoding' 2024
[Flows] Martin Josifoski, Lars Klein, Maxime Peyrard, Nicolas Baldwin, Yifei Li, Saibo Geng, Julian Paul Schnitzler, Yuxing Yao, Jiheng Wei, Debjit Paul, Robert West 'Flows: Building Blocks of Reasoning and Collaborating AI' 2023
[LeakCheatRepeat] Simone Balloccu, Patrícia Schmidtová, Mateusz Lango, Ondrej Dušek 'Leak, Cheat, Repeat: Data Contamination and Evaluation Malpractices in Closed-Source LLMs' EACL 2024
[RepEng] Andy Zou, Long Phan, Sarah Chen, James Campbell, Phillip Guo, Richard Ren, Alexander Pan, Xuwang Yin, Mantas Mazeika, Ann-Kathrin Dombrowski, Shashwat Goel, Nathaniel Li, Michael J. Byun, Zifan Wang, Alex Mallen, Steven Basart, Sanmi Koyejo, Dawn Song, Matt Fredrikson, J. Zico Kolter, Dan Hendrycks 'Representation Engineering: A Top-Down Approach to AI Transparency'
[CausalAbstraction] Geiger, Atticus and Wu, Zhengxuan and Lu, Hanson and Rozner, Josh and Kreiss, Elisa and Icard, Thomas and Goodman, Noah and Potts, Christopher, 'Inducing Causal Structure for Interpretable Neural Networks' Proceedings of Machine Learning Research (2022): 7324-7338.
[Glitch] Giovanni Monea, Maxime Peyrard, Martin Josifoski, Vishrav Chaudhary, Jason Eisner, Emre Kıcıman, Hamid Palangi, Barun Patra, Robert West 'A Glitch in the Matrix? Locating and Detecting Language Model Grounding with Fakepedia' ACL 2024
| |||||||||||||||||||
6-40 | (2024-06-21) Research engineer, TALEP, Laboratoire d'Informatique et Systèmes - LIS, Marseille, France The TALEP team at LIS is looking for a research engineer, starting in early October.
| |||||||||||||||||||
6-41 | (2024-06-27) 3 faculty positions, ENSSAT, Lannion, France Three fixed-term teaching and research (enseignant-chercheur) positions are open at ENSSAT Lannion, Université de Rennes, for the start of the 2024 academic year. The research component will take place in one of the IRISA teams at the Lannion site.
NOTE: The application deadline is extremely short (until 08/07).
For more information and to apply online:
| |||||||||||||||||||
6-42 | (2024-07-04) Two job opportunities @ University of Palermo, Italy Research opportunities at the University of Palermo, Italy - Prof. Siniscalchi
1) As part of the Doctoral Programs at the University of Palermo, Prof. Siniscalchi is seeking candidates for fully funded PhD positions (36 months) focused on speech-related topics, including speech enhancement, speech recognition, and speech for health.
Salary: The annual scholarship is €16,243 gross (Ministerial Decree No. 247 of 23 February 2022), which includes social security charges to be paid by the PhD student and is subject to the INPS social security contribution. How to apply and more info: Interested candidates should contact Prof. Siniscalchi at sabatomarco.siniscalchi@unipa.it. Deadline: August 2nd.
2) As part of the SHAPE-AD project at the University of Palermo, Prof. Siniscalchi is seeking candidates for a fully funded research position (12 months) focused on Speech and Handwriting Analysis to Predict Early Alzheimer’s Disease. Salary: The annual scholarship is €24,000 gross. How to apply and more info: Interested candidates should contact Prof. Siniscalchi at sabatomarco.siniscalchi@unipa.it.
| |||||||||||||||||||
6-43 | (2024-06-29) Junior professor in Spoken Language Technologies, KU Leuven, Belgium Open faculty position at KU Leuven, Belgium: junior professor in Spoken Language Technologies
KU Leuven's Faculty of Engineering Science has an open position for a junior professor (tenure track) in the area of Spoken Language Technologies. The successful candidate will conduct research on current challenges of speech technology and its applications, teach courses in the Master of Engineering Science and supervise students in the Master and PhD programs. The candidate will be embedded in the PSI research division of the Department of Electrical Engineering. More information is available at https://www.kuleuven.be/personeel/jobsite/jobs/60334358?lang=en. The deadline for applications is September 30, 2024.
|