ISCA - International Speech Communication Association



ISCApad #286

Sunday, April 10, 2022 by Chris Wellekens

6 Jobs
6-1(2021-11-12) Internships at IRIT, Toulouse, France

The SAMoVA team at IRIT in Toulouse offers several internships in 2022:

- Analysis of IoT signals (audio and accelerometer) from the Sallis Médical collar for modeling pharyngolaryngeal efficiency
- Discriminative sequence training in end-to-end automatic speech recognition
- Automatic speech processing and AI for the differential diagnosis of neurodegenerative diseases
- Self- and semi-supervised adaptation of neural speaker diarization
- Self-supervised audio representation learning
- ...

All details (topics, contacts) are available in the 'Jobs' section of the team's website, under the 'Internships' tab:
https://www.irit.fr/SAMOVA/site/jobs/


6-2(2021-11-13)Master internship at IRISA, Lannion, France
EXPRESSION Team IRISA LANNION - Proposal for an internship for a Research Master in Computer Science
 

Title: Joint training framework for text-to-speech and voice conversion

 

Text-to-speech and voice conversion are two distinct speech generation techniques. Text-to-speech (TTS) generates speech from a sequence of graphemes or phonemes, while voice conversion converts speech from a source voice to a target voice. These processes find applications in domains such as Computer-Assisted Language Learning.
 
However, these two processes share some building blocks, in particular the vocoder that generates speech from acoustic features or a spectrogram. The quality of both technologies has improved significantly thanks to the availability of massive databases, powerful computing hardware, and the deep learning paradigm. On the other hand, restoring or controlling expressiveness, and more generally taking suprasegmental information into account, remains a major challenge for both.
 
This internship aims at setting up a common framework for both technologies: a joint deep learning framework to generate speech in a target voice from either speech (source voice) or text.
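As a rough illustration of this joint view (a minimal sketch in PyTorch; all module names and sizes are illustrative assumptions, not the team's actual design), a text encoder and a speech encoder can project into one shared space feeding a single decoder that generates target-voice acoustic frames from either input:

import torch
import torch.nn as nn

class JointTTSVC(nn.Module):
    def __init__(self, n_phonemes=80, n_mels=80, d=256):
        super().__init__()
        self.text_emb = nn.Embedding(n_phonemes, d)             # phoneme ids
        self.text_rnn = nn.GRU(d, d, batch_first=True)          # text encoder
        self.speech_enc = nn.GRU(n_mels, d, batch_first=True)   # speech encoder
        self.decoder = nn.GRU(d, d, batch_first=True)           # shared decoder
        self.to_mel = nn.Linear(d, n_mels)                      # frames for a vocoder

    def forward(self, text=None, mel=None):
        if text is not None:                     # TTS path
            h, _ = self.text_rnn(self.text_emb(text))
        else:                                    # voice conversion path
            h, _ = self.speech_enc(mel)
        out, _ = self.decoder(h)                 # same decoder for both paths
        return self.to_mel(out)

model = JointTTSVC()
mel_from_text = model(text=torch.randint(0, 80, (1, 20)))   # from phonemes
mel_from_speech = model(mel=torch.rand(1, 120, 80))         # from a source voice

Both paths end in the same vocoder-ready representation, which is exactly where the shared building blocks mentioned above come into play.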
 
 
It will be supervised by members of the EXPRESSION team (IRISA): Aghilas Sini, Pierre Alain, Damien Lolive, and Arnaud Delhay-Lorrain.
 
Please send your application (CV + cover letter) before 10/01/2022 to aghilas.sini@irisa.fr, palain@irisa.fr, damien.lolive@irisa.fr, arnaud.delhay@irisa.fr
 
Start date: 01/02/2022 (flexible)
======================

6-3(2021-11-15) Internship in automatic speech recognition at Zaion, France

====== Internship offer in automatic speech recognition ========

 
We are offering an internship at Zaion (M2 level) on the development of Automatic Speech Recognition solutions for new languages, adapted to the customer-relations context.

Please forward this offer to any students looking for such an opportunity.

Description and contacts:
 

6-4(2021-11-26) Four research internships at SteelSeries France R&D team

The SteelSeries France R&D team (formerly the Nahimic R&D team) is glad to open 4 research internship positions for 2022.

The selected candidates will work on one of the following topics (more details below):
 
- Audio media classification
- Audio source classification
- Audio source separation
- Real-time speech restoration
 
Please reply/apply to nathan.souviraa-labastie@steelseries.com.

Audio media classification Master internship, Lille (France), 2022

Advisors — Pierre Biret, R&D Engineer, pierre.biret@steelseries.com — Nathan Souviraà-Labastie, R&D Engineer, PhD, nathan.souviraa-labastie@steelseries.com

Company description

SteelSeries is a leader in gaming peripherals focused on quality, innovation and functionality, and the fastest-growing major gaming headset brand globally. Founded in 2001, SteelSeries improves performance through first-to-market innovations and technologies that enable gamers to play harder, train longer, and rise to the challenge. SteelSeries is a pioneer supporter of competitive gaming tournaments and eSports and connects gamers to each other, fostering a sense of community and purpose. Nahimic joined the SteelSeries family in 2020 to bolster its reputation for industry-leading gaming audio performance across both hardware and software. Nahimic is the leading 3D gaming audio software editor, with more than 150 person-years of research and development in the gaming industry. The team gathers a rare combination of world-class audio processing engineers and software experts based across France, Singapore and Taiwan. It is the worldwide leader in PC gaming audio software, embedded in millions of gaming devices, from gaming headsets to the most powerful gaming PCs by brands such as MSI, Dell, Asus, Gigabyte, etc. Its technology offers precise and immersive sound that allows gamers to be more efficient in any game and enjoy a more immersive experience. We wish to meet passionate people, full of energy and motivation, ready to take on great challenges to elevate everyone's audio experience. We are currently looking for an AUDIO SIGNAL PROCESSING / MACHINE LEARNING RESEARCH INTERN to join the R&D team of SteelSeries' Software & Services Business Unit in our French office (the former Nahimic R&D team).

Subject

The goal of the internship is to build a model able to classify audio streams into multiple media classes (class descriptions available upon request). The audio classification problem will be addressed using supervised machine learning. Hence, the first step of the internship will be to collect data and build a balanced corpus for this audio classification problem. Fortunately, massive audio content for most potential classes is available within the company, so this task should not be a major burden. Once the corpus is built, the intern will either tune the parameters of an existing internal model or develop a better-adapted model from the state of the art [1] [2] [4] that still satisfies the 'real-time' constraint. A more advanced step of the internship would be to define a finer-grained media type classification, for instance with sub-types within the same category. Once the relevant classes have been identified, the intern will incorporate these changes into the classification algorithm and framework. As the intern will receive support to turn the model into an in-product real-time prototype, this internship is a rare opportunity to bring research to product in a short time frame.
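As a very rough sketch of such a pipeline (PyTorch/torchaudio; the class count, features and layer sizes are illustrative assumptions, not the internal SteelSeries model), a compact log-mel CNN can classify short chunks of an audio stream, which keeps latency compatible with a real-time constraint:

import torch
import torch.nn as nn
import torchaudio

class MediaClassifier(nn.Module):
    def __init__(self, n_classes=4, sample_rate=16000, n_mels=64):
        super().__init__()
        # Log-mel front end; short chunks keep latency low.
        self.melspec = torchaudio.transforms.MelSpectrogram(
            sample_rate=sample_rate, n_fft=512, hop_length=256, n_mels=n_mels)
        self.db = torchaudio.transforms.AmplitudeToDB()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_classes))

    def forward(self, waveform):                          # (batch, samples)
        x = self.db(self.melspec(waveform)).unsqueeze(1)  # (batch, 1, mels, frames)
        return self.net(x)                                # (batch, n_classes) logits

model = MediaClassifier()
chunk = torch.randn(8, 16000)   # a batch of one-second audio chunks
logits = model(chunk)
print(logits.shape)             # torch.Size([8, 4])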

Skills

Who are we looking for? You are preparing an engineering degree or a master's degree, and preferably have knowledge in the development and implementation of advanced algorithms for digital audio signal processing. Machine learning skills are a plus. While not mandatory, notions in the following fields would be appreciated:
- Audio, acoustics and psychoacoustics
- Audio effects in general: compression, equalization, etc.
- Machine learning and artificial neural networks
- Statistics, probabilistic approaches, optimization
- Programming languages: Matlab, Python, PyTorch, Keras, TensorFlow
- Voice recognition, voice command
- Computer programming and development: Max/MSP, C/C++/C#
- Audio editing software: Audacity, Adobe Audition, etc.
- Scientific publications and patent applications
- Fluency in English and French
- Intellectual curiosity

Other offers:
https://nahimic.welcomekit.co/
https://www.welcometothejungle.co/companies/nahimic/jobs

References
[1] DCASE Challenge: Low-Complexity Acoustic Scene Classification with Multiple Devices. URL: http://dcase.community/challenge2021/task-acoustic-scene-classification-results-a.
[2] B. Kim et al. Domain Generalization on Efficient Acoustic Scene Classification Using Residual Normalization. 2021. arXiv: 2111.06531 [cs.SD].
[3] Nahimic on MSI. URL: https://fr.msi.com/page/nahimic.
[4] sharathadavanne. seld-dcase2021. https://github.com/sharathadavanne/seld-dcase2021. 2021.


Audio source classification Master internship, Lille (France), 2022

 Advisors — Nathan Souviraà-Labastie, R&D Engineer, PhD, nathan.souviraa-labastie@steelseries.com — Pierre Biret, R&D Engineer, pierre.biret@steelseries.com

Company description

(The company description is identical to the one given for the first SteelSeries internship above.)

Subject

The goal of the internship is to build a model able to classify audio sources, i.e., sources present inside a given media type such as music, movies or video games (for instance, instruments in the case of music). The audio classification problem will be addressed using supervised machine learning. The intern will not start from scratch, as data and classification code from other projects can be re-used with minor adaptation (description upon request). Once the corpus is reshaped for classification, the intern will either tune the parameters of an existing internal model or develop a better-adapted model from the state of the art [1] [3] [5] that still satisfies a strong real-time constraint.

Multi-task approach. A more advanced step of the internship would be to explore multi-task models. The two target tasks would be 1) the classification task previously addressed by the intern, and 2) the audio source separation task on the same data type. This is a very challenging machine learning problem, especially because the tasks are heterogeneous (classification, regression, signal estimation), contrary to homogeneous multi-task classification, where one classifier addresses several classification tasks. Moreover, only a few studies target heterogeneous audio multi-task learning (an exhaustive list, to the advisors' knowledge: [2, 6, 7, 4]). The potential advantages of the multi-task approach are performance improvements on the main task and reduced computational cost in products, as several tasks are performed at once. Previous internal bibliographic work and network architectures could be used as a starting point for this approach.
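A rough sketch of the heterogeneous multi-task idea (PyTorch; the architecture, shapes and loss weighting are illustrative assumptions, not the internal starting point): one shared encoder feeds both a source-classification head and a separation head that regresses a time-frequency mask:

import torch
import torch.nn as nn

class MultiTaskAudioNet(nn.Module):
    def __init__(self, n_freq=257, hidden=128, n_classes=10):
        super().__init__()
        self.encoder = nn.GRU(n_freq, hidden, batch_first=True)  # shared encoder
        self.cls_head = nn.Linear(hidden, n_classes)              # classification
        self.sep_head = nn.Sequential(                            # separation
            nn.Linear(hidden, n_freq), nn.Sigmoid())              # mask in [0, 1]

    def forward(self, spec):                   # spec: (batch, frames, n_freq)
        h, _ = self.encoder(spec)
        logits = self.cls_head(h.mean(dim=1))  # pool over time for the class
        mask = self.sep_head(h)                # per-frame mask for separation
        return logits, mask

model = MultiTaskAudioNet()
spec = torch.rand(4, 100, 257)                 # magnitude spectrogram batch
logits, mask = model(spec)
# Heterogeneous losses are combined with a weighting hyperparameter
# (targets below are dummies standing in for real labels and clean sources):
loss = nn.functional.cross_entropy(logits, torch.randint(0, 10, (4,))) \
     + 0.5 * nn.functional.mse_loss(mask * spec, spec)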



Skills

Who are we looking for? You are preparing an engineering degree or a master's degree, and preferably have knowledge in the development and implementation of advanced algorithms for digital audio signal processing. Machine learning skills are a plus. While not mandatory, notions in the following fields would be appreciated:
- Audio, acoustics and psychoacoustics
- Machine learning and artificial neural networks
- Audio effects in general: compression, equalization, etc.
- Statistics, probabilistic approaches, optimization
- Programming languages: Matlab, Python, PyTorch, Keras, TensorFlow
- Sound spatialization effects: binaural synthesis, ambisonics, artificial reverberation
- Voice recognition, voice command
- Voice processing effects: noise reduction, echo cancellation, array processing
- Computer programming and development: Max/MSP, C/C++/C#
- Audio editing software: Audacity, Adobe Audition, etc.
- Scientific publications and patent applications
- Fluency in English and French
- Intellectual curiosity

Other offers

https://nahimic.welcomekit.co/
https://www.welcometothejungle.co/companies/nahimic/jobs

References

[1] DCASE Challenge: Low-Complexity Acoustic Scene Classification with Multiple Devices. URL: http://dcase.community/challenge2021/task-acoustic-scene-classification-results-a.
[2] P. Georgiev et al. 'Low-resource Multi-task Audio Sensing for Mobile and Embedded Devices via Shared Deep Neural Network Representations'. In: Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 1.3 (Sept. 2017), 50:1-50:19.
[3] B. Kim et al. Domain Generalization on Efficient Acoustic Scene Classification Using Residual Normalization. 2021. arXiv: 2111.06531 [cs.SD].
[4] H. Phan et al. 'On multitask loss function for audio event detection and localization'. In: arXiv preprint arXiv:2009.05527 (2020).
[5] sharathadavanne. seld-dcase2021. https://github.com/sharathadavanne/seld-dcase2021. 2021.
[6] G. Pironkov, S. Dupont, T. Dutoit. 'Multi-task learning for speech recognition: an overview'. In: 24th European Symposium on Artificial Neural Networks (2016).
[7] D. Stoller, S. Ewert and S. Dixon. 'Jointly Detecting and Separating Singing Voice: A Multi-Task Approach'. In: arXiv:1804.01650 [cs, eess] (Apr. 2018).


Audio source separation Master internship, Lille (France), 2022

Advisors — Nathan Souviraà-Labastie, R&D Engineer, PhD, nathan.souviraa-labastie@steelseries.com — Damien Granger, R&D Engineer, damien.granger@steelseries.com

Company description

(The company description is identical to the one given for the first SteelSeries internship above.)

Approaches and topics for the internship

Audio source separation consists in extracting the different sound sources present in an audio signal, in particular by estimating their frequency distributions and/or spatial positions. Many applications are possible, from karaoke generation to speech denoising. In 2020, our separation approaches [11, 1] equaled the state of the art [12, 13] on a music separation task, and many avenues of improvement are possible in terms of implementation (details hereafter). The selected candidate will work on one or several of the following topics according to her/his aspirations, skills and bibliographic findings. Candidates can also propose their own topic. She/he will also have the chance to work on our substantial internal datasets.

New core algorithm. Machine learning is a fast-changing research domain, and an algorithm can move from state of the art to obsolete in less than a year (see for instance the recent advances in music source separation [9, 3]). The task would be to try recent powerful neural network approaches, such as new architectures or unit types that have proved beneficial in other research fields. For instance, the encoding and decoding parts of [15] show a huge benefit compared to traditional audio codecs. Research domains outside audio (like computer vision) might also be considered as sources of inspiration. For instance, the approaches in [14, 6] have shown promising results on other tasks, and previous internal work [1] managed to bring those benefits to audio source separation. Conversely, approaches like [10, 5] were tested without benefit for the separation tasks that we target. Overall, the targeted benefits of a new approach can be of two kinds: either improvements in audio separation performance, or reduced computational costs (mainly CPU/GPU load and RAM usage).

Extension to multi-source. Another challenging problem would be to estimate all the different sources with a single network, either by selecting which source to output (while keeping a single network, as in [7]), or by outputting all sources at the same time. In the case of music, most state-of-the-art approaches [12] have historically addressed the backing-track problem (i.e., karaoke for instruments) as a one-instrument-versus-the-rest problem, hence using a specific network for each instrument when multiple instruments are present in the mix.

Pruning. Besides testing new architectures or unit types, pruning could be a simple and effective way to reduce computational costs. The original pruning principle is to remove the least influential neural units in order to avoid overfitting; here we would mainly be interested in reducing the total number of units and parameters. The theoretical, domain-agnostic literature [16, 4, 8], as well as the audio-specific literature [2], will be explored; a minimal sketch is given below. As the selected candidate would work on our most advanced model, this subject is an opportunity to have a direct impact on the company in a short time frame.
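For the pruning topic, a minimal sketch using PyTorch's built-in pruning utilities (the toy model and the 30% amount are illustrative assumptions; the internal separation model would be pruned layer by layer in the same way):

import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 257))

for module in model.modules():
    if isinstance(module, nn.Linear):
        # Zero out the 30% of weights with the smallest L1 magnitude.
        prune.l1_unstructured(module, name='weight', amount=0.3)
        prune.remove(module, 'weight')   # make the pruning permanent

zeros = sum((m.weight == 0).sum().item() for m in model.modules()
            if isinstance(m, nn.Linear))
total = sum(m.weight.numel() for m in model.modules() if isinstance(m, nn.Linear))
print(f'global sparsity: {zeros / total:.1%}')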

Skills

Who are we looking for? You are preparing an engineering degree or a master's degree, and preferably have knowledge in the development and implementation of advanced algorithms for digital audio signal processing. Machine learning skills are a plus. While not mandatory, notions in the following fields would be appreciated:
- Audio, acoustics and psychoacoustics
- Machine learning and artificial neural networks
- Audio effects in general: compression, equalization, etc.
- Statistics, probabilistic approaches, optimization
- Programming languages: Matlab, Python, PyTorch, Keras, TensorFlow
- Sound spatialization effects: binaural synthesis, ambisonics, artificial reverberation
- Voice recognition, voice command
- Voice processing effects: noise reduction, echo cancellation, array processing
- Computer programming and development: Max/MSP, C/C++/C#
- Audio editing software: Audacity, Adobe Audition, etc.
- Scientific publications and patent applications
- Fluency in English and French
- Intellectual curiosity

Other offers
https://nahimic.welcomekit.co/
https://www.welcometothejungle.co/companies/nahimic/jobs

6-5 Internship position at Telecom-Paris on Deep learning approaches for social computing

           

*Place of work* Telecom Paris, Palaiseau (Paris outskirts)

 

*Starting date* From February 2022 (but can start later)

 

*Duration* 4-6 months

 

*Context*

The intern will take part in the REVITALISE project funded by ANR.

The research activity of the internship will bring together the research topics of Prof. Chloé Clavel [Clavel] of the S2a [SSA] team at Telecom-Paris (social computing [SocComp]), Dr. Mathieu Chollet [Chollet] from the University of Glasgow (multimodal systems for social skills training), and Dr. Beatrice Biancardi [Biancardi] (social behaviour modelling) from CESI Engineering School, Nanterre.

 

*Candidate profile*

As a minimum requirement, the successful candidate should have:

• A master degree in one or more of the following areas: human-agent interaction, deep learning, computational linguistics, affective computing, reinforcement learning, natural language processing, speech processing

• Excellent programming skills (preferably in Python)

• Excellent command of English

• The desire to do an academic thesis at Telecom-Paris after the internship

 

*How to apply*

The application should be formatted as **a single pdf file** and should include:

• A complete and detailed curriculum vitae

• A cover letter

• The contact details of two referees

The pdf file should be sent to the three supervisors, Chloé Clavel, Beatrice Biancardi and Mathieu Chollet: chloe.clavel@telecom-paris.fr, bbiancardi@cesi.fr, mathieu.chollet@glasgow.ac.uk

 

           

Multimodal attention models for assessing and providing feedback on users’ public speaking ability

 

*Keywords* human-machine interaction, attention models, recurrent neural networks, Social Computing, natural language processing, speech processing, non-verbal behavior processing, multimodality, soft skills, public speaking

 

*Supervision* Chloé Clavel, Mathieu Chollet, Beatrice Biancardi

 

*Description* Oral communication skills are essential in many situations and have been identified as core skills of the 21st century. Technological innovations have enabled social skills training applications which hold great training potential: speakers’ behaviors can be automatically measured, and machine learning models can be trained to predict public speaking performance from these measurements and subsequently generate personalized feedback to the trainees.

The REVITALISE project proposes to study explainable machine learning models for the automatic assessment of public speaking and for automatic feedback production to public speaking trainees. In particular, the recruited intern will address the following points:

-   identify relevant public speaking datasets and prepare them for model training

-   propose and implement multimodal machine learning models for public speaking assessment and compare them to existing approaches in terms of predictive performance

-   integrate the public speaking assessment models into a public speaking training interface to produce feedback, and evaluate the usefulness and acceptability of the produced feedback in a user study

The results of the project will help to advance the state of the art in social signal processing, and will further our understanding of the performance/explainability trade-off of these models.

 

The compared models will include traditional machine learning models proposed in previous work [Wortwein] and sequential neural approaches (recurrent networks) that integrate attention models, as a continuation of the work done in [Hemamou], [Ben-Youssef]. The feedback production interface will extend a system developed in previous work [Chollet21].
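As a minimal sketch of this kind of sequential attention model (PyTorch; the feature dimension and pooling scheme are illustrative assumptions, not the HireNet architecture), a GRU can encode a sequence of multimodal feature vectors while an attention layer pools it into a single public-speaking score, the attention weights offering a handle on explainability:

import torch
import torch.nn as nn

class AttentiveScorer(nn.Module):
    def __init__(self, n_features=64, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)      # one attention weight per frame
        self.out = nn.Linear(hidden, 1)       # public-speaking score

    def forward(self, x):                     # x: (batch, frames, n_features)
        h, _ = self.rnn(x)                    # (batch, frames, hidden)
        w = torch.softmax(self.attn(h), dim=1)
        pooled = (w * h).sum(dim=1)           # attention-weighted pooling
        return self.out(pooled).squeeze(-1), w.squeeze(-1)

model = AttentiveScorer()
feats = torch.randn(2, 300, 64)   # e.g. prosodic + gestural features per frame
score, attention = model(feats)
# 'attention' indicates which frames drove the score, i.e. which moments of
# the talk the model considered decisive.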

 

Selected references of the team:

[Hemamou] L. Hemamou, G. Felhi, V. Vandenbussche, J.-C. Martin, C. Clavel, HireNet: a Hierarchical Attention Model for the Automatic Analysis of Asynchronous Video Job Interviews.  in AAAI 2019, to appear

[Ben-Youssef]  Atef Ben-Youssef, Chloé Clavel, Slim Essid, Miriam Bilac, Marine Chamoux, and Angelica Lim.  Ue-hri: a new dataset for the study of user engagement in spontaneous human-robot interactions.  In  Proceedings of the 19th ACM International Conference on Multimodal Interaction, pages 464–472. ACM, 2017.

[Wortwein] Torsten Wörtwein, Mathieu Chollet, Boris Schauerte, Louis-Philippe Morency, Rainer Stiefelhagen, and Stefan Scherer. 2015. Multimodal Public Speaking Performance Assessment. In Proceedings of the 2015 ACM on International Conference on Multimodal Interaction (ICMI '15). Association for Computing Machinery, New York, NY, USA, 43–50.

[Chollet21] Chollet, M., Marsella, S., & Scherer, S. (2021). Training public speaking with virtual social interactions: effectiveness of real-time feedback and delayed feedback. Journal on Multimodal User Interfaces, 1-13.

 

Other references:

[TPT] https://www.telecom-paristech.fr/eng/ 

[IMTA] https://www.imt-atlantique.fr/fr

[SocComp.] https://www.tsi.telecom-paristech.fr/recherche/themes-de-recherche/analyse-automatique-des-donnees-sociales-social-computing/

[SSA] http://www.tsi.telecom-paristech.fr/ssa/#

[PACCE] https://www.ls2n.fr/equipe/pacce/

[Clavel] https://clavel.wp.imt.fr/publications/

[Chollet] https://matchollet.github.io/

[Biancardi] https://sites.google.com/view/beatricebiancardi

-Rasipuram, Sowmya, and Dinesh Babu Jayagopi. 'Automatic multimodal assessment of soft skills in social interactions: a review.' Multimedia Tools and Applications (2020): 1-24.

-Sharma, Rahul, Tanaya Guha, and Gaurav Sharma. 'Multichannel attention network for analyzing visual behavior in public speaking.' 2018 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 2018.

-Acharyya, R., Das, S., Chattoraj, A., & Tanveer, M. I. (2020, April). FairyTED: A Fair Rating Predictor for TED Talk Data. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 34, No. 01, pp. 338-345).


6-6(2021-12-18) Master or PhD internships

Hi,


You are in a Master's or PhD program (in NLP or speech processing) and want to do a co-supervised internship in 2022?
This offer is for you! https://tinyurl.com/intern-nle-sb (You can apply online from the web link.)
—————

Joint ASR and Repunctuation for Better Machine and Human Readable Transcripts 

 

6-7(2021-12-19) 2 research engineer positions ALAIA, IRIT, Toulouse, France

To strengthen its team, the joint laboratory ALAIA, dedicated to Artificial-Intelligence-assisted language learning, is offering two research engineer positions (12 months).

ALAIA focuses on oral expression and comprehension in a target foreign language (L2). In collaboration with its two partners, one academic (IRIT) and one industrial (Archean Technologie), as well as with experts in language didactics, the missions will consist in designing, developing and integrating innovative services based on the analysis of L2 learners' productions and on the detection and characterization of errors, from the phonetic to the linguistic level. The missions will be refined according to the profiles of the recruited candidates.

The expected skills cover automatic speech and language processing as well as machine learning methods.

Applications should be sent to Isabelle Ferrané (isabelle.ferrane@irit.fr) and Lionel Fontan (lfontan@archean.tech). Do not hesitate to contact us for further information.


6-8(2021-12-26) Research associate and postdoc at Heriot-Watt University, Edinburgh, UK

1) Research Associate in Safe Conversational AI (re-advertising)

 
Closing date: 9th January 2022
 
 
We seek a candidate with experience in neural approaches to natural language generation, or closely related fields, including Vision + Language tasks.
 
Applicants interested in social computing tasks, such as online abuse detection and mitigation, as well as interdisciplinary candidates with a wider interest in ethical and social implications of NLP are also encouraged to apply.
 
The opportunity:
 
This is an exciting opportunity to work with a team developing safer AI methods, bringing together AI researchers, researchers working on formal verification methods, and researchers working on computational law. You will contribute your insight and experience in researching and developing deep learning methods for Conversational AI and closely related areas.
 
The project is led by Heriot-Watt University and in cooperation with the University of Edinburgh and Strathclyde, see https://gow.epsrc.ukri.org/NGBOViewGrant.aspx?GrantRef=EP/T026952/1
 
 
2) Postdoctoral Research Assistant in children's perceptions of technology
 
Closing date: 6th January 2022

 
We are looking for a creative and self-motivated researcher to investigate children's knowledge and perceptions of conversational agents such as Alexa. The position is located at the University of Edinburgh's Moray House School of Education.
 
The opportunity:
 
This is an exciting opportunity to work with an interdisciplinary team of computer scientists and social psychologists at three Scottish universities on a project to address gender bias in conversational agents. You will contribute your insight and experience into researching technology with and for children to the team.
 
The project is led by Heriot-Watt University and in cooperation with the University of Edinburgh and Strathclyde, see https://gow.epsrc.ukri.org/NGBOViewGrant.aspx?GrantRef=EP/T024771/1
 
For any enquiries, please get in touch!
Prof. Verena Rieser 
Heriot-Watt University, Edinburgh
https://sites.google.com/site/verenateresarieser/
 

6-9(2022-01-10) Conference coordinator, ISCA.

The International Speech Communication Association (ISCA) (www.isca-speech.org)

seeks applications for a:

 

Conference Coordinator (f/m/d)

 

This is a limited-term contract at 18 hours/week, with the prospect of extension to an unlimited-term contract.

 

The International Speech Communication Association (ISCA) is a scientific non-profit organization according to the French law 1901. The purpose of the association is to promote, in an international world-wide context, activities and exchanges in all fields related to speech communication science and technology. The association is aimed at all persons and institutions interested in fundamental research and technological development that aims at describing, explaining and reproducing the various aspects of human communication by speech, e.g. phonetics, linguistics, computer speech recognition and synthesis, speech compression, speaker recognition, aids to medical diagnosis of voice pathologies.

 

One of the core activities of ISCA is to ensure the continuous organization of its flagship conference, Interspeech. The conference is organized each year in a different country by a different team; it typically attracts 1500 or more participants from all over the world. The role of this newly-created position of conference coordinator is to ensure a smooth organization of the conference over the years, according to well-established standards, and taking into account the aims ISCA has with the conference.

 

The role requires, among other things, taking the lead on the following activities:

  • Support in the organization and set-up of the Technical Program
  • Operation and maintenance of the central electronic review system
  • Maintenance of the reviewer database
  • Support in the organization of the Technical Program Committee meeting
  • Maintenance of paper templates and author/presenter instructions
  • Support for the production of proceedings
  • Communication with the organizing team and possible Professional Conference Organizers about all matters of the conference organization, such as planning of calls, committees, meeting space, technical equipment and tools, session planning, event planning, etc.
  • Communication with the ISCA board about all matters of the conference
  • Communication with the ISCA Administrative Manager about membership matters, meetings, etc.
  • Outbound communication via web, social media, etc.

 

Required competences:

 

We are looking for a self-motivated person who is enthusiastic about the organization of international scientific events, and has excellent organizational and communication skills (mostly in English). The person does not need to have a scientific background in speech communication and technology, but should be able to understand the scientific background, as well as the aims ISCA has with the organization of Interspeech conferences. A proven expertise in the organization of large-scale events is a must, and of scientific events is a plus.

 

The job can be carried out remotely from any location. A flexible allocation of time over the year is required, depending on the status of preparations (the conference is typically organized in September, and there is an expected increase in activity from March to September). The willingness to physically attend preparatory meetings and the conference is required.

 

Deadline: 15 March 2022.

 

 

 


6-10(2022-01-28) ASSISTANT OR ASSOCIATE PROFESSOR IN SPEECH AND LANGUAGE TECHNOLOGY (tenure track) at Aalto University, Finland

Aalto opens a call for an assistant or associate professor in speech and language technology. 

https://aalto.wd3.myworkdayjobs.com/PrivateJobPosting/job/Otaniemi-Espoo-Finland/ASSISTANT-OR-ASSOCIATE-PROFESSOR-IN-SPEECH-AND-LANGUAGE-TECHNOLOGY--tenure-track-_R32437


6-11(2022-02-01) Two post-docs at ADAPT, Dublin, Ireland
We are looking to recruit two Post-Doctoral Researchers to join the UCD team in the ADAPT Research Centre (www.adaptcentre.ie).  These projects are part of the ADAPT Digital Content Transformation Strand.
 
1) Harnessing Speech for Social Inclusion
Rather than focusing solely on accuracy and error rates, the evaluation of speech recognition systems should be contextualized with respect to how well they perform when the interlocutor is not the 'typical' native speaker (e.g. a senior citizen, a citizen with disabilities, or a non-native speaker). Through publicly engaged research, data will be collected on how well existing speech technology is serving our citizens. A systematic evaluation of the output data produced by existing ASR systems, and of the interactions that arise when an ASR 'converses' with a range of users, will enable a categorization of interaction issues and error patterns that need to be accounted for when developing applications which provide interfaces to essential services. This project will involve the development of a system which facilitates diagnostic evaluation of ASRs in a variety of interaction scenarios, providing linguistic cues for the augmentation of the ASR and building on the low-resource MT technologies being developed in other ADAPT projects.
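As a minimal sketch of this contextualized evaluation (using the jiwer package; the data below is an illustrative placeholder), error rates can be broken down by speaker group rather than reported as one global figure:

from jiwer import wer

transcripts = [  # (speaker group, reference, ASR hypothesis)
    ("native",     "book me a doctor appointment", "book me a doctor appointment"),
    ("non-native", "book me a doctor appointment", "look me at doctor apartment"),
    ("senior",     "renew my bus pass",            "renew my best pass"),
]

by_group = {}
for group, ref, hyp in transcripts:
    by_group.setdefault(group, []).append(wer(ref, hyp))

for group, scores in by_group.items():
    # Disparities across groups are exactly the interaction issues the
    # project aims to categorize.
    print(f"{group}: mean WER = {sum(scores) / len(scores):.2f}")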
 
2) Embedding of Multi-level Speech Representations
While deep learning has led to huge performance gains in speech recognition and synthesis, only recently has more focus been placed on what deep learning may be able to uncover about the patterns which humans use intuitively when interacting via speech and which distinguish native from non-native speakers. Such patterns are typically the focus of speech perception and experimental phonetic studies. This project aims to build on the notion of multi-linear or multi-tiered representations of speech, creating embeddings of multiple (sub-word) levels of representation (phonetic features, phonemes, syllable pieces and syllables), enabling a closer investigation of the systematicity and variability of speech patterns. This research will find application in non-native speech recognition, in speech adaptation/accommodation for native and non-native interactions, and in pronunciation training scenarios.
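A minimal sketch of such a multi-tiered embedding (PyTorch; the vocabulary sizes, tiers and dimensions are illustrative assumptions): each level gets its own embedding table, and the aligned tiers are concatenated into one joint representation that can then be probed for systematic patterns:

import torch
import torch.nn as nn

class MultiTierEmbedding(nn.Module):
    def __init__(self, n_feats=30, n_phonemes=50, n_syllables=5000, dim=32):
        super().__init__()
        self.feat = nn.Embedding(n_feats, dim)       # phonetic features
        self.phon = nn.Embedding(n_phonemes, dim)    # phonemes
        self.syll = nn.Embedding(n_syllables, dim)   # syllables

    def forward(self, feat_ids, phon_ids, syll_ids):  # aligned id sequences
        return torch.cat(
            [self.feat(feat_ids), self.phon(phon_ids), self.syll(syll_ids)],
            dim=-1)                                   # (batch, len, 3 * dim)

emb = MultiTierEmbedding()
feat_ids = torch.randint(0, 30, (1, 10))
phon_ids = torch.randint(0, 50, (1, 10))
syll_ids = torch.randint(0, 5000, (1, 10))
vectors = emb(feat_ids, phon_ids, syll_ids)   # shape (1, 10, 96)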
 
ADAPT is the world-leading SFI research centre for AI Driven Digital Content Technology, coordinated by Trinity College Dublin and based within Dublin City University, University College Dublin, Technological University Dublin, Maynooth University, Munster Technological University, Athlone Institute of Technology, and the National University of Ireland Galway. ADAPT’s research vision is to pioneer new forms of proactive, scalable, and integrated AI-driven Digital Content Technology that empower individuals and society to engage in digital experiences with control, inclusion, and accountability with the long-term goal of a balanced digital society by 2030. ADAPT is pioneering new Human Centric AI techniques and technologies including personalisation, natural language processing, data analytics, intelligent machine translation human-computer interaction, as well as setting the standards for data governance, privacy and ethics for digital content.

ADAPT Digital Content Transformation Strand
From the algorithmic perspective, new machine learning techniques will enable more users to engage meaningfully, and in a measurably effective manner, with the increasing volumes of content available globally, while ensuring the widest linguistic and cultural inclusion. The strand will advance the effective, robust, integrated machine learning algorithms needed to provide multimodal content experiences with new levels of accuracy, multilingualism and explainability.
 
Full details of the positions, requirements and a link to submit applications can be found at https://www.ucd.ie/workatucd/jobs/. The closing date is 17:00 (local Irish time) on Monday 14 February 2022.
 
The references are:
 
1) 014028 for Harnessing Speech for Social Inclusion
2) 014079 for Embedding of Multi-level Speech Representations
 

6-12(2022-02-02) Full Professor position, Sorbonne Université, Paris, France
A Full Professor (Professeure / Professeur des Universités) position in Artificial Intelligence: theory and applications is open at Sorbonne Université, with a research affiliation in one of the following laboratories: ISIR, LIB, LIMICS or LIP6.

Full Professor (Professeure / Professeur des Universités)
Section 27 – Computer Science
Profile: Artificial intelligence: theory and applications

Application deadline: 4 March 2022, 4 pm

Teaching:
The recruited professor will contribute significantly to the Bachelor's degree in computer science, whose teaching needs cover the whole discipline (algorithmics; object-oriented, concurrent, functional and web programming; discrete mathematics; data structures; systems; architecture; networks; compilation; databases; etc.), as well as to the Master's degree in computer science, in particular the ANDROIDE, BIM or DAC tracks.

Research:
The position is open to all areas of AI and its applications. The successful candidate will join one of the laboratories ISIR, LIB, LIMICS or LIP6, depending on their research topics and/or on projects involving several host laboratories within SCAI (Sorbonne Center for Artificial Intelligence). The professor must be able to coordinate national and international collaborative programs. Past participation in multidisciplinary projects will be appreciated.

Link to the job description: https://www.galaxie.enseignementsup-recherche.gouv.fr/ensup/ListesPostesPublies/FIDIS/0755890V/FOPC_0755890V_391.pdf

Thank you in advance for your help in sharing this offer.
Best regards,



 

6-13(2022-02-11) 2 research fellowship grants for collaboration in research activities, Kore University of Enna - Enna (Sicily), Italy

A public selection procedure, based on qualifications and an interview, is open for the award of 2 research fellowships for collaboration in research activities.

 

Project main aim: Multidisciplinary Research on AI for Health.

Location: Kore University of Enna - Enna (Sicily), Italy

Funding Programme: Research Projects of National Relevance - PRIN 2017

 

Description: The project focuses on dysarthric speech in idiopathic Parkinson's disease, produced by speakers of two varieties of Italian that show different segmental (consonantal, vocalic) and prosodic characteristics. The project as a whole aims at identifying phonetic features that impact speech intelligibility and accuracy, separating variability due to dysarthria from features due to sociolinguistic variation, and developing perspectives and tools for clinical practice that take variation into account.

 

Duration: 12 months

 

Link to apply: https://unikore.it/index.php/it/contratti-di-ricerca/item/41282-d-p-n-33-2022-2-assegni-di-ricerca-presso-l-universita-degli-studi-di-enna-kore

 

For further information, contact Prof. Sabato Marco Siniscalchi, e-mail: marco.siniscalchi-at-unikore.it

 


6-14(2022-02-08) PhD position at Delft University, The Netherlands

Job description

 

One of the most pressing issues holding robots back from taking on more tasks and reaching widespread deployment in society is their limited ability to understand human communication and take situation-appropriate actions. This PhD position is dedicated to addressing this gap by developing the underlying data-driven models that enable a robot to engage with humans in a socially aware manner.

This position is specifically targeted at the development of an argumentative dialogue system for human-robot interaction. The PhD candidate will explore how to fuse multimodal behaviour to infer a person's perspective, and will use, and further develop, reinforcement learning techniques to drive the robot's argumentative strategy for deliberating topics of current social importance such as global warming or vaccination.
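As a toy illustration of such a learned strategy (plain Python; the states, moves and rewards are illustrative placeholders, not the project's design), tabular Q-learning can select the robot's next argumentative move from the user's inferred stance:

import random
from collections import defaultdict

MOVES = ['present_evidence', 'rebut', 'concede', 'ask_clarification']
q = defaultdict(float)                     # Q[(state, move)] -> value
alpha, gamma, epsilon = 0.1, 0.95, 0.2

def choose_move(state):
    if random.random() < epsilon:          # explore
        return random.choice(MOVES)
    return max(MOVES, key=lambda m: q[(state, m)])   # exploit

def update(state, move, reward, next_state):
    best_next = max(q[(next_state, m)] for m in MOVES)
    q[(state, move)] += alpha * (reward + gamma * best_next - q[(state, move)])

# One simulated exchange: the state could encode the user's inferred stance,
# and the reward could come from engagement or persuasion measures.
s = 'user_skeptical'
m = choose_move(s)
update(s, m, reward=1.0, next_state='user_engaged')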

The ideal candidate will have a keen interest in speech technology and reinforcement learning and a strong background in interactive systems, and will design and run the experiments to evaluate the created hybrid-AI models through human-robot interaction.

Topics of interest:

1) long-term human-robot interaction

2) affective computing

3) NLP & argument mining

 

Requirements

 
  • MSc in Computer Science or a related field.
  • At least 3 years of programming experience in Python (Java or C++ is a plus).
  • Motivation to meet deadlines.
  • Affinity for design and social science research.
  • Interest in collaborating with colleagues from Industrial Design.
  • Willingness to teach and guide students.
  • The ability to work in a team, take initiative, and be results-oriented and systematic.


6-15(2022-02-17) Postdoctoral position at INRIA, Bordeaux, France

Postdoctoral position in Speech Processing at INRIA, Bordeaux, France

 

Title: Glottal source inverse filtering for the analysis and classification of pathological speech

Keywords: Pathological speech processing, Glottal source estimation, Inverse filtering, Machine learning, Parkinsonian disorders, Respiratory diseases

Contact and Supervisor: Khalid Daoudi (khalid.daoudi@inria.fr)

INRIA team: GEOSTAT (geostat.bordeaux.inria.fr)

Duration: 13 months (could be extended)

Starting date: between 01/04/2022 and 01/06/2022 (depending on the candidate availability)

Application : via https://recrutement.inria.fr/public/classic/en/offres/2022-04481

Salary: 2653€/month (before taxes, net salary 2132€)

Profile: PhD in signal/speech processing (or solid post-thesis experience in the field).

Required knowledge and background: solid knowledge of speech/signal processing; basics of machine learning; programming in Matlab and Python.

Scientific research context

During this century, there has been an ever-increasing interest in the development of objective vocal biomarkers to assist in the diagnosis and monitoring of neurodegenerative diseases and, recently, respiratory diseases, because of the Covid-19 pandemic. The literature is now relatively rich in methods for the objective analysis of dysarthria, a class of motor speech disorders [1], where most of the effort has been devoted to speech impaired by Parkinson's disease. However, relatively few studies have addressed the challenging problem of discrimination between subgroups of Parkinsonian disorders which share similar clinical symptoms, particularly in early disease stages [2]. The analysis of speech impaired by respiratory diseases is a relatively new field (with existing developments in very specialized areas), but it has attracted great attention since the beginning of the pandemic. The speech production mechanism is essentially governed by five subsystems: respiratory, phonatory, articulatory, nasalic and prosodic. In the framework of pathological speech, the phonatory subsystem is the most studied one, usually using sustained phonation (prolonged vowels). Phonatory measurements are generally based on perturbation and/or cepstral features. Though these features are widely used and accepted, they are limited by the fact that the produced speech can be a product of some or all of the other subsystems, which all contribute to the phonatory performance. An appealing way to bypass this problem is to extract the glottal source from speech in order to isolate the phonatory contribution. This framework is known as glottal source inverse filtering (GSIF) [3]. The primary objective of this proposal is to investigate GSIF methods in pathological speech impaired by dysarthria and respiratory deficit. The second objective is to use the resulting glottal parameterizations as inputs to basic machine learning algorithms in order to assist in the discrimination between subgroups of Parkinsonian disorders (Parkinson's disease, Multiple-System Atrophy, Progressive Supranuclear Palsy) and in the monitoring of respiratory diseases (Covid-19, Asthma, COPD). Both objectives benefit from a rich dataset of speech and other biosignals recently collected in the framework of two clinical studies in partnership with university hospitals in Bordeaux and Toulouse (for Parkinsonian disorders) and in Paris (for respiratory diseases).

Work description

GSIF consists in building a model to filter out the effect of the vocal tract and lip radiation from the recorded speech signal. This difficult problem, already hard for healthy speech, becomes even more challenging for pathological speech. We will first investigate time-domain methods for the parameterization of the glottal excitation using glottal opening and closure instants. This implies the development of a robust technique to estimate these critical time instants from dysarthric speech. We will then explore the alternative approach of learning a parametric model of the entire glottal flow. Finally, we will investigate frequency-domain methods to determine relationships between different spectral measures and the glottal source. These algorithmic developments will be evaluated and validated using a rich set of biosignals obtained from patients with Parkinsonian disorders and from healthy controls. The biosignals are electroglottography and aerodynamic measurements of oral and nasal airflow, as well as intra-oral and sub-glottic pressure. After GSIF analysis of dysarthric speech, we will study the adaptation/generalization to speech impaired by respiratory deficits. The developments will be evaluated using manual annotations, by an expert phonetician, of speech signals obtained from patients with respiratory deficits and from healthy controls. The second aspect of the work consists in applying machine learning algorithms (LDA, logistic regression, decision trees, SVM, etc.) using standard tools (such as scikit-learn). The goal here will be to study the discriminative power of the resulting speech features/measures and their complementarity with other features related to different speech subsystems. The ultimate goal is to conceive robust algorithms to assist, first, in the discrimination between Parkinsonian disorders and, second, in the monitoring of respiratory deficit.
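As a minimal sketch of the core GSIF idea (NumPy/SciPy; the LPC order and windowing are illustrative choices, and serious methods such as IAIF add lip-radiation and glottal pre-emphasis stages), an all-pole vocal-tract model can be estimated from a speech frame and then inverted:

import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def lpc(frame, order):
    """Autocorrelation-method LPC; returns inverse filter [1, -a1, ..., -ap]."""
    r = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    a = solve_toeplitz((r[:order], r[:order]), r[1:order + 1])
    return np.concatenate(([1.0], -a))

fs = 16000
frame = np.random.randn(480) * np.hanning(480)   # stand-in for a voiced frame
A = lpc(frame, order=2 + fs // 1000)             # rule-of-thumb LPC order
glottal_derivative = lfilter(A, [1.0], frame)    # inverse-filter the frame
glottal_flow = np.cumsum(glottal_derivative)     # crude integration step

The resulting glottal waveform is what gets parameterized (opening/closure instants, spectral measures) and fed to the classifiers mentioned below.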

Work synergy
- The postdoc will interact closely with an engineer who is developing an open-source software architecture dedicated to pathological speech processing. The validated algorithms will be implemented in this architecture by the engineer, under the co-supervision of the postdoc.
- Given the multidisciplinary nature of the proposal, the postdoc will interact with the clinicians participating in the two clinical studies.

References:
[1] J. Duffy. Motor Speech Disorders: Substrates, Differential Diagnosis, and Management. Elsevier, 2013.
[2] J. Rusz et al. Speech disorders reflect differing pathophysiology in Parkinson's disease, progressive supranuclear palsy and multiple system atrophy. Journal of Neurology, 262(4), 2015.
[3] P. Alku. Glottal inverse filtering analysis of human voice production – A review of estimation and parameterization methods of the glottal excitation and their applications. Sadhana – Academy Proceedings in Engineering Sciences, Vol. 36, Part 5, pp. 623-650, 2011.


6-16(2022-02-15) CIFRE PhD position, L3i La Rochelle / EasyChain, France



The young company EasyChain and the L3i research laboratory in La Rochelle invite applications for a CIFRE PhD position in the field of conversational agent development.

Details of the offer are available at this address:
Dynamic Human-Agent Interactions adapted to users' profiles.

The successful candidate must hold a Master's degree or an equivalent diploma in computer science or natural language processing. A solid background in machine learning and good communication skills in English are required.

The PhD will take place mainly at EasyChain in Niort, in a French-speaking environment.

If you are interested in this position, please send the following information to Antoine Doucet (antoine.doucet@univ-lr.fr) and Ahmed Hamdi (ahmed.hamdi@univ-lr.fr):
 * Detailed CV
 * Bachelor's and Master's diplomas
 * Letters of recommendation

Applications will be considered until 1 March 2022.

Best regards,

Antoine Doucet


6-17(2022-04-05) Junior professor position at Université du Mans, France
Université du Mans is opening a Junior Professor Chair (Chaire Professeur Junior) in multimodal language processing.

Applications must be submitted on Galaxie before 2 May 2022.
 

Description of the research project

The main objective is to develop a multimodal and multilingual language processing AI based on a shared representation space for the speech and text modalities across different languages. The candidate will develop research activities that strengthen the transversal nature of these representations through a relevant combination of modalities (e.g., video and text, or text and speech), tasks (e.g., speaker characterization and speech synthesis, speech understanding and machine translation, speech recognition and automatic summarization) and languages. This research will aim to develop automatic systems that put the human at the heart of processing, using active learning approaches and exploring questions of explainability and interpretability, so that naive users can teach the automatic system or extract understandable elements from it. The project will also aim at strengthening existing collaborations (Facebook, Orange, Airbus) and creating new partnerships (Oracle, HuggingFace, etc.).

The research project will take place within the LST team, whose goal is to develop a multimodal and multilingual representation space for the speech and text modalities. The Junior Professor is expected to develop his/her own research directions among the topics already existing in the LST team and to develop hybrid approaches, mixing for instance speaker characterization and speech synthesis, or speech translation and speech understanding. He/she should also join the team's strategy of involving the human in the loop for deep learning systems and work towards better explainability/interpretability of speech processing algorithms.

Description of the teaching project

The candidate will join the teaching team of the Master in Artificial Intelligence of the computer science department of the Faculty of Science and Technology of Université du Mans. Their involvement will strengthen the teaching of deep learning (self-supervised learning, GANs, Transformers, methodologies and protocols for AI, etc.), but also of infrastructures dedicated to machine learning and data science (distributed computing, SLURM, MPI), the use of a computing cluster (ssh, tmux, jupyter-lab, conda) and cloud computing. Building on recognized expertise in natural language and speech processing, the teaching team wishes to broaden its offer by adapting content to other types of data (images, time series generated by various kinds of sensors, graphs, etc.) in order to meet the specific machine learning needs of local and regional industry. This effort is part of the teaching team's ambition to develop work-study and continuing education programs with industrial partners, but also for an academic audience of researchers and faculty members outside computer science who wish to develop machine learning skills.

Teaching activities will take place within the Master of Computer Sciences and Artificial Intelligence at Le Mans University. The candidate is expected to strengthen the teaching on deep learning (self-supervised training, GANs, Transformers, machine learning methodology and protocols, etc.) and also to teach tools for distributed learning (SLURM, MPI, ssh, tmux, jupyter-lab, conda, etc.) and cloud computing. In the medium term, the candidate will contribute to the development of continuing education in artificial intelligence adapted to the needs of local companies and industry, but also of researchers who are not computer science specialists.

Application requirements

Hold a PhD.

For candidates holding, or having ceased to hold within the last eighteen months, a teacher-researcher position of a level equivalent to the position to be filled in a higher education institution of a country other than France: titles, works and any element making it possible to assess the level of these duties, which may justify an exemption from the doctorate requirement.

Contact 

Antoine LAURENT

Antoine.laurent@univ-lemans.fr

Anthony LARCHER

Anthony.larcher@univ-lemans.fr


6-18(2022-03-17) PhD position in multimodal deep fake detection, IRISA, Lannion, France
The EXPRESSION team at IRISA invites applications for a PhD position in the field of multimodal deep fake detection.

Details of the offer are available at this address:
MUDEEFA - MUltimodal DeEEp Fake detection using Text-To-Speech Synthesis, Voice Conversion and Lips Reading
 
The candidate will conduct cutting-edge applied research in one or more of the following areas: signal processing, statistical machine learning, speech and gesture recognition. He or she should have excellent computer programming skills (e.g., C/C++, Python/Perl) and knowledge of machine learning, signal processing or human-computer interaction.
The position requires a Master's degree in computer science or an engineering degree conferring the title of Master in computer science.

The PhD will take place in Lannion (Côtes d'Armor), within the EXPRESSION team.

Please send a detailed CV, a cover letter, one or more reference letters, and academic records from the previous degree (Master's or engineering degree conferring a Master's title) to all the contacts indicated in the offer, before Friday 8 April 2022 (strict deadline).

Bien cordialement

Arnaud Delhay-Lorrain

 

Arnaud Delhay-Lorrain - Associate Professor
IRISA - Université de Rennes 1 
IUT de Lannion - Département Informatique
Rue Edouard Branly - BP 30219
F-22 302 LANNION Cedex
Back  Top

6-19(2022-03-18) 3 speech-to-speech translation positions available at Meta/Facebook FAIR
We are seeking research scientists, research engineers and postdoctoral researchers with expertise in speech translation and related fields to join our team.

FAIR's mission is to advance the state of the art in artificial intelligence through open research for the benefit of all. As part of this mission, our goal is to provide real-time, natural-sounding translations at near-human quality. The technology we develop will enable multilingual live communication. We aim for our technology to be inclusive: it should support both written and unwritten languages. Finally, in order to preserve the authenticity of the original content, especially for more creative content, we aim to preserve non-lexical elements in the generated audio translations. Ideal candidates will have expertise in speech translation or related fields such as speech recognition, machine translation or speech synthesis. Please send an email with your CV to juancarabina@fb.com if you are interested in applying.
Back  Top

6-20(2022-03-21) PhD or postdoc position at Laboratoire d'Informatique de Grenoble, France
PhD or postdoctoral position within the Popcorn project (a collaborative project with two companies),
supervised by Benjamin Lecouteux, Gilles Sérasset and Didier Schwab (Laboratoire d'Informatique de Grenoble, Groupe d'Étude en Traduction Automatique/Traitement Automatisé des Langues et de la Parole)


Title: Peuplement OPérationnel de bases de COnnaissances et Réseaux Neuronaux (operational knowledge base population with neural networks)


The project addresses the problem of semi-automatically enriching a knowledge base through the automatic analysis of texts. In order to achieve a breakthrough innovation in Natural Language Processing (NLP) for security and defence customers, the project focuses on the processing of French (although the chosen approaches should later generalize to other languages). The work will cover several aspects:
- automatic annotation of text documents by detecting mentions of entities present in the knowledge base and disambiguating them semantically (polysemy, homonymy);
- discovery of new entities (people, organizations, equipment, events, places), of their attributes (a person's age, the reference number of a piece of equipment, etc.), and of relations between entities (a person works for an organization, people involved in an event, etc.). Particular attention will be paid to adapting flexibly to changes in the ontology and to the role of the user and the analyst in validating and capitalizing on the extracted information.
The project is organized around the following three research axes (a toy illustration of the second axis follows this list):
- generation of synthetic textual data from reference texts;
- recognition of the entities of interest, their associated attributes and the relations between entities;
- semantic disambiguation of entities (e.g., in case of homonymy).
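As a toy illustration of the second axis (recognition of entities of interest), the snippet below runs an off-the-shelf French named entity recognizer through the Hugging Face transformers pipeline. The checkpoint name is an assumption made for the example only and is in no way the project's chosen model:

# Toy illustration of named entity recognition on French text.
# Assumes the `transformers` library; the checkpoint name is a hypothetical
# choice, any French token-classification model could be substituted.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Jean-Baptiste/camembert-ner",  # assumed publicly available checkpoint
    aggregation_strategy="simple",        # merge sub-word pieces into entity spans
)

text = "Jean Dupont travaille pour Thales à Grenoble."
for entity in ner(text):
    print(entity["entity_group"], entity["word"], round(entity["score"], 2))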


Desired profile:
    - solid experience in programming and machine learning for Natural Language Processing (NLP), in particular deep learning
    - Master's degree or PhD in machine learning or computer science; a background in NLP or computational linguistics will be a plus
    - good knowledge of French


Practical details:
    - the thesis starts in autumn 2022
    - full-time doctoral contract at LIG (Getalp team) for 3 years (salary: at least 1768€ gross per month)
    - or a full-time postdoctoral contract at LIG (Getalp team) for 20 months (salary: at least 2395€ gross per month)



 

Scientific environment:


  • The PhD or postdoc will be carried out within the Getalp team of the LIG laboratory (https://lig-getalp.imag.fr/).
  • The recruited person will be hosted within the team, which offers a stimulating, multinational and pleasant working environment.
  • The resources needed to carry out the (post)doctorate will be provided, both for travel in France and abroad and for equipment (personal computer, access to the LIG GPU servers and to the CNRS Jean Zay computing facility).


How to apply?

 

  • To apply for the PhD position, candidates must hold a Master's degree in computer science, machine learning or natural language processing (obtained before the start of the doctoral contract; students currently completing their second year of a Master's programme may therefore apply).
  • To apply for the postdoc position, candidates must hold a PhD in computer science, machine learning or natural language processing (obtained before the start of the contract; students whose defence is scheduled before the end of September 2022 may therefore apply).
  • Candidates must have a good knowledge of machine learning methods and, ideally, experience in corpus collection and management.
  • They must also have a good knowledge of the French language.
Applications should include: a CV + a cover letter/message + Master's transcripts + letter(s) of recommendation, and should be sent to Benjamin Lecouteux (benjamin.lecouteux@univ-grenoble-alpes.fr), Gilles Sérasset (gilles.serasset@univ-grenoble-alpes.fr) and Didier Schwab (Didier.Schwab@univ-grenoble-alpes.fr).
Back  Top

6-21(2022-04-04) PhD position at INRIA-LORIA, Nancy, France

2022-04676 - PhD Position F/M: Nongaussian models for deep learning based audio signal processing

Level of qualifications required: graduate degree or equivalent
Function: PhD position

Context
The PhD student will join the Multispeech team of Inria, the largest French research group in the field of speech processing. He/she will benefit from the research environment and from the team's expertise in audio signal processing and machine learning; the team includes many researchers, PhD students, post-docs, and software engineers working in this field. He/she will be supervised by Emmanuel Vincent (Senior Researcher, Inria) and Paul Magron (Researcher, Inria).

Assignment
Audio signal processing and machine listening systems have achieved considerable progress over the past years, notably thanks to the advent of deep learning. Such systems usually process a time-frequency representation of the data, such as a magnitude spectrogram, and model its structure using a deep neural network (DNN). Generally speaking, these systems implicitly rely on the local Gaussian model [1], an elementary statistical model for the data. Even though it is convenient to manipulate, this model builds upon several hypotheses which are limiting in practice: (i) circular symmetry, which boils down to discarding the phase information (the argument of the complex-valued time-frequency coefficients); (ii) independence of the coefficients, which ignores the inherent structure of audio signals (temporal dynamics, frequency dependencies); and (iii) Gaussian density, which is not observed in practice. Statistical audio signal modeling is an active research field, but its recent advances are usually not leveraged in deep learning-based approaches, so their potential is currently underexploited; besides, some of these advances are not yet mature enough to be fully deployed. The objective of this PhD is therefore to design advanced statistical signal models for audio which overcome the limitations of the local Gaussian model, and to combine them with DNN-based spectrogram modeling. The developed approaches will be applied to audio source separation and speech enhancement.

Main activities
The main objectives of the PhD student will be:
1. To develop structured statistical models for audio signals which alleviate the limitations of the local Gaussian model. In particular, the PhD student will focus on designing models that leverage properties originating from signal analysis, such as temporal continuity [2] or the consistency of the representation [3], in order to favor the interpretability and meaningfulness of the models. For instance, alpha-stable distributions have been exploited in audio for their robustness [4]. Anisotropic models are an interesting research direction since they overcome the circular symmetry assumption while enabling an interpretable parametrization of the statistical moments [5]. Finally, a careful design of the covariance matrix allows time and frequency dependencies to be incorporated explicitly [6].
2. To combine these statistical models with DNNs. This raises several technical difficulties regarding the design of, e.g., the neural architecture, the loss function, and the inference algorithm. The student will exploit and adapt the formalism developed in Bayesian deep learning, notably the variational autoencoding framework [7], as well as the inference procedures developed in DNN-free nongaussian models [8].
3. To validate these methods experimentally on realistic sound datasets. To that end, the PhD student will use public datasets such as LibriMix (speech) and MUSDB (music), which are reference datasets for source separation and speech enhancement.

The PhD student will disseminate his/her research results in international peer-reviewed journals and conferences. In order to promote reproducible research, these publications will be self-archived at each step of the publication lifecycle and made accessible through open access repositories (e.g., arXiv, HAL). The code will be integrated into Asteroid, the reference software for source separation and speech enhancement developed by Multispeech.

Bibliography
[1] E. Vincent, M. Jafari, S. Abdallah, M. Plumbley, M. Davies, Probabilistic modeling paradigms for audio source separation, Machine Audition: Principles, Algorithms and Systems, pp. 162-185, 2010.
[2] T. Virtanen, Monaural sound source separation by nonnegative matrix factorization with temporal continuity and sparseness criteria, IEEE/ACM Transactions on Audio, Speech and Language Processing, vol. 15, no. 3, pp. 1066-1074, 2007.
[3] J. Le Roux, N. Ono, S. Sagayama, Explicit consistency constraints for STFT spectrograms and their application to phase reconstruction, Proc. SAPA, 2008.
[4] S. Leglaive, U. Şimşekli, A. Liutkus, R. Badeau, G. Richard, Alpha-stable multichannel audio source separation, Proc. IEEE ICASSP, 2017.
[5] P. Magron, R. Badeau, B. David, Phase-dependent anisotropic Gaussian model for audio source separation, Proc. IEEE ICASSP, 2017.
[6] M. Pariente, Implicit and explicit phase modeling in deep learning-based source separation, PhD thesis, Université de Lorraine, 2021.
[7] L. Girin, S. Leglaive, X. Bie, J. Diard, T. Hueber, X. Alameda-Pineda, Dynamical variational autoencoders: A comprehensive review, Foundations and Trends in Machine Learning, vol. 15, no. 1-2, 2021.
[8] P. Magron, T. Virtanen, Complex ISNMF: a phase-aware model for monaural audio source separation, IEEE/ACM Transactions on Audio, Speech and Language Processing, vol. 27, no. 1, pp. 20-31, 2019.

Skills
Master or engineering degree in computer science, data science, signal processing, or machine learning. Professional capacity in English (spoken, read, and written). Some programming experience in Python and in a deep learning framework (e.g., PyTorch). Previous experience with and/or interest in speech and audio processing is a plus.

General information
Theme/Domain: Language, Speech and Audio
Town/city: Villers-lès-Nancy
Inria Center: CRI Nancy - Grand Est
Starting date: 2022-10-01
Duration of contract: 3 years
Deadline to apply: 2022-05-02

Contacts
Inria team: MULTISPEECH
PhD supervisor: Paul Magron / paul.magron@inria.fr

About Inria
Inria is the French national research institute dedicated to digital science and technology. It employs 2,600 people. Its 200 agile project teams, generally run jointly with academic partners, include more than 3,500 scientists and engineers working to meet the challenges of digital technology, often at the interface with other disciplines. The Institute also employs numerous talents in over forty different professions. 900 research support staff contribute to the preparation and development of scientific and entrepreneurial projects that have a worldwide impact.

The keys to success
Upload your complete application. Applications will be assessed on a rolling basis, so it is advised to apply as soon as possible.

Instruction to apply
Defence and security: this position is likely to be situated in a restricted area (ZRR), as defined in Decree No. 2011-1425 relating to the protection of national scientific and technical potential (PPST). Authorisation to enter such an area is granted by the director of the unit, following a favourable Ministerial decision, as defined in the decree of 3 July 2012 relating to the PPST. An unfavourable Ministerial decision in respect of a position situated in a ZRR would result in the cancellation of the appointment.
Recruitment policy: as part of its diversity policy, all Inria positions are accessible to people with disabilities.
Warning: you must enter your e-mail address in order to save your application to Inria. Applications must be submitted online on the Inria website; the processing of applications sent through other channels is not guaranteed.

Benefits package
- Subsidized meals
- Partial reimbursement of public transport costs
- Leave: 7 weeks of annual leave + 10 extra days off (full-time basis) + possibility of exceptional leave (e.g., sick children, moving house)
- Possibility of teleworking (after 6 months of employment) and flexible organization of working hours
- Professional equipment available (videoconferencing, loan of computer equipment, etc.)
- Social, cultural and sports benefits (Association de gestion des œuvres sociales d'Inria)
- Access to vocational training
- Social security coverage

Remuneration
Salary: 1982€ gross/month for the 1st and 2nd years; 2085€ gross/month for the 3rd year. Monthly salary after taxes: around 1594€ for the 1st and 2nd years; 1677€ for the 3rd year.
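For concreteness, the practical consequence of the local Gaussian model [1] discussed above is that each source is recovered from the mixture by Wiener filtering: every time-frequency coefficient of the mixture is weighted by the source's share of the total power. A minimal sketch with synthetic data follows (in an actual system the power spectrograms v1 and v2 would be produced by a DNN; all shapes and values here are arbitrary):

# Minimal sketch of Wiener filtering, the separation rule implied by the
# local Gaussian model. Synthetic data only: v1 and v2 stand in for
# DNN-estimated power spectrograms of the two sources.
import numpy as np

rng = np.random.default_rng(0)
F, T = 257, 100  # frequency bins x time frames

v1 = rng.random((F, T)) + 1e-8  # power spectrogram estimate, source 1
v2 = rng.random((F, T)) + 1e-8  # power spectrogram estimate, source 2

# Complex-valued mixture STFT, simulated here as circular Gaussian noise.
x = rng.normal(size=(F, T)) + 1j * rng.normal(size=(F, T))

# Wiener masks: each source's share of the total power, in [0, 1].
mask1 = v1 / (v1 + v2)
mask2 = v2 / (v1 + v2)

# MMSE estimates under the local Gaussian model: the mixture phase is kept
# (circular symmetry), only the magnitudes are reweighted per bin.
s1_hat = mask1 * x
s2_hat = mask2 * x

assert np.allclose(s1_hat + s2_hat, x)  # the two masks sum to one

The hypotheses (i)-(iii) listed in the assignment are visible here: the estimated sources inherit the mixture phase, and each time-frequency bin is processed independently of its neighbours.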

Back  Top

6-22(2022-04-06) Ph.D. Thesis position and Post-doc position at Loria-INRIA, Nancy, France
Ph.D. Thesis position and Post-doc position at Loria-INRIA, Multispeech team, Nancy (France)
 
Multimodal automatic hate speech detection
 
https://jobs.inria.fr/public/classic/fr/offres/2022-04660
https://team.inria.fr/multispeech/fr/category/job-offers/
 
Hate speech expresses antisocial behavior. In many countries, online hate speech is punishable by law. Manual analysis and moderation of such content are impossible at the scale of the web. An effective solution to this problem would be the automatic detection of hateful comments. Until now, hate speech detection has relied on text documents only. We would like to advance knowledge about hate speech detection by exploring a new type of document: audio documents.

We would like to develop a new methodology to automatically detect hate speech, based on machine learning and deep neural networks, using both text and audio.
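As a rough sketch of what such a text+audio model could look like (the feature dimensions, the late-fusion strategy and the layer sizes below are illustrative assumptions, not the design the thesis will necessarily adopt):

# Hedged sketch of a late-fusion text+audio classifier in PyTorch.
# All dimensions are illustrative; real inputs could be, e.g., sentence
# embeddings for the text and pooled acoustic features for the audio.
import torch
import torch.nn as nn

class LateFusionClassifier(nn.Module):
    def __init__(self, text_dim=768, audio_dim=128, hidden=256):
        super().__init__()
        self.text_branch = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        self.audio_branch = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.head = nn.Linear(2 * hidden, 2)  # two classes: hate / non-hate

    def forward(self, text_feat, audio_feat):
        t = self.text_branch(text_feat)
        a = self.audio_branch(audio_feat)
        # Late fusion: concatenate the two modality embeddings, then classify.
        return self.head(torch.cat([t, a], dim=-1))

model = LateFusionClassifier()
logits = model(torch.randn(4, 768), torch.randn(4, 128))  # batch of 4 examples
print(logits.shape)  # torch.Size([4, 2])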
 
Required skills: The candidate should have theoretical knowledge of, and some practical experience with, deep learning, including good Python skills and familiarity with deep learning libraries such as PyTorch. Knowledge of NLP or signal processing will be helpful.
 
Supervisors:
Irina Illina, Associate Professor, HDR, Université de Lorraine
Dominique Fohr, Senior Researcher, CNRS
https://members.loria.fr/DFohr/    dominique.fohr@loria.fr
 
 
MULTISPEECH is a joint research team between the Université de Lorraine, Inria, and CNRS. It is part of department D4 “Natural language and knowledge processing” of LORIA. Its research focuses on speech processing, with particular emphasis on multisource (source separation, robust speech recognition), multilingual (computer-assisted language learning), and multimodal aspects.


Back  Top

6-23(2022-04-07) Fulani-French translator positions (M/F), ELDA, Paris, France
Context
ELDA (Evaluation and Language resources Distribution Agency, www.elda.org) is mainly active in the distribution and production of language resources, as well as in the evaluation of language technologies (machine translation, speech recognition, etc.).

As part of its production activities, ELDA is offering several full-time or part-time Fulani-French translator positions (M/F) for the construction of a translation corpus for the language.

Mission
The work consists in translating from Fulani into French, starting from audio documents and their transcriptions, in order to provide the data needed for the development and evaluation of language technologies for Fulani. The work will follow translation guidelines in which the candidates will be trained.

Desired profile
• Native speaker of Fulani, more precisely of the Mâssina variety
• Prior experience in translating from Fulani into French is desirable
• Excellent command of written French (spelling, grammar, syntax), both for the translation itself and for understanding the translation guidelines
• Good computer skills
• Ability to internalize (translation) rules and to follow them rigorously and consistently
 
Duration
Full-time or half-time, for a minimum duration of 5 months

Salary
Gross monthly salary: 1925€

Applications (CV, cover letter) should be sent to lucille@elda.org



Back  Top

6-24(2022-04-07) Fulani-French transcriber positions (M/F), ELDA, Paris, France

Context
ELDA (Evaluation and Language resources Distribution Agency, www.elda.org) is mainly active in the distribution and production of language resources, as well as in the evaluation of language technologies (machine translation, speech recognition, etc.).

As part of its production activities, ELDA is offering several full-time or part-time Fulani-French transcriber positions (M/F) for the construction of a corpus for the language.

Mission
The work consists in transcribing Fulani audio documents in order to provide the data needed for the development and evaluation of language technologies for Fulani. The work will follow transcription conventions in which the candidates will be trained.

Desired profile
• Native speaker of Fulani, more precisely of the Mâssina variety
• Prior experience in transcription is desirable
• Excellent command of written French (spelling, grammar, syntax) for understanding the transcription conventions
• Good computer skills
• Ability to internalize (transcription) rules and to follow them rigorously and consistently
 
Duration
Full-time or half-time, for a minimum duration of 5 months

Salary
Gross monthly salary: 1604€

Applications (CV, cover letter) should be sent to lucille@elda.org

Back  Top

6-25(2022-04-07) Doctoral contract at the Collegium Musicæ of Sorbonne Université, Paris, France

The Collegium Musicæ of Sorbonne Université is offering a doctoral contract on vocal style:

 Analysis of vocal style through performative synthesis - Project leaders: Christophe d'Alessandro and Céline Chabot-Canet

The purpose of this thesis is to study vocal style through the paradigm of analysis by performative synthesis. The topic combines musicological research on vocal style and the musicology of performance, musical research on singing instruments, and research in computer music on new interfaces for musical expression and real-time vocal synthesizers.

Details can be found here (go to the Collegium Musicæ tab of the page):

https://www.sorbonne-universite.fr/projets-proposes-en-2022-programme-instituts-et-initiatives

Contact:

Christophe d'Alessandro : christophe.dalessandro@sorbonne-universite.fr

Back  Top


