ISCA - International Speech Communication Association



ISCApad #263

Friday, May 15, 2020 by Chris Wellekens

6 Jobs
6-1(2019-11-03) Research Engineer, IRIT, Toulouse, France

Within the framework of the ALAIA joint laboratory, IRIT (SAMoVA team, https://www.irit.fr/SAMOVA/site/) is recruiting a research engineer on a fixed-term contract to join its research team, work in the field of AI applied to foreign language learning, and collaborate with the company Archean Technologie (http://www.archean.tech/archean-labs-en.html).

Position: Research Engineer
Duration: 12 to 18 months
Starting date: possible from 1 December 2019
Field: speech processing, machine learning, automatic pronunciation analysis
Location: Institut de Recherche en Informatique de Toulouse (Université Paul Sabatier) - SAMoVA team
Profile: PhD in computer science, machine learning or audio processing
Contact: Isabelle Ferrané (isabelle.ferrane@irit.fr)
Application material: CV, thesis abstract, cover letter, references/contacts
Full offer: https://www.irit.fr/SAMOVA/site/assets/files/engineer/ALAIA_ResearchEngineerPosition(1).pdf
Salary: according to experience


6-2(2019-11-05) Annotator/Transcriber (M/F) at ZAION, Paris, France

ZAION (https://www.zaion.ai) is a fast-growing innovative company specialised in conversational robot technology: callbots and chatbots incorporating Artificial Intelligence.

ZAION has developed a solution built on more than 20 years of experience in Customer Relations. This technologically disruptive solution has received a very favourable reception internationally, and we already have 12 active clients (GENERALI, MNH, APRIL, CROUS, EUROP ASSISTANCE, PRO BTP, ...).

We are currently among the only companies in the world to offer a solution of this type that is entirely performance-driven. Joining us means taking part in a great adventure within a dynamic team whose ambition is to become the reference on the conversational robot market.

Within our Artificial Intelligence activity, to support its continuous innovations in the automatic identification of sentiments and emotions in telephone conversations, we are recruiting an Annotator/Transcriber (M/F).

Main tasks:

  • annotate exchanges between a customer and their advisor accurately, following the tags explained in a guide,
  • work meticulously from audio and text documents in French,
  • quickly become familiar with a dedicated annotation tool,
  • be acquainted with collaborative work tools,
  • use cultural, linguistic and grammatical knowledge to render with great precision both the conversation between two speakers on a given topic and the segmentation of what they say.

Candidate profile:

  • be a native speaker with impeccable spelling,
  • have a very good command of Mac, Windows or Linux environments; show rigour, attentiveness and discretion.

     Fixed-term contract (full time), based in Paris (75017).

     If interested, please contact Anne le Gentil (HR manager) at alegentil@zaion.ai, attaching a CV to your email.

6-3(2019-11-05) Data Scientist / Machine Learning applied to Audio (M/F) at Zaion, Paris, France

ZAION is a fast-growing innovative company specialised in conversational robot technology: callbots and chatbots incorporating Artificial Intelligence.

ZAION has developed a solution built on more than 20 years of experience in Customer Relations. This technologically disruptive solution has received a very favourable reception internationally, and we already have 18 active clients (GENERALI, MNH, APRIL, CROUS, EUROP ASSISTANCE, PRO BTP, ...).

We are currently among the only companies in the world to offer a solution of this type that is entirely performance-driven. Joining us means taking part in an exciting adventure within an ambitious team aiming to become the reference on the conversational robot market.

As part of its development, ZAION is recruiting a Data Scientist / Machine Learning engineer applied to Audio (M/F). Within the R&D team, your role is strategic for the development and expansion of the company. You will develop a solution that detects emotions in conversations: we want to extend the cognitive abilities of our callbots so that they can detect the emotions of their interlocutors (joy, stress, anger, sadness, ...) and adapt their responses accordingly.

Your main tasks:

- Take part in creating ZAION's R&D unit and, on arrival, lead your first project on emotion recognition in the voice

- Build, adapt and evolve our services for emotion detection in the voice

- Analyse large databases of conversations to extract the emotionally relevant ones

- Build a database of conversations labelled with emotional tags

- Train and evaluate machine learning models for emotion classification

- Deploy your models in production

- Continuously improve the system for emotion detection in the voice

Required qualifications and prior experience:

- At least 2 years of experience as a Data Scientist / Machine Learning engineer applied to audio

- Engineering school or Master's degree in computer science, or a PhD in computer science or mathematics, with solid skills in signal processing (preferably audio)

- Solid theoretical background in machine learning and the relevant mathematical fields (clustering, classification, matrix factorisation, Bayesian inference, deep learning, ...)

- Experience deploying machine learning models in a production environment would be a plus

- Proficiency in one or more of the following: Python, machine learning / deep learning frameworks (PyTorch, TensorFlow, scikit-learn, Keras) and JavaScript

- Mastery of audio signal processing techniques

- Proven experience in labelling large databases (preferably audio) is essential

- Personality: a leader, autonomous, passionate about your work, able to lead a team in project mode

- Fluent English

Please send your application to: alegentil@zaion.ai

 

Kind regards,


Anne le Gentil / HR manager

alegentil@zaion.ai/0662339864

 

https://www.linkedin.com/company/zaion-callbot/


6-4(2019-11-25) Internship offer, INRIA Bordeaux, France

M2 internship offer (computer science / signal processing)



Deep learning for classification between Parkinson's disease and multiple system atrophy from voice signal analysis



Parkinson's disease (PD) and multiple system atrophy (MSA) are neurodegenerative diseases. MSA belongs to the group of atypical parkinsonian disorders. In the early stages of the disease, the symptoms of PD and MSA are very similar, especially for MSA-P, where the parkinsonian syndrome predominates. The differential diagnosis between MSA-P and PD can be very difficult in the early stages, while early diagnostic certainty is important for the patient because of the diverging prognoses. Despite recent efforts, no valid objective marker is currently available to guide the clinician in this differential diagnosis. The need for such markers is therefore very high in the neurology community, particularly given the severity of the MSA prognosis.

It is established that speech disorders, commonly called dysarthria, are an early symptom common to both diseases, with different origins. We are therefore conducting research that uses dysarthria, through digital processing of patients' voice recordings, as a vector to distinguish between PD and MSA-P. We are currently coordinating a research project on this topic with clinical partners, neurologists and ENT specialists, from the university hospitals of Bordeaux and Toulouse. Within this project we have a database of voice recordings of PD and MSA-P patients (and of healthy subjects).

The goal of this internship is to explore recent deep learning techniques to perform the classification between PD and MSA-P. The first step will consist in implementing a baseline system using standard tools and following the methodology described in [1]. The latter addresses the classification between PD patients and healthy subjects and uses spectrogram 'chunks' as input to a convolutional neural network (CNN). This methodology will be applied to the PD vs MSA-P task using our database. The CNN will be implemented with Keras-TensorFlow (https://www.tensorflow.org/guide/keras). Feature extraction from the voice signal will be performed with Matlab and the Praat software (http://www.fon.hum.uva.nl/praat/). This step will allow the intern to assimilate the basic building blocks of deep learning and pathological voice analysis.
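For illustration only, a minimal sketch of the kind of baseline described above (spectrogram chunks fed to a small CNN for binary PD vs MSA-P classification) is given below in Keras-TensorFlow; the chunk size, layer sizes and training settings are assumptions made for the example, not the specification of [1] or of the internship system.

# Minimal, illustrative CNN baseline for binary classification of
# spectrogram chunks (PD vs MSA-P). Shapes and hyper-parameters are
# placeholders, not those of [1].
import numpy as np
from tensorflow.keras import layers, models

N_MELS, N_FRAMES = 128, 126        # assumed chunk size (mel bins x frames)

def build_baseline_cnn(input_shape=(N_MELS, N_FRAMES, 1)):
    model = models.Sequential([
        layers.Conv2D(16, (3, 3), activation="relu", input_shape=input_shape),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(32, (3, 3), activation="relu"),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),   # 0 = PD, 1 = MSA-P
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

if __name__ == "__main__":
    # Random stand-ins for real spectrogram chunks and their labels.
    x = np.random.rand(32, N_MELS, N_FRAMES, 1).astype("float32")
    y = np.random.randint(0, 2, size=(32,))
    build_baseline_cnn().fit(x, y, epochs=1, batch_size=8)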

 

The second step of the internship will consist in developing a deep neural network (DNN) that takes as input acoustic representations dedicated to the PD vs MSA-P task and developed by our team. This will involve:

  • building the right dataset

  • defining the right class of DNN to use

  • building the right DNN architecture

  • defining the right objective function to optimise

  • analysing and comparing the classification performance

This step will require a deeper understanding of the theoretical and algorithmic aspects of deep learning.

 

Prerequisites: A good knowledge of standard machine learning techniques and of their underlying concepts is required. A good level of Python programming is also required. Knowledge of signal/image processing and/or deep learning would be an advantage. A test will be carried out to check these prerequisites.



Internship supervisor: Khalid Daoudi (khalid.daoudi@inria.fr)

Location: GeoStat team (https://geostat.bordeaux.inria.fr)

INRIA Bordeaux Sud-Ouest (https://www.inria.fr/centre/bordeaux)

Duration: 4 to 6 months, starting from February 2020

Remuneration: standard internship gratification (~580 euros/month)



Candidates should send a detailed CV as well as the name and contact details of at least one reference to khalid.daoudi@inria.fr.



The internship may lead to a PhD offer.

 

[1] J. C. Vásquez-Correa, J. R. Orozco-Arroyave, and E. Nöth, 'Convolutional neural network to model articulation impairments in patients with Parkinson's disease', in Proceedings of INTERSPEECH 2017.


6-5(2019-11-15) 13 PhD studentships at UKRI Centre for Doctoral Training (CDT), University of Sheffield, UK

UKRI Centre for Doctoral Training (CDT) in Speech and Language Technologies (SLT) and their Applications 

 

Department of Computer Science

Faculty of Engineering 

University of Sheffield

 

Fully-funded 4-year PhD studentships for research in Speech and Language Technologies (SLT) and their Applications

** Apply now for September 2020 intake. Up to 13 studentships available **

Deadline for applications: 31 January 2020. 

What makes the SLT CDT different:

  • Unique Doctor of Philosophy (PhD) with Integrated Postgraduate Diploma (PGDip) in SLT Leadership. 

  • Bespoke cohort-based training programme running over the entire four years providing the necessary skills for academic and industrial leadership in the field, based on elements covering core SLT skills, research software engineering (RSE), ethics, innovation, entrepreneurship, management, and societal responsibility.  

  • The centre is a world-leading hub for training scientists and engineers in SLT - two core areas within artificial intelligence (AI) which are experiencing unprecedented growth and will continue to do so over the next decade.

  • Setting that fosters interdisciplinary approaches, innovation and engagement with real world users and awareness of the social and ethical consequences of work in this area.

 

The benefits:

  • Four-year fully-funded studentship covering all fees and an enhanced stipend (£17,000 pa)

  • Generous personal allowance for research-related travel, conference attendance, specialist equipment, etc.

  • A full-time PhD with integrated PGDip incorporating 6 months of foundational SLT training prior to starting your research project 

  • Supervision from a team of over 20 internationally leading SLT researchers, covering all core areas of modern SLT research, and a broader pool of over 50 academics in cognate disciplines with interests in SLTs and their application

  • Every PhD project underpinned by a real-world application, directly supported by one of over 30 industry partners. Partners include Google, Amazon, Microsoft, Nuance, NHS Digital and many more

  • A dedicated CDT workspace within a collaborative and inclusive research environment hosted by the Department of Computer Science

  • Work and live in Sheffield - a cultural centre on the edge of the Peak District National Park which is in the top 10 most affordable and safest UK university cities.

 

About you:

We are looking for students from a wide range of backgrounds interested in Speech and Language Technologies. 

  • High-quality (ideally first class) undergraduate or masters (ideally distinction) degree in a relevant discipline. Suitable backgrounds include (but not limited to) computer science, informatics, engineering, linguistics, speech and language processing, mathematics, cognitive science, AI, physics, or a related discipline. 

  • Regardless of background, you must be able to demonstrate mathematical aptitude (minimally to A-Level standard or equivalent) and experience of programming.

  • We particularly encourage applications from groups that are underrepresented in technology.

  • Candidates must satisfy the UKRI funding eligibility criteria. Students must have settled status in the UK and have been 'ordinarily resident' in the UK for at least 3 years prior to the start of the studentship. Full details of eligibility criteria can be found on our website.

 

Applying:

Applications are now sought for the September 2020 intake. Up to 13 studentships available.

 

We operate a staged admissions process, with application deadlines throughout the year. 

The first deadline for applications is 31 January 2020. The second deadline is 31 May 2020. 

Applications will be reviewed within 4 weeks of each deadline and short-listed applicants will be invited to interview. Interviews will be held in Sheffield.

In some cases, because of the high volume of applications we receive, we may need more time to assess your application. If this is the case, we will let you know if we intend to do this.

We may be able to consider applications received after 31 May 2020 if places are still available. Equally, all places may be allocated after the first deadline therefore we encourage you to apply early.

 

See our website for full details and guidance on how to apply: slt-cdt.ac.uk 

For an informal discussion about your application please contact us by email at: sltcdt-enquiries@sheffield.ac.uk

 

By replying to this email or contacting sltcdt-enquiries@sheffield.ac.uk you consent to being contacted by the University of Sheffield in relation to the CDT. You are free to withdraw your permission in writing at any time.


6-6(2019-11-21) Scholarships in French Studies (MA and PhD) at Western University, Canada

 


Scholarships in French Studies (MA and PhD) at Western University



The Department of French Studies at Western University (London, Canada) is now accepting applications for admission to its Master's and PhD programs in linguistics and literature for the 2020-2021 academic year. Western University is recognised as one of Ontario's major research universities, and the Department of French Studies has been actively contributing to this reputation for more than 50 years.



The faculty and the graduate students form a diverse international community. We offer the possibility of carrying out a research program in formal linguistics (syntax, morphology, phonology and semantics) as well as in sociolinguistics. We also offer training in literature covering all centuries and all areas of French and Francophone literature, fields in which our students conduct their research.



You can find information about faculty research interests here: https://www.uwo.ca/french/people/faculty/index.html.



The list of theses and dissertations completed since 2003 is available here: https://www.uwo.ca/french/graduate/thesis/index.html.



Deadline for the first call, giving access to funding from September 2020: 1 February 2020.



Canadian and international candidates admitted to the PhD program receive a four-year scholarship covering tuition fees, as well as an annual teaching assistantship worth at least $13,000. The same funding is offered for one year to Canadian students admitted to the MA program. International students admitted to the MA program receive a lump sum of $3,000 for the duration of the program.



In addition to graduate scholarships, the Department of French Studies offers students who maintain a strong academic record financial support for research travel and conference participation, as well as the possibility of replacing the teaching assistantship with a research fellowship of equivalent value. Several students in our PhD program also benefit from a cotutelle arrangement with a French university.



For more information about the financial support offered by our institution, please contact the Department of French Studies directly or see: http://www.uwo.ca/french/graduate/finances/index.html.



We also offer an excellent training program for teaching assistants as well as several professional development activities.



Graduate program director: François Poiré (fpoire@uwo.ca)

Graduate program assistant: Chrisanthi Ballas (frgrpr@uwo.ca)

Contact: http://www.uwo.ca/french/graduate/programs/index.html



Reference URL:

http://www.uwo.ca/french/graduate




6-7(2019-11-22) Master R2 Internship, Loria-Inria, Nancy, France

Master R2 Internship in Natural Language Processing: weakly supervised learning for hate speech detection

Supervisors: Irina Illina, MdC, Dominique Fohr, CR CNRS

Team: Multispeech, LORIA-INRIA

Contact: illina@loria.fr, dominique.fohr@loria.fr

Duration: 5-6 months

Deadline to apply: March 1st, 2020

Required skills: background in statistics, natural language processing and programming skills (Perl, Python). Candidates should email a detailed CV with diplomas.

Motivations and context

Recent years have seen a tremendous development of Internet and social networks. Unfortunately, the dark side of this growth is an increase in hate speech. Only a small percentage of people use the Internet for unhealthy activities such as hate speech. However, the impact of this low percentage of users is extremely damaging.

Hate speech is the subject of different national and international legal frameworks. Manually monitoring and moderating Internet and social media content to identify and remove hate speech is extremely expensive. This internship aims at designing methods for the automatic learning of hate speech detection systems from Internet and social media data. Despite the studies already published on this subject, the results show that the task remains very difficult (Schmidt et al., 2017; Zhang et al., 2018).

In text classification, text documents are usually represented in a so-called vector space and then assigned to predefined classes through supervised machine learning. Each document is represented as a numerical vector, which is computed from the words of the document. How to numerically represent the terms in an appropriate way is a basic problem in text classification tasks and directly affects classification accuracy. Developments in neural networks led to a renewed interest in the field of distributional semantics, more specifically in learning word embeddings (representations of words in a continuous space). Computational efficiency was one big factor that popularized word embeddings. Word embeddings capture syntactic as well as semantic properties of the words (Mikolov et al., 2013). As a result, they outperformed several other word vector representations on different tasks (Baroni et al., 2014).
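As a purely illustrative sketch of the document representation described above, one simple option is to average pre-trained word embeddings into a fixed-size vector that is then fed to a classifier; the embedding table below is a tiny hypothetical stand-in for vectors that would normally be loaded from a fastText or word2vec file.

# Toy sketch: represent a comment as the average of its word embeddings.
# EMBEDDINGS is a hypothetical stand-in for pre-trained vectors.
import numpy as np

EMB_DIM = 300
EMBEDDINGS = {                        # placeholder table, normally ~10^6 entries
    "hate": np.random.rand(EMB_DIM),
    "speech": np.random.rand(EMB_DIM),
}

def document_vector(text):
    tokens = text.lower().split()     # naive tokenisation, for the example only
    vectors = [EMBEDDINGS[t] for t in tokens if t in EMBEDDINGS]
    if not vectors:                   # fully out-of-vocabulary document
        return np.zeros(EMB_DIM)
    return np.mean(vectors, axis=0)   # fixed-size vector fed to a classifier

print(document_vector("hate speech").shape)   # (300,)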

Our methodology for hate speech detection is related to recent approaches to text classification with neural networks and word embeddings. In this context, fully connected feed-forward networks, Convolutional Neural Networks (CNN) and also Recurrent/Recursive Neural Networks (RNN) have been applied. On the one hand, the approaches based on CNN and RNN capture rich compositional information and have outperformed the state-of-the-art results in text classification; on the other hand, they are computationally intensive and require a huge corpus of training data.

To train these DNN hate speech detection systems it is necessary to have a very large corpus of training data. This training data must contain several thousand social media comments, and each comment should be labeled as hate or not hate. It is easy to automatically collect social media and Internet comments. However, it is time-consuming and very costly to label a huge corpus. Of course, for several hundred comments this work can be performed manually by human annotators, but it is not feasible for a huge corpus of comments. In this case weakly supervised learning can be used: the idea is to train a deep neural network with a limited amount of labelled data.

The goal of this master internship is to develop a methodology to weakly supervised learning of a hate speech detection system using social network data (Twitter, YouTube, etc.).

Objectives

In our Multispeech team, we developed a baseline system for automatic hate speech detection. This system is based on fastText and BERT embeddings (Bojanowski et al., 2017; Devlin et al., 2018) and on the CNN/RNN methodology. During this internship, the master student will work on this system in the following directions:

  • Study of the state-of-the-art approaches in the field of weakly supervised learning;
  • Implementation of a baseline method of weakly supervised learning for our system;
  • Development of a new methodology for weakly supervised learning. Two cases will be studied. In the first case, we train the hate speech detection system using a small labeled corpus; then we proceed incrementally: we use this first system to label more data, we retrain the system and use it to label new data, and so on (see the sketch after this list). In the second case, we consider learning with noisy labels (labels that may be incorrect or given by several annotators who do not agree).
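For illustration only, a minimal self-training loop in the spirit of the first (incremental) case could look as follows; a simple linear classifier stands in for the CNN/RNN system, and the confidence threshold and number of rounds are arbitrary assumptions.

# Minimal self-training sketch: train on a small labeled set, pseudo-label
# the most confident unlabeled comments, retrain, and repeat.
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(x_labeled, y_labeled, x_unlabeled, n_rounds=5, confidence=0.95):
    x, y, pool = x_labeled.copy(), y_labeled.copy(), x_unlabeled.copy()
    clf = LogisticRegression(max_iter=1000)
    for _ in range(n_rounds):
        clf.fit(x, y)
        if len(pool) == 0:
            break
        proba = clf.predict_proba(pool)
        keep = proba.max(axis=1) >= confidence   # only confident pseudo-labels
        if not keep.any():
            break
        x = np.vstack([x, pool[keep]])
        y = np.concatenate([y, proba[keep].argmax(axis=1)])
        pool = pool[~keep]                       # shrink the unlabeled pool
    return clf

# Toy usage with random document vectors in place of real comment features.
rng = np.random.default_rng(0)
clf = self_train(rng.random((40, 300)), rng.integers(0, 2, 40),
                 rng.random((200, 300)))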

References

Baroni, M., Dinu, G., and Kruszewski, G. 'Don't count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors'. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, Volume 1, pages 238-247, 2014.

Bojanowski, P., Grave, E., Joulin, A., and Mikolov, T. 'Enriching word vectors with subword information'. Transactions of the Association for Computational Linguistics, 5:135-146, 2017.

Dai, A. M. and Le, Q. V. 'Semi-supervised sequence learning'. In Cortes, C., Lawrence, N. D., Lee, D. D., Sugiyama, M., and Garnett, R., editors, Advances in Neural Information Processing Systems 28, pages 3061-3069. Curran Associates, Inc, 2015.

Devlin, J., Chang, M.-W., Lee, K., Toutanova, K. 'BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding', arXiv:1810.04805v1, 2018.

Mikolov, T., Sutskever, I., Chen, K., Corrado, G. S., and Dean, J. 'Distributed representations of words and phrases and their compositionality'. In Advances in Neural Information Processing Systems, 26, pages 3111-3119. Curran Associates, Inc, 2013b.

Schmidt, A., Wiegand, M. 'A Survey on Hate Speech Detection using Natural Language Processing', Workshop on Natural Language Processing for Social Media, 2017.

Zhang, Z., Luo, L. 'Hate speech detection: a solved problem? The Challenging Case of Long Tail on Twitter'. arxiv.org/pdf/1803.03662, 2018.

 


6-8(2019-11-25) Annotator/Transcriber, ZAION, Paris, France

ZAION (https://www.zaion.ai) is a fast-growing innovative company specialised in conversational robot technology: callbots and chatbots incorporating Artificial Intelligence.

ZAION has developed a solution built on more than 20 years of experience in Customer Relations. This technologically disruptive solution has received a very favourable reception internationally, and we already have 12 active clients (GENERALI, MNH, APRIL, CROUS, EUROP ASSISTANCE, PRO BTP, ...).

We are currently among the only companies in the world to offer a solution of this type that is entirely performance-driven. Joining us means taking part in a great adventure within a dynamic team whose ambition is to become the reference on the conversational robot market.

Within our Artificial Intelligence activity, to support its continuous innovations in the automatic identification of sentiments and emotions in telephone conversations, we are recruiting an Annotator/Transcriber (M/F).

Main tasks:

  • annotate exchanges between a customer and their advisor accurately, following the tags explained in a guide,
  • work meticulously from audio and text documents in French,
  • quickly become familiar with a dedicated annotation tool,
  • be acquainted with collaborative work tools,
  • use cultural, linguistic and grammatical knowledge to render with great precision both the conversation between two speakers on a given topic and the segmentation of what they say.

    Candidate profile:

  • be a native speaker with impeccable spelling,
  • have a very good command of Mac, Windows or Linux environments; show rigour, attentiveness and discretion.

     Fixed-term contract (full or part time), based in Paris (75017).

     If interested, please contact Anne le Gentil (HR manager) at alegentil@zaion.ai, attaching a CV to your email.

6-9(2019-12-02) Two faculty positions, Université Paris-Saclay, France

Two faculty positions (one full professor and one associate professor, 'maître de conférences') will be opened for competition by Université Paris-Saclay in Section 27 during the 2020 campaign, with profiles in language processing, speech being the priority, and research to be carried out at LIMSI.

The two profiles are detailed here:

https://www.limsi.fr/fr/limsi-emplois/offres-de-postes-chercheurs-et-enseignants-chercheurs

Do not hesitate to get in touch if one of these positions interests you (dir@limsi.fr), and to let people around you know about these openings.


6-10(2019-12-03) Ph studentships, University of Glasgow, UK

The School of Computing Science at the University of Glasgow is offering studentships and excellence bursaries for PhD study. The following sources of funding are available:

 

* EPSRC DTA awards: open to UK or EU applicants who have lived in the UK for at least 3 years (see https://epsrc.ukri.org/skills/students/help/eligibility/) - covers fees and living expenses

* College of Science and Engineering Scholarship: open to all applicants (UK, EU and International) - covers fees and living expenses

* Centre for Doctoral Training in Socially Intelligent Artificial Agents: open to UK or EU applicants who have lived in the UK for at least 3 years through a national competition – see https://socialcdt.org

* China Scholarship Council Scholarship nominations: open to Chinese applicants – covers fees and living expenses

* Excellence Bursaries: full fee discount for UK/EU applicants; partial discount for international applicants

* Further scholarships (contact potential supervisor for details): open to UK or EU applicants

 

Whilst the above funding is open to students in all areas of computing science, applications in the area of Human-Computer Interaction are welcomed. 

 

Please find below a list of available supervisors in HCI and their research areas.

 

Available supervisors and their research topics:  

* Prof Stephen Brewster (http://mig.dcs.gla.ac.uk/): Multimodal Interaction, MR/AR/VR, Haptic feedback. Email: Stephen.Brewster@glasgow.ac.uk

* Prof Matthew Chalmers (https://www.gla.ac.uk/schools/computing/staff/matthewchalmers/): mobile and ubiquitous computing, focusing on ethical systems design and healthcare applications. Email: Matthew.Chalmers@glasgow.ac.uk

* Prof Alessandro Vinciarelli (http://www.dcs.gla.ac.uk/vincia/): Social Signal Processing. Email: Alessandro.Vinciarelli@glasgow.ac.uk
* Dr Mary Ellen Foster (http://www.dcs.gla.ac.uk/~mefoster/): Social Robotics, Conversational Interaction, Natural Language Generation. Email: MaryEllen.Foster@glasgow.ac.uk
* Dr Euan Freeman (http://euanfreeman.co.uk/): Interaction Techniques, Haptics, Gestures, Pervasive Displays. Email: Euan.Freeman@glasgow.ac.uk

* Dr Fani Deligianni (http://fdeligianni.site/): Characterising uncertainty, eye-tracking, EEG, bimanual teleoperations. Email: fadelgr@gmail.com

* Dr Helen C. Purchase (http://www.dcs.gla.ac.uk/~hcp/): Visual Communication, Information Visualisation, Visual Aesthetics. Email: Helen.Purchase@glasgow.ac.uk

* Dr Mohamed Khamis (http://mkhamis.com/): Human-centered Security and Privacy, Eye Tracking and Gaze-based Interaction, Interactive Displays. Email: Mohamed.Khamis@glasgow.ac.uk

 

The closing date for applications is 31 January 2020.  For more information about how to apply, see https://www.gla.ac.uk/schools/computing/postgraduateresearch/prospectivestudents.  This web page includes information about the research proposal, which is required as part of your application.

 

Applicants are strongly encouraged to contact a potential supervisor and discuss an application before the submission deadline.

 


6-11(2019-12-03) Researcher position at LIMSI, Orsay, France

LIMSI is recruiting a researcher on a fixed-term contract (CDD) in natural language processing and machine translation (M/F). Full details of the offer are available here:

https://emploi.cnrs.fr/Offres/CDD/UPR3251-FRAYVO-002/Default.aspx


6-12(2019-12-06) Final-year engineering or Master 2 internship, INA, Bry-sur-Marne, France

Automatic segmentation and detection of conflictual situations in political interviews


Final-year engineering or Master 2 internship - academic year 2019-2020

Keywords: machine learning, diarization, digital humanities, political speech, expressivity

Context

The Institut national de l'audiovisuel (INA) is a public industrial and commercial institution (EPIC) whose main mission is to archive and promote the French audiovisual heritage (radio, television and web media). INA also carries out scientific research, training and production activities.

This internship is part of the OOPAIP project (ontology and tools for the annotation of political interventions), a cross-disciplinary project carried out by INA and the CESSP (Centre européen de sociologie et de science politique) of Université Paris 1 Panthéon-Sorbonne. Its objective is to design new approaches for producing detailed, qualitative and quantitative analyses of politicians' media appearances in France. Part of the project studies the dynamics of conflictual interactions in political interviews and debates, which requires fine-grained descriptions and a large corpus in order to generalise the models. The technological bottlenecks concern the performance of speaker and speaking-style segmentation algorithms. Improving their precision, and adding the detection of overlapped speech, measures of vocal effort and expressive features, will make it possible to streamline the manual annotation work.

Internship objectives

The internship mainly aims at improving the automatic segmentation of political interviews in order to support research work in political science. The corresponding research theme is the detection of conflictual situations. In this context, we will focus in particular on the detection of 'brouhaha' (overlapped speech). At a finer level, we would like to extract descriptors of the speech signal correlated with the level of conflict in the exchanges, based, for example, on the activation level (an intermediate level between the signal and expressivity [Rilliard et al., 2018]) or on vocal effort [Liénard, 2019].

The internship will initially rely on two corpora totalling 30 political interviews finely annotated in speech turns within the OOPAIP project. It will begin with a state of the art of diarization (speaker segmentation and clustering [Broux et al., 2019]) and of overlapped speech detection [Chowdhury et al., 2019]. The next step will be to propose solutions based on recent frameworks to improve the localisation of turn boundaries, in particular when speaker changes are frequent, the limiting case being the brouhaha situation.

The second part of the internship will address a finer measurement of the conflict level of the exchanges, through the search for the most relevant descriptors and the design of learning architectures to model it.

The programming language used for this internship will be Python. The intern will have access to INA's computing resources (servers and clusters), as well as to a powerful desktop machine with two recent-generation GPUs.
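As a purely illustrative sketch related to the vocal-effort descriptors mentioned above, the long-term average spectrum of a speech turn can be computed as follows; the file path and analysis parameters are placeholders, and this simplified Welch estimate only stands in for the one-third-octave analysis of [Liénard, 2019].

# Illustrative sketch: long-term average spectrum (LTAS) of a speech turn,
# a simple descriptor related to vocal effort. Parameters are placeholders.
import numpy as np
import librosa
from scipy.signal import welch

def long_term_average_spectrum(wav_path, sr=16000, win_s=0.025):
    y, sr = librosa.load(wav_path, sr=sr, mono=True)   # load and resample
    freqs, psd = welch(y, fs=sr, nperseg=int(win_s * sr))
    return freqs, 10.0 * np.log10(psd + 1e-12)         # LTAS in dB

# freqs, ltas_db = long_term_average_spectrum("interview_turn.wav")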

Dissemination of the work

Different strategies for disseminating the intern's work will be considered, depending on the maturity of the work carried out:

- Release of the analysis tools under an open-source licence via INA's GitHub repository: https://github.com/ina-foss

- Writing of scientific publications

Internship conditions

The internship will take place over a period of 4 to 6 months within INA's Research department, on the Bry 2 site, 18 Avenue des frères Lumière, 94360 Bry-sur-Marne. The intern will be supervised by Marc Evrard (mevrard@ina.fr).

Gratification: about 550 euros per month.

Candidate profile

- Final-year student of a five-year (bac+5) degree in computer science and AI.

- Proficiency in Python and experience with ML libraries (scikit-learn, TensorFlow, PyTorch).

- Strong interest in the humanities and social sciences, and in digital humanities and political science in particular.

- Ability to carry out a literature review based on scientific articles written in English.

Bibliography

Broux, P. A., Desnous, F., Larcher, A., Petitrenaud, S., Carrive, J., & Meignier, S. (2018). 'S4D: Speaker Diarization Toolkit in Python'. In Interspeech 2018.

Chowdhury, S. A., Stepanov, E. A., Danieli, M., & Riccardi, G. (2019). 'Automatic classification of speech overlaps: Feature representation and algorithms'. Computer Speech & Language, vol. 55, pp. 145-167.

Liénard, J.-S. (2019). 'Quantifying vocal effort from the shape of the one-third octave long-term-average spectrum of speech'. J. Acoust. Soc. Am., 146(4), October 2019.

Rilliard, A., d'Alessandro, C., & Evrard, M. (2018). 'Paradigmatic variation of vowels in expressive speech: Acoustic description and dimensional analysis'. The Journal of the Acoustical Society of America, 143(1), 109-122.


6-13(2019-12-07) Internship at IRCAM, Paris, France

Deep Disentanglement of Speaker Identity and Phonetic Content for Voice Conversion

Dates: 01/02/2020 to 30/06/2020

Laboratory: STMS Lab (IRCAM / CNRS / Sorbonne Université)

Location: IRCAM - Sound Analysis and Synthesis team

Supervisors: Nicolas Obin, Axel Roebel

Contact: Nicolas.Obin@ircam.fr, Axel.Roebel@ircam.fr

Context:

Voice identity conversion consists in modifying the characteristics of a 'source' voice in order to reproduce the characteristics of a 'target' voice to be imitated, from a collection of examples of the target voice. The voice identity conversion task has become very popular in recent years with the emergence of 'deep fakes', with the objective of transposing to speech the successes achieved in the image domain. Current lines of research thus rely on neural architectures such as sequence-to-sequence models, generative adversarial networks (GANs, [Goodfellow et al., 2014]) and their variants for learning from unpaired data (Cycle-GAN [Kaneko and Kameoka, 2017] or AttGAN [He et al., 2019]). The major challenges of identity conversion include the ability to learn identity transformations efficiently from small databases (a few minutes of speech) and to separate the factors of variability of speech so as to modify only the identity of a speaker without modifying or degrading the linguistic and expressive content of the voice.

Objectives:

The work carried out during this internship will concern the extension of the neural voice identity conversion system currently developed within the ANR project TheVoice (https://www.ircam.fr/projects/pages/thevoice/). The main focus of the internship will be to integrate the information about the linguistic content efficiently into the existing neural conversion system. This objective involves the following tasks:

- Development of a representation of the phonetic information (e.g. in the form of Phonetic PosteriorGrams [Sun et al., 2016]) and its integration into the current conversion system (see the sketch after this list).

- Application and further development of techniques for 'disentangling' speaker identity and phonetic content when learning the conversion [Mathieu et al., 2016; Hamidreza et al., 2019].

- Evaluation of the results obtained by comparison with state-of-the-art conversion systems, on reference databases such as VCC2018 or LibriSpeech.

The problems addressed during the internship will be selected at its beginning, after an orientation phase and a literature review. The solutions produced during the internship will be integrated into IRCAM's voice identity conversion system, with possible industrial and professional exploitation. For example, the identity conversion system developed at IRCAM has been used in professional production projects to recreate the voices of historical figures: Marshal Pétain in the documentary 'Juger Pétain' in 2012, and Louis de Funès in the film 'Pourquoi j'ai pas mangé mon père' by Jamel Debbouze in 2015.

The internship will build on the expertise of the Sound Analysis and Synthesis team of the STMS laboratory (IRCAM/CNRS/Sorbonne Université) in speech signal processing and neural network training, and on extensive experience in voice identity conversion [Villavicencio et al., 2009; Huber, 2015].

Expected skills:

- Command of machine learning, in particular of neural network training;

- Command of digital audio signal processing (time-frequency analysis, parametric analysis of audio signals, etc.);

- Good command of Python programming and of the TensorFlow environment;

- Autonomy, teamwork, productivity, rigour and methodology.

Remuneration:

Gratification according to the law in force, plus social benefits

Application deadline:

20/12/2019

Bibliography:

[Goodfellow et al., 2014] Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio, 'Generative Adversarial Networks', arXiv:1406.2661 [cs, stat], 2014.

[Hamidreza et al., 2019] Seyed Hamidreza Mohammadi, Taehwan Kim, 'One-shot Voice Conversion with Disentangled Representations by Leveraging Phonetic Posteriorgrams', Interspeech 2019.

[He et al., 2019] Z. He, W. Zuo, M. Kan, S. Shan, and X. Chen, 'AttGAN: Facial attribute editing by only changing what you want', IEEE Transactions on Image Processing, vol. 28, no. 11, 2019.

[Huber, 2015] S. Huber, 'Voice Conversion by modelling and transformation of extended voice characteristics', PhD thesis, Université Pierre et Marie Curie (Paris VI), 2015.

[Kaneko and Kameoka, 2017] Takuhiro Kaneko and Hirokazu Kameoka, 'Parallel-Data-Free Voice Conversion Using Cycle-Consistent Adversarial Networks', arXiv:1711.11293 [cs, eess, stat], 2017.

[Mathieu et al., 2016] Michael Mathieu, Junbo Zhao, Pablo Sprechmann, Aditya Ramesh, Yann LeCun, 'Disentangling factors of variation in deep representations using adversarial training', NIPS 2016.

[Sun et al., 2016] Lifa Sun, Kun Li, Hao Wang, Shiyin Kang, and Helen Meng, 'Phonetic posteriorgrams for many-to-one voice conversion without parallel data training', in 2016 IEEE International Conference on Multimedia and Expo (ICME), 2016, pp. 1-6.

[Villavicencio et al., 2009] Villavicencio, F., Röbel, A., and Rodet, X. (2009). 'Applying improved spectral modelling for high quality voice conversion'. In Proc. of IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), pages 4285-4288.


6-14(2019-12-07) Assistant engineer in production, LPL, Aix-en-Provence, France

Job type:

Assistant engineer in production, data processing and surveys, BAP D (data in the humanities and social sciences).

Mission:

Within the experimental platform of the Laboratoire Parole et Langage (LPL), the successful candidate will be in charge of technical coordination, reception and support for experiments, in collaboration with the sector heads (audio-video, articulography/physiology, neurophysiology/eye-tracking).

Activities:

Welcome participants and collect their personal information in compliance with current legislation (GDPR)
Recruit participants for experiments
Liaise with external researchers
Monitor and renew consumables
Manage the booking of experimental rooms and equipment, establish the session schedule, arrange appointments
Support the setting up of the experimental apparatus together with the sector head
Fill in the laboratory notebooks
Run the ongoing campaigns for recruiting volunteers
Contribute to writing methodological notes on the operations carried out
Keep disciplinary and methodological knowledge up to date and compile the bibliography devoted to a field of study

Skills:

Command of experimental techniques, methods and protocols in the humanities and social sciences
Knowledge of measurement and statistics
Collaboration with researchers in designing, setting up and running experiments
Teamwork with the other technical staff working on the platform
Strong interpersonal skills when interacting with investigators of varied expertise (from Master's students to foreign researchers, including the laboratory's researchers and PhD students) and with all categories of participants, from school-age children to adults and elderly people, some of whom may present various pathologies
Knowledge of and compliance with the legislation on research involving human subjects, as well as health and safety rules
A good command of spoken English (level B2 of the Common European Framework of Reference for Languages) is essential
Long-term archiving of research data (basic knowledge)



The call is open until 17 January, but applications will be reviewed on a rolling basis. Please feel free to circulate this information to anyone potentially interested.

 

 

 

 


6-15(2019-12-07) 1-year post-doc/engineer position at LIA, Avignon, France

 1 year post-doc/engineer position at LIA, Avignon France, in the Vocal Interaction Group

Multimodal man-robot interface for social spaces

keywords: AI, ML, DNN, RL, NLP, dialogue, vision, robotics

Starting job date (desired): March 2020.
==================================================================
## Work description

###Project Summary

Automation and optimisation of *verbal interactions of a socially-competent robot*,
guided by its *multimodal perceptions*

Facing a steady increase in the ageing population and the prevalence of chronic diseases,
social robots are promising tools to include in the health care system. Yet extant
assistive robots are not well suited to such contexts, as their communication abilities
cannot handle social spaces (several metres and groups of people) but only face-to-face
individual interactions in quiet environments. In order to overcome these limitations,
ultimately aiming at natural man-robot interaction, the objectives of the work are
manifold.

First and foremost we intend to leverage the rich information available with audio and
visual flows of data coming from humans to extract verbal and non-verbal features. These
features will be used to enhance the robot's decision-making ability such that it can
smoothly take speech turns and switch from interaction with a group of people to
face-to-face dialogue and back. Secondly, online and continual learning of the advanced
system will be investigated.

Outcomes of the project will be implemented onto a commercially available social robot
(most likely a Pepper) and validated with several in-situ use cases. A large-scale data
collection will complement in-situ tests to fuel further research. Essential
competencies to address our overall objectives lie in dialogue systems / NLP, yet
knowledge of vision and robotics would also be necessary. In any case, a good command
of deep learning techniques and tools is mandatory (including reinforcement learning for
dialogue strategy training).

### Requirements

- Master or PhD in Computer Science, Machine Learning, Computational Linguistics,
Mathematics, Engineering or related fields
- Expertise in NLP / Dialog systems. Strong knowledge of current NLP / Interactive /
Speech techniques is expected. Previous experience with dialogue and interaction and/or
vision data is a strong plus.
- Knowledge in Vision and/or Robotics are plusses.
- Strong programming skills, Python/C++ programming of DNN models (preferably with PyTorch)
- Expertise in Unix environments
- Good spoken and written command of English is required. *French is optional.*
- Good writing skills, as evidenced through publications at top venues (e.g., ACL, EMNLP,
SigDial, etc.), are a plus for the post-doc position.

## Place

Bordered by the left bank of the Rhône, Avignon is one of the most beautiful cities in
Provence, for some time the capital of Christendom in the Middle Ages. The important remains
of a past rich in history give the city its unique atmosphere: dozens of churches and
chapels, the 'Palais des Papes' (palace of the popes, the most important gothic palace in
Europe), the Saint-Bénézet bridge, called the 'pont d'Avignon', of worldwide fame
through its commemoration in song, the ramparts that still encircle the entire
city, and ten museums ranging from ancient times to contemporary art.

Of the city's 94,787 inhabitants, about 12,000 live in the ancient town centre
surrounded by its medieval ramparts. Avignon is not only the birthplace of the most
prestigious festival of contemporary theatre, European Capital of Culture in 2000, but
also the largest city and capital of the département of Vaucluse. The region offers a
high quality of urban life at comparatively still modest costs. In addition to this, the
region of Avignon also offers the opportunity to visit numerous monuments and natural
beauty sites easily accessible in a very short time: Avignon is the ideal destination for
visiting Provence.

LIA is the computer science lab of Avignon University: http://lia.univ-avignon.fr.

## Conditions

Net monthly salary: 1500-2100 € (depending on the candidate's experience). Basic
healthcare coverage included (https://en.wikipedia.org/wiki/Health_care_in_France).

The position carries no direct teaching load, but if desired, teaching BSc or MSc level
courses is a possibility (paid extra hours), as is supervision of student dissertation
projects.

Initial employment is 12 months; extension is possible. For an engineer, a shift to a PhD
position is possible.

## Applications

No deadline: applications are possible until the position is filled.

To apply, send the following documents *as a single PDF* to
fabrice.lefevre@univ-avignon.fr:

* Statement of research interests that motivates your application
* CV, including the list of publications if any
* Scans of transcripts and academic degree certificates
* MSc/PhD dissertation and/or any other writing samples
* Coding samples or links to your contributions to public code repositories, if any
* Names, affiliations, and contact details of up to three people who can provide
reference letters for you


6-16(2019-12-05) Postdoctoral Fellowship, University of Connecticut Health, Farmington, CT, USA

Postdoctoral Fellowship, Speech Processing in Noise

University of Connecticut Health

Location: Farmington, CT

Start Date: January 2020, or thereafter

Duration: Initially 1 year with potential for extension

Salary: Depends on experience, based on NIH range; benefits include health care, retirement contributions, and paid leave for vacation, personal days, holidays and sickness.

Application Process: Please send your resumé, a one-page cover letter that describes your research interests and experience, a list of publications (copies of most relevant - optional), and contact information for three references to Dr Insoo Kim (ikim@uchc.edu).

A Postdoctoral Fellowship is available in the Division of Occupational Medicine, Department of Medicine, at the University of Connecticut Health to investigate algorithms for improving speech intelligibility in environmental noise. The work will involve simulating the noise of machines from known frequency spectra and creating speech-in-noise test files using MATLAB for replaying to subjects in listening tests. The test files may be processed electronically to improve intelligibility before the psychoacoustic testing. The position requires knowledge of, and practical experience with, speech or audio digital signal processing; proficiency with MATLAB and Simulink simulations; and familiarity with psychoacoustic testing of speech intelligibility in noise, and with the development of embedded systems or digital signal processors.
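Although the posting itself specifies MATLAB, the following Python sketch illustrates, as an assumption for this listing only, the general idea of shaping white noise to a known band spectrum and mixing it with speech at a chosen SNR; the band edges, gains and the random 'speech' signal are placeholders.

# Illustrative sketch: spectrally shape white noise and mix it with speech
# at a target SNR. All band edges, gains and signals are placeholders.
import numpy as np

def shaped_noise(n_samples, sr, band_edges_hz, band_gains_db):
    noise = np.random.randn(n_samples)
    spec = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / sr)
    gain = np.ones_like(freqs)
    for (lo, hi), g_db in zip(band_edges_hz, band_gains_db):
        gain[(freqs >= lo) & (freqs < hi)] = 10 ** (g_db / 20.0)
    return np.fft.irfft(spec * gain, n=n_samples)

def mix_at_snr(speech, noise, snr_db):
    p_speech, p_noise = np.mean(speech ** 2), np.mean(noise ** 2)
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10.0)))
    return speech + scale * noise          # noisy test signal

sr = 16000
speech = np.random.randn(sr)               # stand-in for one second of speech
machine_noise = shaped_noise(sr, sr, [(0, 500), (500, 2000), (2000, 8000)],
                             [0.0, -6.0, -12.0])
noisy = mix_at_snr(speech, machine_noise, snr_db=0.0)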

The Fellow will participate in on-going research projects involving speech processing. He/she will be responsible for implementing the algorithms for improving speech communication in noise, conducting all psychoacoustic tests used to establish proof-of-concept, and carrying out data analysis and interpretation. The Fellow will also have opportunities to supervise graduate and undergraduate students.

Candidates should have good oral and written English communication skills, be capable of independent work as part of a multi-disciplinary team, be able to work on multiple projects at the same time, publish results in academic journals and participate in grant proposal preparation. They should have a Ph.D. degree in Acoustics, Electrical, Computer or Biomedical Engineering, or a related field, with appropriate experience. The initial appointment is for a period of one year with potential for further extension. The review of applications will start immediately and will continue until the position is filled.


6-17(2019-12-08) PhD studentship, Utrecht University, The Netherlands

The Social and Affective Computing group at the Utrecht University Department of Information and Computing Sciences is looking for a PhD candidate to conduct research on explainable and accountable affective computing for mental healthcare scenarios. The five-year position includes 70% research time and 30% teaching time. The post presents an excellent opportunity to develop an academic profile as a competent researcher and able teacher.

Affective computing has great potential for clinician support systems, but it needs to produce insightful, explainable, and accountable results. Cross-corpus and cross-task generalization of approaches, as well as efficient and effective ways of leveraging multimodality are some of the main challenges in the field. Furthermore, data are scarce, and class-imbalance is expected. While addressing these issues, precision needs to be complemented by interpretability. Potential investigation areas include for example depression, bipolar disorder, and dementia.

The PhD candidate is expected to bridge the research efforts in cross-corpus, cross-task multimodal affect recognition with explainable/accountable machine learning for the aim of efficient, effective and interpretable predictions on a data-scarce and sensitive target problem. The candidate is also expected to be involved in teaching activities within the department of Information and Computing Sciences. Teaching activities may include supporting senior teaching staff, conducting tutorials, and supervising student projects and theses. These activities will contribute to the development of the candidate's didactic skills.

We are looking for candidates with:

  • a Master's degree in computer science/engineering, mathematics, and/or fields related to the project focus;
  • interest or experience with processing of audio/acoustics, vision/video or natural language;
  • interest or experience with machine learning, affective computing, information fusion, multimodal interaction;
  • demonstrable coding skills in high-level scripting languages such as MATLAB, python or R;
  • excellent English oral and writing skills.

The ideal candidate should express a strong interest in research in affective computing and teaching within the ICS department. The Department finds gender balance specifically and diversity in a broader sense very important; therefore women are especially encouraged to apply. Applicants are encouraged to mention any personal circumstances that need to be taken into account in their evaluation, for example parental leave or military service.

 

We offer an exciting opportunity to contribute to an ambitious and international education programme with highly motivated students and to conduct your own research project at a renowned research university. You will receive appropriate training, personal supervision, and guidance for both your research and teaching tasks, which will provide an excellent start to an academic career.

The candidate is offered a position for five years (1.0 FTE). The gross salary starts at €2,325 and increases to €2,972 (scale P according to the Collective Labour Agreement Dutch Universities) per month for full-time employment. Salaries are supplemented with a holiday bonus of 8% and a year-end bonus of 8.3% per year. In addition, Utrecht University offers excellent secondary conditions, including an attractive retirement scheme, (partly paid) parental leave and flexible employment conditions (multiple choice model). More information about working at Utrecht University can be found here.

Application deadline is 01.01.2020.

 Further information and application procedure can be found here.
 
 

Back  Top

6-18(2019-12-09) Postdoc , IRISA, Rennes, France
IRISA (France) is looking for a 30-month postdoctoral researcher on the topic of Natural Language Processing for Kids, starting in Spring 2020.

 

Back  Top

6-19(2019-12-15) PhD grant at the University of Glasgow, Scotland, UK

The School of Computing Science at the University of Glasgow is offering studentships and excellence bursaries for PhD study. The following sources of funding are available:

* EPSRC DTA awards: open to UK or EU applicants who have lived in the UK for at least 3 years (see https://epsrc.ukri.org/skills/students/help/eligibility/) - covers fees and living expenses
* College of Science and Engineering Scholarship: open to all applicants (UK, EU and International) - covers fees and living expenses
* Centre for Doctoral Training in Socially Intelligent Artificial Agents: open to UK or EU applicants who have lived in the UK for at least 3 years through a national competition – see https://socialcdt.org
* China Scholarship Council Scholarship nominations: open to Chinese applicants – covers fees and living expenses
* Excellence Bursaries: full fee discount for UK/EU applicants; partial discount for international applicants
* Further scholarships (contact potential supervisor for details): open to UK or EU applicants

Whilst the above funding is open to students in all areas of computing science, applications in the area of Human-Computer Interaction are welcomed. 

Please find below a list of available supervisors in HCI and their research areas.

Available supervisors and their research topics:  

* Prof Stephen Brewster (http://mig.dcs.gla.ac.uk/): Multimodal Interaction, MR/AR/VR, Haptic feedback. Email: Stephen.Brewster@glasgow.ac.uk
* Prof Matthew Chalmers (https://www.gla.ac.uk/schools/computing/staff/matthewchalmers/): mobile and ubiquitous computing, focusing on ethical systems design and healthcare applications. Email: Matthew.Chalmers@glasgow.ac.uk
* Prof Alessandro Vinciarelli (http://www.dcs.gla.ac.uk/vincia/): Social Signal Processing. Email: Alessandro.Vinciarelli@glasgow.ac.uk
* Dr Mary Ellen Foster (http://www.dcs.gla.ac.uk/~mefoster/): Social Robotics, Conversational Interaction, Natural Language Generation. Email: MaryEllen.Foster@glasgow.ac.uk
* Dr Euan Freeman (http://euanfreeman.co.uk/): Interaction Techniques, Haptics, Gestures, Pervasive Displays. Email: Euan.Freeman@glasgow.ac.uk
* Dr Fani Deligianni (http://fdeligianni.site/): Characterising uncertainty, eye-tracking, EEG, bimanual teleoperations. Email: fadelgr@gmail.com
* Dr Helen C. Purchase (http://www.dcs.gla.ac.uk/~hcp/): Visual Communication, Information Visualisation, Visual Aesthetics. Email: Helen.Purchase@glasgow.ac.uk
* Dr John Williamson (https://www.johnhw.com/): Probabilistic user interfaces, Bayesian interaction, motion correlation interfaces, rich and robust human sensing systems. Email: johnh.williamson@glasgow.ac.uk
* Dr Mohamed Khamis (http://mkhamis.com/): Human-centered Security and Privacy, Eye Tracking and Gaze-based Interaction, Interactive Displays. Email: Mohamed.Khamis@glasgow.ac.uk

The closing date for applications is 31 January 2020.  For more information about how to apply, see https://www.gla.ac.uk/schools/computing/postgraduateresearch/prospectivestudents.  This web page includes information about the research proposal, which is required as part of your application.

Applicants are strongly encouraged to contact a potential supervisor and discuss an application before the submission deadline.

Best regards,
Mohamed Khamis

--
Dr. Mohamed Khamis
Lecturer of Human-centered Security
School of Computing Science
University of Glasgow
Glasgow, G12 8RZ, UK

Tel +44 (0) 141 330 8078
Mohamed.Khamis@glasgow.ac.uk
https://www.gla.ac.uk/schools/computing/staff/mohamedkhamis/
http://mkhamis.com/

Back  Top

6-20(2019-12-19) Postdoc at Bielefeld University, Germany

The Faculty of Linguistics and Literary Studies at Bielefeld University offers a full-time

 

research position (postdoctoral position, E13 TV-L, non-permanent) in phonetics

 

The Faculty of Linguistics and Literary Studies offers a full time post-doctoral position in phonetics for 3 years (German pay scale: E13). 

The Bielefeld phonetics group is well known for its research on phenomena in spontaneous interaction, prosody, multimodal speech and spoken human-machine interaction. Bielefeld campus offers a wide range of options for intra and interdisciplinary networking, and further qualification.

 

Your responsibilities:

- conduct independent research in phonetics, with a visible focus on modeling or speech technology (65%).

- teach 2 classes (3 hours = 4 teaching units/week) per semester in the degree programmes offered by the linguistics department, including the supervision of BA and MA theses and conducting exams (25%).

- organizational tasks that are part of the self-administration of the university (10%).

 

Necessary qualifications:

 

- a Master's degree in a relevant discipline (e.g., phonetics, linguistics, computer science, computational linguistics)

- a doctoral degree in a relevant discipline

- a research focus in phonetics or speech technology

- state-of-the-art knowledge in statistical methods or programming skills

- knowledge in generating and analyzing speech data with state-of-the-art tools

- publications

- teaching experience

- a co-operative and team oriented attitude

- an interest in spontaneous, interactive, potentially multimodal data

 

Preferable qualifications:

 

- experience in the acquisition of third party funding

 

Remuneration

 

Salary will be paid according to Remuneration level 13 of the Wage Agreement for Public Service in the Federal States (TV-L). As stipulated in § 2 (1) sentence 1 of the WissZeitVG (fixed-term employment), the contract will end after three years. In accordance with the provisions of the WissZeitVG and the Agreement on Satisfactory Conditions of Employment, the length of contract may differ in individual cases. The employment is designed to encourage further academic qualification. In principle, this full-time position may be changed into a part-time position, as long as this does not conflict with official needs.

Bielefeld University is particularly committed to equal opportunities and the career development of its employees. It offers attractive internal and external training and further training programmes. Employees have the opportunity to use a variety of health, counselling, and prevention programmes. Bielefeld University places great importance on a work-family balance for all its employees.

Application Procedure

For full consideration, your application should be received either by post (see postal address below) or by email (as a single PDF document) sent to alexandra.kenter@uni-bielefeld.de by January 8th, 2020. Please mark your application with the identification code: wiss19299. To apply, please provide the usual documents (CV including information about your academic education and degrees, professional experience, publications, conference contributions, and further relevant skills and abilities). The application can be written in German or English.

Further information on Bielefeld University can be found on our homepage at www.uni-bielefeld.de. Please note that the possibility of privacy breaches and unauthorized access by third parties cannot be excluded when communicating via unencrypted e-mail. Information on the processing of personal data is available at https://www.uni-bielefeld.de/Universitaet/Aktuelles/Stellenausschreibungen/2019_DS-Hinweise_englisch.pdf.

Postal Address

Bielefeld University, Faculty of Linguistics and Literary Studies, Prof. Dr. Petra Wagner, P.O. Box: 10 01 31, 33501 Bielefeld, Germany

Contact

Alexandra Kenter

0521 106-3662

alexandra.kenter@uni-bielefeld.de

Back  Top

6-21(2019-12-22) Postdoctoral Researcher, University of Toulouse Jean Jaures, France

Postdoctoral Researcher - Psycholinguistics, neurolinguistics, corpus linguistics, clinical linguistics
Full-Time Position, Fixed-term 1 year (with possibility of one year extension)

Application deadline: 05/01/2020
Starting date : 01/02/2020 (flexible)

The Octogone-Lordat Lab (University of Toulouse Jean Jaurès, France : https://octogone.univ-tlse2.fr/) offers a post-doctoral position for 1 year, with a possibility of 1 year extension.

The neuropsycholinguistic study of language processing is the major topic of our lab, focusing on typical language use, language disorders and rehabilitation processes (as in aphasia), at the intersection of linguistics, psycholinguistics and neuroscience.

The post-doc will contribute to the project 'Aphasia, Discourse Analysis and Interactions', funded by the European Regional Development Fund and the Région Occitanie - France. A strong background in linguistics, psycholinguistics or neurolinguistics and cognitive science, as well as methodological skills for data collection in corpus linguistics and clinical linguistics, are required. The post-doc will actively contribute to the development of a new database focusing on typical and atypical language in aphasia. Along with the project supervisors, the post-doc will be involved in all activities related to the project (e.g. IRB approval, GDPR compliance, etc.), including data collection, coding and analyses from various perspectives. Attested experience with empirical and experimental methods (corpus linguistics) is appreciated, as well as a strong research interest in clinical issues. The post-doc will also coordinate the work of trainees and students involved in the project, and contribute significantly to the publication of the findings. The applicant should, at a minimum, have completed a PhD in Linguistics, Neuropsychology, Cognitive Science or related fields, and demonstrate a high proficiency level in French according to the CEFRL. Good skills in spoken and written academic English are also required.

This is a full time position starting in February 2020 (flexible).
Gross annual salary: min. €28,000 to €32,000 (before 15% to 25% taxes and social-security deductions; INM 528 to 564, in accordance with the public sector pay scale)

The application should include a CV, a statement of motivations, a link to the PhD thesis, PhD Viva report (if available), plus 2 scanned letters of recommendation.

Deadline for application :  05/01/2020, to :
Dr. Halima Sahraoui, sahraoui@univ-tlse2.fr
Prof. Barbara Köpke, bkopke@univ-tlse2.fr

For further questions and application submission, please feel free to contact us.

Octogone-Lordat (EA 4156)
https://octogone.univ-tlse2.fr/
Université de Toulouse 2
Maison de la Recherche - E126
5, Allées Antonio Machado
31058 Toulouse Cedex 9
France

About city life in Toulouse :
https://www.toulouse-visit.com/

Back  Top

6-22(2019-12-24) Postdoc proposal, Grenoble, France

Postdoc proposal: Lexicon-Free Spontaneous Speech Recognition using Sequence-to-Sequence Models (December 20, 2019)

1 Postdoc Subject

The goal of the project is to advance the state of the art in spontaneous automatic speech recognition (ASR). Recent advances in ASR show excellent performance on tasks such as read-speech ASR (Librispeech) or TV shows (MGB challenge), but what about spontaneous communicative speech?

This postdoc project will leverage existing transcribed corpora (more than 300 hours) recorded in everyday communication (speech recordings inside a family, in a shop, during an interview, etc.). We will investigate lexicon-free methods based on sequence-to-sequence architectures and analyze the representations learnt by the models.

Research topics (an illustrative sketch follows the list):

- End-to-end ASR models
- Spontaneous speech ASR
- Data augmentation for spontaneous language modelling
- Use of contextualized language models (such as BERT) for ASR re-scoring
- Analyzing representations learnt by ASR systems
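For illustration only, the sketch below shows what a lexicon-free, character-level end-to-end model trained with a CTC objective can look like in PyTorch; the character inventory, feature dimensions and hyper-parameters are assumptions made for this example, not specifications of the project.

```python
# Minimal sketch of a lexicon-free (character-level) end-to-end ASR model.
# Assumptions: 80-dim log-Mel filterbank inputs, a small character inventory,
# and CTC training; all sizes are illustrative, not project specifications.
import torch
import torch.nn as nn

CHARS = list("abcdefghijklmnopqrstuvwxyz' ")   # output inventory, no lexicon
BLANK = 0                                       # CTC blank index

class CharCTCModel(nn.Module):
    def __init__(self, n_mels=80, hidden=256, n_chars=len(CHARS)):
        super().__init__()
        self.encoder = nn.LSTM(n_mels, hidden, num_layers=3,
                               bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * hidden, n_chars + 1)  # +1 for the blank

    def forward(self, feats):                   # feats: (batch, time, n_mels)
        enc, _ = self.encoder(feats)
        return self.proj(enc).log_softmax(-1)   # (batch, time, n_chars + 1)

model = CharCTCModel()
ctc = nn.CTCLoss(blank=BLANK, zero_infinity=True)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# One toy training step on random data, standing in for spontaneous-speech batches.
feats = torch.randn(4, 200, 80)                       # 4 utterances, 200 frames each
targets = torch.randint(1, len(CHARS) + 1, (4, 30))   # character indices (blank excluded)
logp = model(feats).transpose(0, 1)                   # CTC expects (time, batch, classes)
loss = ctc(logp, targets,
           input_lengths=torch.full((4,), 200, dtype=torch.long),
           target_lengths=torch.full((4,), 30, dtype=torch.long))
loss.backward(); opt.step(); opt.zero_grad()
```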

2 Requirements

We are looking for an outstanding and highly motivated postdoc candidate to work on this subject. The following requirements are mandatory:

- PhD degree in natural language processing or speech processing
- Excellent programming skills (mostly in Python and deep learning frameworks)
- Interest in speech technology and speech science
- Good oral and written communication in English (French is a plus but not mandatory)
- Ability to work autonomously and in collaboration with other team members and other disciplines

3 Work context

Grenoble Alpes University offers an excellent research environment with ample computing facilities, as well as remarkable surroundings to explore over the weekends. The postdoc project will be funded by the Grenoble Artificial Intelligence Institute (MIAI). The candidate will work both at LIG-lab (GETALP team) and LIDILEM-lab. The duration of the postdoc is 18 months.

4 How to apply?

Applications should include a detailed CV; a copy of the last diploma; at least two references (people likely to be contacted); a one-page cover letter; and a one-page summary of the PhD thesis. Applications should be sent to laurent.besacier@imag.fr, solange.rossato@imag.fr and aurelie.nardy@univ-grenoble-alpes.fr. Applications will be evaluated as they are received: the position is open until it is filled.

Back  Top

6-23(2020-01-09) 12-month Postdoctoral research position at GIPSA Lab, Grenoble, France

12-month Postdoctoral research position in machine learning for neural speech decoding

Place: GIPSA-lab (CNRS/UGA/Grenoble-INP), in collaboration with BrainTech laboratory (INSERM). Both laboratories are located on the same campus in Grenoble, France.

Team: CRISSP team @ GIPSA-lab (Cognitive Robotics, Interactive Systems and Speech Processing).

Context: This position is part of the ANR (French National Research Agency) BrainSpeak project, which aims at developing a Brain-Computer Interface (BCI) for speech rehabilitation based on large-scale neural recordings. This post-doc position aims at developing new machine learning algorithms to improve the conversion of neural signals into an intelligible acoustic speech signal.

Mission: Investigate deep learning approaches to map intracranial recordings (ECoG) to speech features (spectral, articulatory, or linguistic features). A particular focus will be put on 1) weakly or self-supervised training in order to deal with unlabeled, limited and sparse datasets, 2) introducing prior linguistic information for regularization (e.g. thanks to a neural language model), and 3) online adaptation of the conversion model to cope with potential drift in time of the neural responses. (A minimal illustration of the mapping task is sketched below.)
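As a purely illustrative sketch of the mapping task, and not the project's actual architecture, the following PyTorch snippet frames neural-to-speech decoding as sequence regression from ECoG feature frames to mel-spectrogram frames; the channel count, layer sizes and the use of a GRU are assumptions made for the example.

```python
# Minimal sketch of neural-signal-to-speech-feature decoding framed as sequence
# regression: ECoG feature frames in, mel-spectrogram frames out.
# All shapes, channel counts and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class ECoGToMel(nn.Module):
    def __init__(self, n_channels=64, n_mels=80, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(n_channels, hidden, num_layers=2,
                          bidirectional=True, batch_first=True)
        self.out = nn.Linear(2 * hidden, n_mels)

    def forward(self, ecog):              # ecog: (batch, time, n_channels)
        h, _ = self.rnn(ecog)
        return self.out(h)                # (batch, time, n_mels)

decoder = ECoGToMel()
opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)
mse = nn.MSELoss()

# Toy parallel data: time-aligned ECoG features and target mel-spectrogram frames.
ecog = torch.randn(8, 300, 64)
mel_target = torch.randn(8, 300, 80)
loss = mse(decoder(ecog), mel_target)
loss.backward(); opt.step(); opt.zero_grad()
```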

Requirements and profile:

- PhD in machine learning, signal/image/speech processing
- Advanced knowledge of deep learning
- Excellent programming skills (mostly Python)
- Fluent in English

Duration: 12 months
Salary (before tax) per month (€): depending on experience
Starting date: early 2020

How to apply: send a cover letter, a resume, and references by email to:

Dr. Thomas Hueber, thomas.hueber@gipsa-lab.fr
Dr. Laurent Girin, laurent.girin@grenoble-inp.fr
Dr. Blaise Yvert, blaise.yvert@inserm.fr

Applications will be processed as they arise.

Back  Top

6-24(2020-01-10) Pre-Doc RESEARCH CONTRACT (for 3 months, extendable to one year), University odf the Basque Country, Leioa (Bizkaia), Spain
One Pre-Doc RESEARCH CONTRACT (for 3 months, extendable to one year) is open for the study, development, integration and evaluation of machine learning software tools, and the production of language resources for Automatic Speech Recognition (ASR) tasks.

Applications are welcome for one graduate (pre-doc) research contract for the study, development, integration and evaluation of machine learning software tools, and the production of language resources for ASR tasks. The contract will be funded by an Excellence Group Grant from the Government of the Basque Country. Initially, the contract is for 3 months but, if performance is satisfactory, it will be extended to at least one year (or even more, depending on the available budget), with a gross salary of around 30,000 euros/year. The workplace is located in the Faculty of Science and Technology (ZTF/FCT) of the University of the Basque Country (UPV/EHU) in Leioa (Bizkaia), Spain.

PROFILE

We seek graduate (pre-doc) candidates with a genuine interest in computer science and speech technology. Knowledge and skills in any (preferably all) of the following topics are required: machine learning (specifically deep learning), programming in Python, Java and/or C++, and signal processing. A master's degree in scientific and/or technological disciplines (especially computer science, artificial intelligence, machine learning and/or signal processing) will be highly valued. All candidates are expected to have excellent analysis and abstraction skills. Experience and interest in dataset construction will also be a plus.

RESEARCH ENVIRONMENT

The Faculty of Science and Technology (ZTF/FCT) of the University of the Basque Country (https://www.ehu.eus/es/web/ztf-fct) is a very active and highly productive academic centre, with nearly 400 professors, around 350 pre-doc and post-doc researchers and more than 2,500 students distributed across 9 degree programmes.

The research work will be carried out at the Department of Electricity and Electronics of ZTF/FCT in the Leioa Campus of UPV/EHU. The research group hosting the contract (GTTS, http://gtts.ehu.es) has deep expertise in speech processing applications (ASR, speaker recognition, spoken language recognition, spoken term detection, etc.) and language resource design and collection. If the candidate is interested in pursuing a research career, the contract would be compatible with master studies on the topics mentioned above or even a Ph.D. Thesis project within our research group, and further financing options (grants, other projects) could be explored.

The nearby city of Bilbao has become an international destination, with the Guggenheim Bilbao Museum as its main attraction. Still, though sparkling with visitors from around the world, Bilbao is a peaceful, very enjoyable medium-size city with plenty of services and leisure options, and mild weather, not as rainy as the evergreen hills surrounding the city might suggest.

APPLICATION

Applications including the candidate's CV and a letter of motivation (at most 1 page) explaining their interest in this position and how their education and skills fit the profile should be sent by e-mail (using the subject 'GTTS research contract APPLICATION ref. 1/2020') to Germán Bordel (german.bordel@ehu.eus) by Wednesday, January 29, 2020. The contract will start as soon as the position is filled.
Back  Top

6-25(2020-01-14) Post-doctoral fellow, at Yamagishi-lab, National Institute of Informatics, Tokyo, Japan.

Post-doctoral fellow, at Yamagishi-lab, National Institute of Informatics, Japan.
Starting from April 1st, 2020 (the starting date is negotiable); the appointment is for 1 year with potential for extension.
The research topic can be speech synthesis, voice conversion, speaker recognition, speaker verification, speech privacy and security, or signal processing.
Further details can be found at: https://www.nii.ac.jp/en/about/recruit/2020/0110.html

Back  Top

6-26(2020-01-15) Head of AI (M/F), ZAION, Levallois, France

Founded in September 2017, Zaion offers a unique and complete voice-based customer-relations solution built around its callbot platform, convinced that voice will be a key service channel for brands tomorrow.

Zaion is now present in 4 European countries and counts around thirty large corporate clients. In production, the callbots handle on average more than 15,000 calls per day. Based in Levallois, Zaion has a team of 35.

Joining us means taking part in an exciting and innovative adventure within a dynamic team whose ambition is to become the reference on the conversational-robot market.

To support this growth, we are recruiting our Head of AI (M/F). As manager of the R&D team, your role is strategic for the development and expansion of the company. You will work on AI solutions that automate the handling of phone calls using natural language processing and the detection of emotions in the voice.

Your main missions (an illustrative sketch follows this list):

- Take part in creating ZAION's R&D unit and lead our projects on AI and voice solutions (detection of emotions in the voice)
- Build, adapt and evolve our services for detecting emotion in the voice
- Analyse large databases of conversations to extract the emotionally relevant ones
- Build a database of conversations labelled with emotional tags
- Train and evaluate machine learning models for emotion classification
- Deploy your models in production
- Continuously improve the voice emotion detection system
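For illustration only, here is a minimal sketch of a voice emotion classifier (MFCC summary statistics fed to a linear SVM); the label set, feature choice and file names are assumptions made for the example, not ZAION's actual pipeline.

```python
# Minimal sketch of a voice emotion classifier: MFCC summary statistics fed to
# a linear SVM. Labels, features and file names are illustrative assumptions.
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

EMOTIONS = ["neutral", "anger", "joy", "sadness"]   # assumed label set

def utterance_features(wav_path):
    """Mean and std of 20 MFCCs over the utterance -> 40-dim vector."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def train(paths, labels):
    """Fit a standardised linear SVM on per-utterance feature vectors."""
    X = np.stack([utterance_features(p) for p in paths])
    clf = make_pipeline(StandardScaler(), LinearSVC())
    clf.fit(X, labels)
    return clf

# Usage on a hypothetical annotated call-centre corpus:
# clf = train(["call_001.wav", "call_002.wav"], ["anger", "neutral"])
# print(clf.predict([utterance_features("call_003.wav")]))
```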

Required qualifications and prior experience:

- You have at least 5 years of experience as a Data Scientist / Machine Learning engineer applied to audio, and at least 2 years of management experience
- You hold an engineering-school degree or a Master's degree in computer science, or a PhD in computer science/mathematics, with solid skills in signal processing (preferably audio)
- Solid theoretical background in machine learning and in the relevant mathematical areas (clustering, classification, matrix factorization, Bayesian inference, deep learning, etc.)
- Having put machine learning models into a production environment would be a plus
- You master one or more of the following: Python, machine learning / deep learning frameworks (PyTorch, TensorFlow, scikit-learn, Keras) and JavaScript
- You master audio signal processing techniques
- Proven experience in labelling large databases (preferably audio) is essential
- Your personality: a leader, autonomous and passionate about your work, you know how to run a team in project mode
- You speak English fluently

Please send your application to: alegentil@zaion.ai

 

 
 

 

 

https://www.linkedin.com/company/zaion-callbot/

ZAION

18 bis rue de Villiers

92300 Levallois

 

Back  Top


6-28(2020-01-17) 30-month postdoctoral researcher IRISA Rennes, France
IRISA (France) is looking for a 30-month postdoctoral researcher on the topic of Natural Language Processing for Kids, starting in Spring 2020.
 
The project involves:
- prediction of age recommendations for texts
- generation of explanations
- generation of textual reformulations.
 
Partners are:
- linguists: MoDyCo lab
- computer scientists: IRISA lab, the companies Qwant and Synapse Développement
- specialized journalists: French newspaper Libération.

 

Back  Top

6-29(2020-01-18) Fixed-term research engineer (CDD), IRIT, Toulouse, France

IRIT (Institut de Recherche en Informatique de Toulouse) is recruiting an engineer on a fixed-term contract for the collaborative project LinTo (PIA - Programme d'Investissements d'Avenir), a conversational assistant designed to operate in a professional context and to offer services related to the conduct of meetings.

The position sits at the interface of the work carried out by the SAMoVA team (automatic speech and audio processing) and the MELODI team (natural language processing).

The recruited engineer will work in close collaboration with the IRIT members already involved in the LinTO project. They will act as a support engineer both for the research activities and for the integration tasks that will ultimately have to be carried out on the LinTO platform developed within the project by the company LINAGORA, the project coordinator.

 

 

 

Practical information:

Position: engineer
Profile: Master's degree (M2) in computer science or PhD in computer science
Field: automatic speech or language processing, machine learning
Duration: 12 months, starting in February/March 2020
Location: Institut de Recherche en Informatique de Toulouse (IRIT: https://www.irit.fr/), collaboration between the SAMoVA and MELODI teams
Contact: Isabelle Ferrané (isabelle.ferrane@irit.fr), Philippe Muller (philippe.muller@irit.fr)

Application file: to be sent before 15 February 2020.

Details of the offer: https://www.irit.fr/SAMOVA/site/assets/files/engineer/OFFRE_INGENIEUR_IRIT_LINTO.pdf

Salary: according to the current pay scale, depending on profile and experience

Back  Top

6-30(2020-01-19) Research engineer, Vocalid, Belmont, MA, USA

 

Speech Research Engineer @ Voice AI Startup!

Location: Belmont, MA

Available: Immediately

VocaliD is a voice technology company that creates high-quality synthetic vocal identities for applications that speak. Founded on the mission to bring personalized voices to those who rely on voice prostheses, our ground-breaking innovation is now fueling the development of unique vocal persona for brands and organizations. VocaliD’s unique voices foster social connection, build trust and enhance customer experiences. Grounded in decades of research in speech signal processing and leveraging the latest advances in machine learning and big data, we have attracted ~ $4M in funding and garnered numerous awards including 2015 SXSW Innovation, 2019 Voice AI visionary, and 2020 Best Healthcare Voice User Experience. Learn more about our origins by watching our founder Rupal Patel's TED Talk.

We are seeking a speech research engineer with machine learning expertise to join our dynamic and ambitious team that is passionate about Voice!

Responsibilities:

- Algorithm design and implementation

- Research advances in sequence to sequence models of speech synthesis

- Research advances in generative models of synthesis (autoregression, GAN, etc.)

- Implement machine learning techniques for speech processing and synthesis

- Design and implement natural language processing techniques into synthesis flow

- Conduct systematic experiments to harden research methods for productization

- Work closely with engineers to implement and deliver the final product

- Present research findings in written publications and oral presentations

Required Qualifications:

- MS or PhD in Electrical or Computer Engineering, Computer Science or related field

- Experience programming in C/C++ and Python

- Experience with machine learning frameworks (Tensorflow, Keras, Pytorch, etc.)

- Experience with Windows and Linux

- Familiarity with cloud computing services (AWS, Google Cloud, Azure)

- Must communicate clearly & effectively

- Strong analytical and oral communication skills

- Excellent interpersonal and collaboration skills

Please submit a cover letter and resume to rupal@vocaliD.ai

Visit us at www.vocalid.ai for more information about VocaliD.

Back  Top

6-31(2020-01-20) Senior and junior researchers, LumenAI, France

The company

LumenAI is a start-up founded 4 years ago by academics and specialised in sequential (online) unsupervised learning. Building on its R&D activity, it offers its clients complete scientific and technical support. The domains covered by its activity are, for now, predictive maintenance, cyber-security, document indexing and social network analysis. In parallel, LumenAI markets its own online clustering and visualisation tools, gathered in the platform The Lady of the Lake (an illustrative sketch of online clustering is given below).
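As a purely illustrative sketch of what online clustering means, and not LumenAI's actual algorithm, here is a minimal sequential k-means in NumPy that updates one centre per incoming data point; the number of clusters and the data stream are assumptions made for the example.

```python
# Minimal sketch of sequential (online) k-means: each incoming point is
# assigned to its nearest centre, which is then moved towards the point.
import numpy as np

class OnlineKMeans:
    def __init__(self, k, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.centers = rng.normal(size=(k, dim))
        self.counts = np.zeros(k)

    def partial_fit(self, x):
        """Assign one point to its nearest centre and move that centre towards it."""
        j = int(np.argmin(np.linalg.norm(self.centers - x, axis=1)))
        self.counts[j] += 1
        lr = 1.0 / self.counts[j]              # per-cluster decreasing step size
        self.centers[j] += lr * (x - self.centers[j])
        return j

# Streaming usage on a toy 2-D data stream.
model = OnlineKMeans(k=3, dim=2)
stream = np.random.default_rng(1).normal(size=(1000, 2))
labels = [model.partial_fit(x) for x in stream]
```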

 

Your missions

The tasks consist of taking part in client projects and in the evolution of our clustering library 'The Lady of the Lake'. In addition, 20% of the working time is devoted to a personal research project related to LumenAI's activities.

If the candidate has a senior profile, they will also be asked to take part in project management and to accompany our data scientists at their clients' sites; for example, there is currently a need for a supervisor for a CIFRE PhD thesis (Rennes).

 

Your profile

  • Ideally, 5 years of engineering or research experience in machine learning, but very good junior profiles are also encouraged to apply.

  • LumenAI's activities rely heavily on Natural Language Processing (NLP). Expertise in this field would be a big plus.

  • A taste for research: most of the projects fall within R&D, and we also wish to strengthen our links with academic research in order to design new algorithms.

  • A taste for excellence: many players on the market claim machine learning expertise; LumenAI stands out through the quality of its engineers. Candidates should be able to demonstrate their experience through professional achievements as an engineer and/or publications at rank-A machine learning conferences.

  • Initiative and participation: LumenAI favours a flat hierarchy and the personal development of its members, so that each 'lumen' can steer their professional activity towards what they enjoy. In this spirit, each member is expected to find for themselves how to contribute to the company and integrate into the team.

 

Other information and references

The position is based in our Paris offices.

Salary: between €45k and €60k gross per year, depending on experience.

Demonstration site of the platform The Lady of the Lake:
https://lakelady.lumenai.fr/#/

LumenAI's values:
http://www.lumenai.fr/join-us

Back  Top

6-32(2020-01-21) Engineer, 6 months, GIPSA-lab, Grenoble, France

General information

Reference: UMR5216-ALLBEL-017
Workplace: ST MARTIN D HERES
Publication date: Friday 31 January 2020
Contract type: fixed-term technical/administrative contract (CDD)
Contract duration: 6 months
Expected starting date: 1 March 2020
Working time: full time
Remuneration: between €2,076.88 and €2,184.44 per month
Required level of education: Master's degree (Bac+5)
Required experience: any

Applications must be submitted via this link: http://bit.ly/2MZUY68

Missions

Context:
The missions associated with this position are part of the ANR GEPETO project (GEstures and PEdagogy of InTOnation), whose goal is to study the use of manual gestures, through human-machine interfaces, for designing tools and methods for learning to control intonation (melody) in speech.

In particular, this position is set in the context of voice rehabilitation, in cases of degraded or absent vocal-fold vibration in patients with larynx disorders. Current solutions to replace this vibration consist of injecting an artificial sound source into the mouth using an electrolarynx, over which the user can articulate normally. However, these systems generate signals with a fairly constant melody, leading to very robotic-sounding voices.

The goal of the GEPETO project at GIPSA-lab is to propose an intra-oral electrolarynx whose intonation can be controlled by hand gestures, captured by various interfaces (tablet, accelerometer, etc.). To our knowledge, this will be the first electrolarynx controlled by spatial gestures. In the longer term, we will study its use with a view to future deployment in the medical world.

Objectives:
This position constitutes the first stage of the project, namely the development of the system. The work will consist of adapting a commercial intra-oral electrolarynx, Ultra Voice (https://www.ultravoice.com), to add the possibility of controlling speech melody through gestural interfaces.

The first task consists of modifying the embedded software of the electrolarynx, whose source code (C/C++) will be provided, in order to control the sound source injected into the mouth. The engineer will have to analyse the code in order to identify the sound-source generation module, then modify or re-develop this module to control the characteristics of the sound source. In particular, this involves adapting the waveform of the injected source so as to compensate for the transfer function of the vocal apparatus, and controlling the fundamental frequency of the source vibration. Finally, the code will be re-compiled and embedded in the system.

The second task consists of interfacing Ultra Voice with the various gestural controllers available in the laboratory. Ultra Voice accepts as input an analogue signal carrying the fundamental frequency to be controlled. The work will therefore consist of developing an external piece of software that acquires and processes the data from the gestural controllers, converts them into fundamental-frequency parameters and generates the control signal expected by Ultra Voice (an illustrative sketch of this mapping is given below). The software will be developed on the real-time programming platform Max/MSP, which allows immediate acquisition of controller data and simple generation of the analogue signal required by Ultra Voice.
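For illustration, here is a minimal sketch of the gesture-to-pitch idea: a normalised gesture position is mapped to a fundamental-frequency contour and rendered as a sawtooth source. The pitch range, frame rate and sawtooth waveform are assumptions made for the example and do not describe the Ultra Voice interface or the Max/MSP implementation.

```python
# Minimal sketch: map a 1-D hand-gesture coordinate (e.g. tablet y position in
# [0, 1]) to a fundamental-frequency contour, then render a sawtooth source
# whose pitch follows that contour. Ranges and frame rate are assumptions.
import numpy as np

SR = 16000                     # audio sample rate (Hz)
F0_MIN, F0_MAX = 80.0, 300.0   # assumed usable pitch range (Hz)

def position_to_f0(pos):
    """Map a normalised gesture position in [0, 1] to f0 on a log scale."""
    pos = np.clip(pos, 0.0, 1.0)
    return F0_MIN * (F0_MAX / F0_MIN) ** pos

def sawtooth_source(f0_contour, frame_dur=0.01):
    """Render a sawtooth excitation whose pitch follows the frame-rate f0 contour."""
    f0 = np.repeat(f0_contour, int(SR * frame_dur))   # frame rate -> sample rate
    phase = np.cumsum(f0 / SR)                        # instantaneous phase in cycles
    return 2.0 * (phase % 1.0) - 1.0                  # sawtooth in [-1, 1]

# A slow upward hand movement produces a rising pitch over one second.
positions = np.linspace(0.2, 0.8, 100)                # 100 frames of 10 ms
audio = sawtooth_source(position_to_f0(positions))
```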

Activities

Main activities:
- Carry out the functional analysis of the sub-systems and break them down into elementary functions
- Develop the application software of the digital systems
- Take part in integration tests and interpret the results
- Manage the configuration of the development tools and of the developed sub-systems
- Draw up and write specifications and technical documents

Secondary activities:
- Structure and document the developed software for later reuse
- Take part in project progress meetings and interact with the various project stakeholders

Skills

Knowledge:
- C/C++ programming languages (in-depth knowledge)
- Max/MSP programming environment (desirable)
- Embedded and real-time systems (general knowledge)
- Signal processing (general knowledge)
- Engineering techniques and sciences (acoustics, mechanics, physics, etc.)
- English: B1 to B2 (Common European Framework of Reference for Languages)
- Ease in writing technical documentation
Candidates with skills in only some of the areas listed above are nevertheless encouraged to apply.

Operational skills:
- Translate a request into technical specifications
- Establish a diagnosis
- Solve problems
- Apply health and safety rules
- Apply safety procedures
- Pass on knowledge
- Keep up to date with the state of the art
- Ability to work in a team
- Good listening skills and ability to analyse stakeholders' needs

Work context

GIPSA-lab is a joint research unit of CNRS, Grenoble INP and Université Grenoble Alpes, under agreement with Inria and the Observatoire des Sciences de l'Univers de Grenoble.
With 350 people, including about 150 PhD students, GIPSA-lab is a multidisciplinary laboratory carrying out fundamental and applied research on signals and complex systems. It is internationally recognised for its research in automatic control, signal and image processing, and speech and cognition, and develops projects in the strategic fields of energy, the environment, communication, intelligent systems, life and health, and language engineering.
Given the nature of its research, GIPSA-lab maintains a constant link with the economic world through strong industrial partnerships.
Its faculty members and researchers are involved in teaching at the universities and engineering schools of the Grenoble site (Université Grenoble Alpes).
GIPSA-lab carries out its research through 12 research teams organised in 3 departments: automatic control, image and signal processing, and speech and cognition.
It has 150 permanent staff and about 250 non-permanent members (PhD students, post-docs, invited researchers, Master's interns, etc.).

The engineer will be attached to the Technical Division of GIPSA-lab. They will work with the CRISSP team (Cognitive Robotics, Interactive Systems, Speech Processing) of the laboratory's Speech and Cognition Division, and with the MOVE team (Analysis and Modelling of Humans in Motion: Biomechanics, Cognition, Vocology) of the Data Science Division.

Back  Top

6-33(2020-02-09) Fully-funded PhD position at GIPSA-lab , Grenoble, France
Fully-funded PhD position at GIPSA-lab (Grenoble, France): 'Automatic recognition and generation of cued speech using deep learning', in the context of the European project Comm4CHILD (http://comm4child.ulb.be). Project description: http://comm4child.ulb.be/post/cnrs_gipsa_beautemps_esr10/gipsa_esr10/
 
Back  Top

6-34(2020-02-16) The Federal Criminal Police Office (BKA), Wiesbaden, Germany

The Federal Criminal Police Office (BKA) in Germany (Wiesbaden) is currently advertising a job in the field of Forensic Speaker Recognition and Audio Analysis.

Back  Top

6-35(2020-02-20) Recruitment of PhD candidates - comm4CHILD project, Université libre de Bruxelles, Belgium
Recruitment of PhD candidates - comm4CHILD project

The Marie Sklodowska-Curie Innovative Training Network project (MSC ITN, grant agreement 860766) 'comm4CHILD' (coordinators: Cécile Colin & J. Leybaert, Université libre de Bruxelles) is recruiting 15 PhD candidates in different fields (biology, cognition, language), with the aim of optimising the communication and social inclusion of children with hearing impairments. The host institutions are located in Belgium, Germany, France, England, Norway and Denmark.

To be eligible, candidates must not have resided or carried out their main activity (work, studies) for more than 12 months in the country of the host institution during the 3 years preceding the start of the research contract. They must also hold a degree allowing them to start a PhD. Candidates who already hold a doctoral degree, or who have more than 4 years of full-time research experience, cannot apply.

In addition, a good command of French is desired for the following research projects, insofar as they require interacting with French-speaking children with hearing impairments:

- Temporal course of auditory, labial, and manual signals in Cued Speech (CS) perception: an ERPs project
  Host institution: Université libre de Bruxelles, Belgium (ESR 5); contact: ccolin@ulb.ac.be

- A cognitive analysis of spelling skills in children with cochlear implant from various linguistic backgrounds (sign language, cued speech)
  Host institution: Université libre de Bruxelles, Belgium (ESR 13); contacts: leybaert@ulb.ac.be and fchetail@ulb.ac.be

- The phonological body: body movements to accompany linguistic development in deaf children
  Host institution: Centre Comprendre et Parler, Brussels, Belgium
  PhD enrolment at Université libre de Bruxelles, Belgium (ESR 15); contact: brigitte.charlier@ccpasbl.be

The application deadline is 15 March 2020. The projects start in September-October 2020 for a duration of 36 months.

For more information on the research projects and host institutions, the eligibility criteria, the salary and benefits, and the contact persons, please visit the project website: http://comm4child.ulb.be

We are committed to inclusive recruitment and strive to build a gender-balanced team of PhD candidates. People with disabilities, and particularly people with hearing impairments, are encouraged to apply.
Back  Top

6-36(2020-02-22) Fully funded PhD position in Explainable deep learning methods , Uppsala University, Sweden

** Fully funded PhD position in Explainable deep learning methods for human-human and human-robot interaction**

Department of Information Technology

Uppsala University

 

 

Interested candidates should contact Prof. Ginevra Castellano by email (ginevra.castellano@it.uu.se) by Friday 13th of March at the latest to discuss the research project.

Include the following documents in the email:

-        A CV, including list of publications (if any) and the names of two reference persons

-        Transcript of grades

-        A cover letter of maximum one page describing the scientific issues in the project that interest you and how your past experiences fit into the project

 

Summary of project’s topic

Human-human interaction (HHI) relies on people’s ability to mutually understand each other, often by making use of multimodal implicit signals that are physiologically embedded in human behaviour and do not require the sender’s awareness. When we are engrossed in a conversation, we align with our partner: we unconsciously mimic each other, coordinate our behaviours and synchronize positive displays of emotion. This tremendously important skill, which spontaneously develops in HHI, is currently lacking in robots.

This project aims at building on advances in deep learning, and in particular on the field of Explainable Artificial Intelligence (XAI), which offers approaches to increase the interpretability of the complex, highly nonlinear deep neural networks, to develop new machine learning-based methods that (1) automatically analyse and predict emotional alignment in HHI, and (2) bootstrap emotional alignment in human-robot interaction.
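For illustration, the sketch below shows one common XAI technique (gradient saliency) applied to a placeholder predictor; the model, the 64-dimensional behavioural feature vector and the two-class output are assumptions made for the example, not the project's architecture.

```python
# Minimal sketch of gradient saliency: which input features most influence an
# emotional-alignment prediction. The model and feature sizes are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(             # stand-in predictor: 64 features -> aligned / not aligned
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 2),
)

def saliency(model, features, target_class):
    """Absolute gradient of the target-class score w.r.t. each input feature."""
    x = features.clone().requires_grad_(True)
    score = model(x)[target_class]
    score.backward()
    return x.grad.abs()

features = torch.randn(64)         # one interaction window, 64 behavioural features
print(saliency(model, features, target_class=1))
```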

More information about the project can be found here.

 

Requirements

The ideal PhD candidate is a student with an MSc in Computer Science, Machine Learning, Artificial Intelligence, Robotics or a related field, with broad mathematical knowledge as well as technical and programming skills. The components to be studied build on a number of mathematical techniques, and the methods development involved in the project will require a good command of the related areas; central among these are mathematical optimization and probability theory. Experience and/or interest in the social sciences is also required. See further eligibility requirements here.

 

Further information

The project is a collaboration between the Uppsala Social Robotics Lab (Prof. Ginevra Castellano) and the MIDA (Methods for Image Data Analysis) group (Dr. Joakim Lindblad) at the Department of Information Technology, and the Uppsala Child and Baby Lab (Prof. Gustaf Gredebäck) at the Department of Psychology of Uppsala University.

The student will be part of the Uppsala Social Robotics Lab at the Division of Visual Information and Interaction of the Department of Information Technology, and contribute to the lab’s projects on the topic of co-adaptation in human-robot interactions. The student will also join the Graduate School of the Centre for Interdisciplinary Mathematics (CIM).

The Uppsala Social Robotics Lab’s focus is on natural interaction with social artefacts such as robots and embodied virtual agents. This domain concerns bringing together multidisciplinary expertise to address new challenges in the area of social robotics, including mutual human-robot co-adaptation, multimodal multiparty natural interaction with social robots, multimodal human affect and social behaviour recognition, multimodal expression generation, robot learning from users, behaviour personalization, effects of embodiment (physical robot versus embodied virtual agent) and other fundamental aspects of human-robot interaction (HRI). State of the art robots are used, including the Pepper, Nao and Furhat robotic platforms.

The fully funded PhD position is for four years.

 

--

Dr. Ginevra Castellano

Professor

Director, Uppsala Social Robotics Lab

https://usr-lab.com

Department of Information Technology

Uppsala University

Box 337, 751 05 Uppsala, Sweden

Webpage: http://user.it.uu.se/~ginca820/

Back  Top

6-37(2020-02-29) Web developer at ELDA, Paris, France

The European Language resources Distribution Agency (ELDA), a company specialised in Human Language Technologies within an international context is currently seeking to fill an immediate vacancy for a permanent Web Developer position.

Web Developer (m/f)

Under the supervision of the technical department manager, the responsibilities of the Web Developer consist of designing and developing web applications and software tools for linguistic data management.
Some of these software developments are carried out within the framework of European research and development projects and are published as free software.
Depending on the profile, the Web Developer could also participate in the maintenance and upgrading of the current linguistic data processing toolchains, while being hands-on whenever required by the language resource production and management team.

Profile:

-    Master (BAC + 4 / BAC + 5 or higher) in Computer Science or a related field (experience in natural language processing is a strong plus)
-    Proficiency in Python
-    Hands-on experience in Django
-    Hands-on knowledge of a distributed version control system (Git preferred)
-    Knowledge of SQL and of RDBMS (PostgreSQL preferred)
-    Basic knowledge of JavaScript and CSS
-    Basic knowledge of Linux shell scripting
-    Practice of free software
-    Proficiency in French and English
-    Curious, dynamic and communicative, flexible to work on different tasks in parallel
-    Ability to work independently and as part of a multidisciplinary team
-    Citizenship (or residency papers) of a European Union country

The position is based in Paris.

Salary: Commensurate with qualifications and experience.

Applicants should email a cover letter addressing the points listed above together with a curriculum vitae to :

ELDA
9, rue des Cordelières
75013 Paris
FRANCE
Mail: job@elda.org

ELDA is a human-sized company (15 people) acting as the distribution agency of the European Language Resources Association (ELRA). ELRA was established in February 1995, with the support of the European Commission, to promote the development and exploitation of Language Resources (LRs). Language Resources include all data necessary for language engineering, such as monolingual and multilingual lexica, text corpora, speech databases and terminology. The role of this non-profit membership Association is to promote the production of LRs, to collect and to validate them and, foremost, make them available to users. The association also gathers information on market needs and trends.

For further information about ELDA/ELRA, visit:
http://www.elda.org

Back  Top

6-38(2020-03-03) 15 early-stage researcher positions available within the COBRA Marie Sklodowska-Curie Innovative Training Network, Berlin, Germany

15 early-stage researcher positions available within the COBRA Marie Sklodowska-Curie Innovative Training Network

A call for applications is open for 15 three-year contracts offered to early-stage researchers (ESRs) wishing to enrol as PhD students in the framework of the Conversational Brains (COBRA) project.

 

COBRA is a Marie Sklodowska-Curie Innovative Training Network funded by the European Commission within the Horizon 2020 programme. It aims to train ESRs to accurately characterize and model the linguistic, cognitive and brain mechanisms that allow conversation to unfold in both human-human and human-machine interactions.

The network comprises ten academic research centers on language, cognition and the human brain, and four industrial partners in web-based speech technology, conversational agents and social robots, in ten countries.

The partners' combined expertise and high complementarity will allow COBRA to offer ESRs an excellent training programme as well as very strong exposure to the non-academic sector.

Deadline for submission of applications: 31 March 2020

All information is available here: https://www.cobra-network.eu/

LIST OF AVAILABLE POSITIONS

ESR1: Categorization of speech sounds as a collective decision process
Recruiter & PhD enrolment at: Aix-Marseille University (France)

ESR2: Brain markers of between-speaker convergence in conversational speech
Recruiter: Italian Institute of Technology, Ferrara (Italy), PhD enrolment at: University of Ferrara

ESR3: Does prediction drive neural alignment in conversation?
Recruiter & PhD enrolment at: Aix-Marseille University (France)

ESR4: Brain indexes of semantic and pragmatic prediction
Recruiter & PhD enrolment at: Freie Universität Berlin (Germany)

ESR5: Communicative alignment at the physiological level
Recruiter & PhD enrolment at: Humboldt-Universität zu Berlin / ZAS, Berlin (Germany)

ESR6: Alignment in human-machine spoken interaction
Recruiter & PhD enrolment at: The University of Edinburgh (UK)

ESR7: Contribution of discourse markers to alignment in conversation
Recruiter & PhD enrolment at: Université catholique de Louvain, Louvain-la-Neuve (Belgium)

ESR8: Discourse units and discourse alignement
Recruiter & PhD enrolment at: Université catholique de Louvain, Louvain-la-Neuve (Belgium)

ESR9: Acoustic-phonetic alignment in synthetic speech
Recruiter & PhD enrolment: The University of Edinburgh (UK)

ESR10: Phonetic alignment in a non-native language
Recruiter: Italian Institute of Technology, Ferrara (Italy), PhD enrolment at: University of Ferrara

ESR11: Conversation coordination and mind-reading
Recruiter: IISAS, Bratislava (Slovakia), PhD enrolment at: Slovak University of Technology, Bratislava

ESR12: The influence of alignment
Recruiter: IISAS, Bratislava (Slovakia), PhD enrolment at: Slovak University of Technology, Bratislava

ESR13: Parametric dialogue synthesis: from separate speakers to conversational interaction
Recruiter: ReadSpeaker, Huis ter Heide (The Netherlands), PhD enrolment at: Helsinki University (Finland)

ESR14: Gender and vocal alignment in speakers and robots
Recruiter: Furhat Robotics, Stockholm (Sweden), PhD enrolment at: Aix-Marseille University (France)

ESR15: Endowing robots with high-level conversational skills
Recruiter: Furhat Robotics, Stockholm (Sweden), PhD enrolment at: Radboud Univ., Nijmegen (The Netherlands)

Back  Top

6-39(2020-03-07) Postdoc researcher in deepNN, University of Glasgow, UK (UPDATED)

 

APPLICATION DEADLINE EXTENDED TO 13 APRIL

 

 



The School of Computing Science at the University of Glasgow is looking for an excellent and enthusiastic researcher to join the ESRC-funded international collaborative project 'Using AI-Enhanced Social Robots to Improve Children's Healthcare Experiences.' This is a new 3-year project which aims to investigate how a social robot can help children cope with potentially painful experiences in a healthcare setting. The system developed in the project will be tested through a hospital-based clinical trial at the end of the project.

 
In Glasgow, we are looking for a researcher with expertise in applying deep neural network models to the automated analysis of multimodal human behaviour, ideally along with experience integrating such systems into an end-to-end interactive system.
 
You will be working together with Dr. Mary Ellen Foster in the Glasgow Interactive Systems Section (GIST); you will collaborate closely with Dr. Ron Petrick and his team from the Edinburgh Centre for Robotics at Heriot-Watt University, and will also collaborate with medical and social science researchers at several Canadian universities including University of Alberta, University of Toronto, Ryerson University, McMaster University, and Dalhousie University.
 
GIST provides an ideal ground for academic growth. It is the leader of a recently awarded Centre for Doctoral Training that is providing 50 PhD scholarships in the next five years in the area of socially intelligent artificial intelligence. In addition, its 7 faculty members have accumulated more than 25,000 Scholar citations and have been or are leading  large-scale national and European projects (including the ERC Advanced Grant 'Viajero', the Network Plus grant 'Human Data Interaction', the FET-Open project 'Levitate', and the H2020 project MuMMER) for a total of over £20M in the last 10 years.
 
The post is full time with funding up to 27 months in the first instance.
 
For more information and to apply online, please see https://www.jobs.ac.uk/job/BYX734/research-associate.
Please email MaryEllen.Foster@glasgow.ac.uk with any informal enquiries.
 
It is the University of Glasgow’s mission to foster an inclusive climate, which ensures equality in our working, learning, research and teaching environment.
We strongly endorse the principles of Athena SWAN, including a supportive and flexible working environment, with commitment from all levels of the organisation in promoting gender equality.
 
Back  Top


6-41(2020-03-15) Postdoc, Fondazione Bruno Kessler, Trento, Italy

Call for PostDoc Position

Università Politecnica delle Marche, Fondazione Bruno Kessler and PerVoice SpA, the partners of the AGEVOLA project funded by Fondazione Caritro, are seeking an enthusiastic post-doc researcher to work on advanced machine-learning-based solutions for speech enhancement and speaker diarization in call-center communications. (A classical signal-processing baseline for the enhancement part is sketched below for illustration.)
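For illustration only, the sketch below shows a classical baseline for the speech-enhancement part, magnitude spectral subtraction; the STFT settings, noise-estimation window and spectral floor are assumptions made for the example and do not represent the machine-learning approach the project will develop.

```python
# Minimal sketch of magnitude spectral subtraction as a classical
# speech-enhancement baseline. STFT settings and floor are illustrative.
import numpy as np
from scipy.signal import stft, istft

def spectral_subtraction(noisy, sr=16000, noise_seconds=0.5):
    """Estimate noise from the first noise_seconds and subtract its magnitude."""
    f, t, spec = stft(noisy, fs=sr, nperseg=512)
    mag, phase = np.abs(spec), np.angle(spec)
    n_noise = max(1, int(noise_seconds * sr / (512 // 2)))   # frames in the noise-only lead-in
    noise_mag = mag[:, :n_noise].mean(axis=1, keepdims=True)
    clean_mag = np.maximum(mag - noise_mag, 0.05 * mag)      # spectral floor
    _, enhanced = istft(clean_mag * np.exp(1j * phase), fs=sr, nperseg=512)
    return enhanced

# Usage on a synthetic signal: 0.5 s of noise followed by a noisy tone.
sr = 16000
noise = 0.3 * np.random.randn(sr // 2)
tone = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
x = np.concatenate([noise, tone + 0.3 * np.random.randn(sr)])
y = spectral_subtraction(x, sr=sr)
```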

Pre-requisites:

- PhD in Electronics Engineering, Telecommunications Engineering or Information Engineering, or possibly in Mathematics or Physics
- Research experience in the field of digital audio processing and machine learning; a solid background in speech processing is appreciated
- Programming knowledge: Python, C/C++
- Competence in setting up and using the Unix/Linux software environment, and experience with Python libraries for machine learning (such as PyTorch and TensorFlow)

Duration and starting date: 21 months, starting from Summer 2020.

Work location: Università Politecnica delle Marche, Ancona, Italy. The researcher will cooperate closely with the company PerVoice S.p.A. and the Fondazione Bruno Kessler, both located in Trento, Italy; therefore, some working time will be spent in Trento. The possibility to work remotely is foreseen as well.

Gross salary: 63,000 Euro for the entire contract duration.

Contacts:

Stefano Squartini - Università Politecnica delle Marche - s.squartini@univpm.it

Alessio Brutti - Fondazione Bruno Kessler - brutti@fbk.eu

Leonardo Badino - PerVoice S.p.A - leonardo.badino@pervoice.it

Back  Top

6-42(2020-03-25) Several ATER positions at the Faculté des Lettres, Sorbonne Université, France

The Faculté des Lettres of Sorbonne Université is opening several ATER (temporary teaching and research assistant) positions for competitive recruitment, with a profile in Artificial Intelligence for the Humanities, within the UFR of Sociology and Computer Science for the Humanities. Computer science there is taught by 6 permanent staff and 5 ATER.

The list and profiles of the positions are available at
http://www.recrutement.sorbonne-universite.fr/fr/personnels-enseignants-chercheurs-enseignants-chercheurs/enseignants-chercheurs/campagne-de-recrutement-ater/postes-ouverts-par-la-faculte-des-lettres.html

Back  Top

6-43(2020-04-02) Fully funded PhD position, Idiap Research Institute, Martigny, Switzerland

There is a fully funded PhD position open at Idiap Research Institute on Low Level
Mechanisms of Language Evolution.

Recently, Idiap has had some success in incorporating physiological processes into
backpropagation-style neural architectures. This allows the processes to be trained in
the context of the larger network whilst giving the network the capability to recognise
and reproduce physiological functions in a natural manner.

In this project, we will apply this technique to the human cochlea and its interface with
low level neural mechanisms. We are particularly interested in the efferent pathway from
the low level neurological system back to the cochlea. This will allow us to investigate
the resulting non-linear feedback relationship.
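For readers unfamiliar with the general idea, the following is a minimal sketch only (not Idiap's actual model; the module, layer sizes and the toy feedback rule are hypothetical) of how a physiologically inspired, differentiable front end can sit inside a larger PyTorch network so that its parameters, including an efferent-style feedback gain, are trained by ordinary backpropagation.

```python
# Minimal sketch (hypothetical, not the Idiap model): a differentiable,
# physiologically inspired front end trained inside a standard backprop network.
import torch
import torch.nn as nn

class CochleaLikeFrontEnd(nn.Module):
    """Toy front end: a bank of learnable filters plus a learnable
    efferent-style feedback gain applied back onto the input signal."""
    def __init__(self, n_filters=16, kernel_size=65):
        super().__init__()
        self.filters = nn.Conv1d(1, n_filters, kernel_size, padding=kernel_size // 2)
        self.feedback_gain = nn.Parameter(torch.tensor(0.1))  # efferent-style gain

    def forward(self, waveform):
        # waveform: (batch, 1, time)
        excitation = torch.relu(self.filters(waveform))
        # crude non-linear feedback: attenuate the input according to excitation
        feedback = torch.sigmoid(self.feedback_gain * excitation.mean(dim=1, keepdim=True))
        return torch.relu(self.filters(waveform * (1.0 - feedback)))

model = nn.Sequential(
    CochleaLikeFrontEnd(),        # 16 output channels
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(16, 2),             # toy classification head (matches n_filters)
)

x = torch.randn(4, 1, 16000)      # four random one-second "signals"
loss = model(x).pow(2).mean()     # dummy loss, just to show end-to-end training
loss.backward()                   # gradients flow through the physiological module
```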

For more information, and to apply, please follow this link:
 http://www.idiap.ch/education-and-jobs/job-10284

Idiap is located in Martigny in French-speaking Switzerland, but operates in English and hosts many nationalities. PhD students are registered at EPFL. All positions offer quite generous salaries. Martigny has a distillery and a micro-brewery and is close to all manner of skiing, hiking and mountain life.

There are other open positions on Idiap's main page
 https://www.idiap.ch/en/join-us/job-opportunities

Back  Top

6-44(2020-04-15) Professor (W3) on AI for Language Technologies, Germany
The Institute of Anthropomatics and Robotics within the Division 2 - Informatics, Economics, and Society - is seeking to fill, as soon as possible, the position of a
 
   Professor (W3) on AI for Language Technologies
   
The research area of the professorship covers methods of artificial intelligence, especially machine learning, for the realization of intelligent systems for human-machine interaction and their evaluation in real-world applications. Possible research topics include natural language understanding, spoken language translation, automatic speech recognition, interactive and incremental learning from heterogeneous data, and the extraction of semantics from image, text and speech data. The professorship contributes to the 'Robotics and Cognitive Systems' focus of the KIT Department of Informatics.
 
In teaching, the professorship contributes to the education of students of the KIT Department of Informatics, among others in the fields of natural language understanding, neural networks, machine learning, and cognitive systems. Participation in undergraduate teaching of basic courses in computer science in the German language is expected, especially in the field of artificial intelligence. A transition period for acquiring German language skills is provided.
 
Active participation in interACT (International Center for Advanced Communication Technologies) is desired. Experience in the development of innovations in application fields of artificial intelligence, e.g. with industrial cooperation partners, is advantageous. 
 
We are looking for a candidate with outstanding scientific qualifications and an outstanding international reputation in the above-mentioned field of research. Skills in the acquisition of third-party funding and the management of scientific working groups are expected, as well as very good didactic skills both in basic computer science lectures and in in-depth courses on subjects within the research area of the professorship.
 
Active participation in the academic tasks of the KIT Department of Informatics and in the self-administration of KIT in Division II is expected as well as participation in the KIT Center Information - Systems - Technologies.
 
According to § 47 of the Baden-Wuerttemberg University Act (Landeshochschulgesetz des Landes Baden-Württemberg), a university degree, teaching aptitude and exceptional competence in scientific work are required.
 
KIT is an equal opportunity employer. Women are especially encouraged to apply. Applicants with disabilities will be given preferential consideration if equally qualified. KIT is certified as a family-friendly university and offers part-time employment, leaves of absence, a Dual Career Service and coaching to actively promote work-life balance.
 
Applications with the required documents (curriculum vitae, degree certificates as well as a list of publications) and a perspective paper (maximum of three pages) should be sent by e-mail, preferably compiled into a single PDF document, to dekanat@informatik.kit.edu  by 03.05.2020. For enquiries regarding this specific position please contact Professor Dr. Tamim Asfour, e-mail:  asfour@kit.edu .
 
 
Links: 
Karlsruhe Institute of Technology
 
Institute for Anthropomatics and Robotics
 
KIT Center Information, Systems, Technologies (KCIST)
Back  Top

6-45(2020-04-24) PROFESSORSHIP (junior/senior) @ KU Leuven, Belgium

PROFESSORSHIP (junior/senior) @ KU Leuven: EMBODIED LEARNING MACHINES
Since their beginnings some 50 years ago, computer vision, speech and natural language
processing, and robotics have made progress through various forms and levels of
integration of so-called 'model-based' (e.g. system identification) and 'model-free'
(e.g. deep learning) approaches.

However, important challenges remain at the system and application level, where sound,
vision, touch, force and motion must be integrated to reach application-driven and
system-level performance goals. How to synergistically integrate the progress mentioned
above into the perception and control of an engineered body equipped with all of these
sensory modalities is still a largely unsolved scientific and technical question.

The ideal candidate has proven expertise in one or more of the above-mentioned
disciplines and wants to advance the state of the art in their integration.

Further details regarding this position
:https://www.kuleuven.be/personeel/jobsite/jobs/55533021?hl=en&lang=en

Back  Top

6-46(2020-04-22) 3 research associate positions at Heriot-Watt University, Edinburgh, UK

The Interaction Lab at Heriot-Watt University, Edinburgh, seeks to fill 3 research associate positions in Conversational AI and NLP within the following research areas:

 
1. Response Generation for Conversational AI
 
2. Abuse Detection and Mitigation for Conversational Agents
 
3. Controllable Response Generation
 
Closing date: 1st June
Start date: 1st September 2020 (negotiable)
Salary range: £26,715 - £30,942 (Grade 6 without PhD degree) or £32,817 - £40,322 (Grade 7 with PhD degree)
 
The positions are associated with the EPSRC-funded projects 'Designing Conversational Assistants to Reduce Gender Bias' (positions 1&2) and 'AISEC: AI Secure and Explainable by Construction' (position 3), which are both in collaboration with the University of Edinburgh and other academic and industrial partners, including the University of Glasgow, the University of Strathclyde, the Scottish Government, NEC Labs Europe, Huggingface, and the BBC.
 

For informal enquiries please contact Prof. Verena Rieser <v.t.rieser@hw.ac.uk>.
Back  Top

6-47(2020-04-20) Fully-funded PhD studentships in Speech and Language Technologies at the University of Sheffield,UK
 
 
UKRI Centre for Doctoral Training (CDT) in Speech and Language Technologies (SLT) and their Applications 
 
Department of Computer Science
Faculty of Engineering 
University of Sheffield, UK
 
Fully-funded 4-year PhD studentships for research in Speech and Language Technologies (SLT) and their Applications
 
** Applications now open for last remaining September 2020 intake places **
 
Deadline for applications: 31 May 2020
 
Speech and Language Technologies (SLTs) are a range of Artificial Intelligence (AI) approaches which allow computer programs or electronic devices to analyse, produce, modify or respond to human texts and speech. SLTs are underpinned by a number of fundamental research fields including natural language processing, speech processing, computational linguistics, mathematics, machine learning, physics, psychology, computer science, and acoustics. SLTs are now established as core scientific/engineering disciplines within AI and have grown into a world-wide multi-billion dollar industry.
 
Located in the Department of Computer Science at the University of Sheffield, a world-leading research institution in the SLT field, the UKRI Centre for Doctoral Training (CDT) in Speech and Language Technologies and their Applications is a vibrant research centre that also provides training in engineering skills, leadership, ethics, innovation, entrepreneurship, and responsibility to society.
 
 
The benefits:
  • Four-year fully-funded studentship covering all fees and an enhanced stipend (£17,000 pa)
  • Generous personal allowance for research-related travel, conference attendance, specialist equipment, etc.
  • A full-time PhD with Integrated Postgraduate Diploma (PGDip) incorporating 6 months of foundational SLT training prior to starting your research project 
  • Bespoke cohort-based training programme running over the entire four years providing the necessary skills for academic and industrial leadership in the field.
  • Supervision from a team of over 20 internationally leading SLT researchers, covering all core areas of modern SLT research, and a broader pool of over 50 academics in cognate disciplines with interests in SLTs and their application
  • Every PhD project is underpinned by a real-world application, directly supported by one of our industry partners. 
  • A dedicated CDT workspace within a collaborative and inclusive research environment hosted by the Department of Computer Science
  • Work and live in Sheffield - a cultural centre on the edge of the Peak District National Park which is in the top 10 most affordable UK university cities (WhatUni 2019).
 
About you:
We are looking for students from a wide range of backgrounds interested in Speech and Language Technologies. 
  • High-quality (ideally first class) undergraduate or masters (ideally distinction) degree in a relevant discipline. Suitable backgrounds include (but are not limited to) computer science/software engineering; informatics; AI; speech and language processing; mathematics; physics; linguistics; cognitive science; and engineering.
  • Regardless of background, you must be able to demonstrate strong mathematical aptitude (minimally to A-Level standard or equivalent) and experience of programming.
  • We particularly encourage applications from groups that are underrepresented in technology.
  • Candidates must satisfy the UKRI funding eligibility criteria. Students must have settled status in the UK and have been 'ordinarily resident' in the UK for at least 3 years prior to the start of the studentship. Full details of eligibility criteria can be found on our website.
 
Applying:
Applications are now sought for the September 2020 intake. The deadline is 31 May 2020
 
Applications will be reviewed within 6 weeks of the deadline and short-listed applicants will be invited to interview. Interviews will be held in Sheffield or via videoconference. 
 
See our website for full details and guidance on how to apply: https://slt-cdt.ac.uk 
 
For an informal discussion about your application please contact us by email at: sltcdt-enquiries@sheffield.ac.uk
 
By replying to this email or contacting sltcdt-enquiries@sheffield.ac.uk you consent to being contacted by the University of Sheffield in relation to the CDT. You are free to withdraw your permission in writing at any time.
Back  Top

6-48(2020-04-28) Fully-funded PhD in speech synthesis - University of Grenoble-Alps - France
Fully-funded PhD in speech synthesis - University of Grenoble-Alps - France
  • Expressive audiovisual speech synthesis for an embodied conversational agent

  Funding: THERADIA project funded by BPI-France with industrial partners (SBT, ATOS, Pertimm). Provides a full salary for 3 years (2135€ gross monthly) and a generous package for travel and other costs.

  Deadline: applications will be considered on an ongoing basis until the position is filled

  Full details on the topics and how to apply: https://bit.ly/3cW1gy9

Back  Top

6-49(2020-04-30) Professorship in Computer Science (m/f/d), DHBW Karlsruhe, Germany


offer in english:   https://euraxess.ec.europa.eu/jobs/517917

THE COOPERATIVE UNIVERSITY DEGREE WITH A FUTURE.

With currently around 34,000 students (at 12 locations) and 9,000 cooperating companies and social institutions, the Duale Hochschule Baden-Württemberg (DHBW, Baden-Wuerttemberg Cooperative State University) is one of the largest higher-education institutions in the state. Karlsruhe, located in the Rhine plain between the Palatinate hills, the Vosges and the Black Forest, is a young city in the heart of Europe. The TechnologieRegion Karlsruhe is one of the most dynamic regions in Europe and is among the leading science and high-tech locations. Around 3,200 students are currently enrolled at DHBW Karlsruhe in the Faculties of Business and Engineering.

THE FOLLOWING POSITION IN THE FACULTY OF ENGINEERING AT DHBW KARLSRUHE IS TO BE FILLED AS OF 01.04.2021:

Professorship in Computer Science (m/f/d)

Salary grade W2, reference number: KA-5/111

In addition to the requirements of § 47 LHG (Baden-Wuerttemberg Higher Education Act), applicants should have the following qualifications:

  • A completed degree in computer science and a doctorate
  • Relevant professional experience in the field of computer science
  • Very good expertise in one of the focus areas of the computer science programme: artificial intelligence, IT security or the Internet of Things, and their societal applications

The duties include teaching, applied research and continuing education in the computer science degree programme.
In addition to enthusiasm for teaching, we expect a willingness to further develop the degree programme, both academically and organisationally.
For this varied role we are looking for a person with strong leadership and social skills who engages with the programme's stakeholders in a communicative and responsible manner. You can expect an open-minded team that cooperates respectfully and develops the degree programme together.

Recruitment requirements
According to § 47 LHG, the requirements are a completed university degree, special aptitude for scientific work (usually demonstrated by a doctorate), pedagogical aptitude, and at least five years of professional experience, of which at least three years outside the higher-education sector. Applicants must also be willing to contribute to scientific development, in particular through research and scientific continuing education. A high degree of commitment and willingness to cooperate with the participating companies and social institutions is expected, as is willingness to take part in committee work. We also expect an appropriate contribution to the other duties of DHBW Karlsruhe.

Appointment as a tenured civil servant (professor, W2) is generally possible after a three-year probationary period as a civil servant, provided that the age at recruitment does not exceed 47 years, or 52 years if special conditions are met. The positions can in principle be shared. In the case of part-time work, employment is on a salaried basis and is remunerated outside the collective agreement, analogous to salary grade W2.

Applications from women are particularly welcome. Severely disabled applicants with equal professional qualifications will be given priority (please enclose proof).

For questions about the appointment procedure, please contact the Dean, as chair of the appointment committee, or the Equal Opportunities Officer, Prof. Dr. Karin Schäfer (karin.schaefer@dhbw-karlsruhe.de).

Please send your application online (ideally as a single PDF file), quoting the reference number, by 13.05.2020 to:

Duale Hochschule Baden-Württemberg Karlsruhe
Dean Prof. Dr. Roland Küstermann
DekanatTechnik@dhbw-karlsruhe.de
www.karlsruhe.dhbw.de

Back  Top

6-50(2020-04-25) PhD at Univ. Avignon, France

We are looking for motivated candidates for a PhD thesis in AI / vocal interaction on the topic:

'Transformer et renforcer pour l'apprentissage des agents conversationnels vocaux' (Transform and reinforce for training vocal conversational agents)

The main challenge we want to address in this thesis is to adapt the capabilities of a pre-trained deep neural model of the Transformer type to a specific task, in particular for building a conversational agent. Transfer-learning approaches have already been initiated, but their results are mixed and need to be strengthened. Two main axes have been identified for the thesis:
- designing sample-efficient strategies that make reinforcement learning practical, in particular in the context of the continual learning of a conversational agent;
- increasing the ability of such models to cope with noisy inputs, such as spoken exchanges with a user, which are more natural and contain many deviations from written language as well as errors introduced by automatic speech transcription.

The approaches considered rely on the most advanced machine-learning paradigms (including deep learning and reinforcement learning), in the spirit of the sketch below. The topic and the application requirements are detailed in the attached PDF.
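As a purely illustrative aid (not the thesis method; the model, reward and noise process are toy assumptions), the sketch below shows the two axes in miniature: a small policy network, standing in here for a pre-trained Transformer, fine-tuned with a REINFORCE-style policy gradient and a simple baseline for sample efficiency, with random token substitutions mimicking ASR errors in the input.

```python
# Illustrative sketch only (hypothetical): REINFORCE-style fine-tuning of a small
# response-selection policy, with word substitutions mimicking noisy ASR input.
import torch
import torch.nn as nn

VOCAB, N_ACTIONS, EMB = 1000, 4, 32   # toy sizes, purely hypothetical

class TinyPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.encoder = nn.GRU(EMB, EMB, batch_first=True)  # stand-in for a pre-trained Transformer
        self.head = nn.Linear(EMB, N_ACTIONS)

    def forward(self, tokens):                 # tokens: (batch, seq)
        _, h = self.encoder(self.emb(tokens))
        return torch.log_softmax(self.head(h[-1]), dim=-1)

def asr_noise(tokens, p_drop=0.2):
    """Randomly replace tokens to simulate ASR errors and disfluencies."""
    mask = torch.rand(tokens.shape) < p_drop
    return torch.where(mask, torch.randint_like(tokens, VOCAB), tokens)

policy = TinyPolicy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for step in range(100):                        # toy training loop
    utterance = torch.randint(0, VOCAB, (8, 12))       # fake user turns
    log_probs = policy(asr_noise(utterance))
    actions = torch.distributions.Categorical(logits=log_probs).sample()
    reward = (actions == 0).float()            # dummy reward from a simulated user
    baseline = reward.mean()                   # simple variance-reduction baseline
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = -((reward - baseline) * chosen).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```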

The candidate's acceptance will be validated through a competitive selection within Doctoral School 536 of Avignon Université. Applications should preferably reach us **before 15 May**.

Back  Top

6-51(2020-04-28) 1 year post-doc/engineer position at LIA, Avignon France

*1 year post-doc/engineer position at LIA, Avignon France, in the Vocal
Interaction Group*
==================================================================
        Multimodal man-robot interface for social spaces

keywords: AI, ML, DNN, RL, NLP, dialogue (vision, robotics)

Starting job date (desired): Sept. 2020.
==================================================================
## Work description

###Project Summary

Automation and optimisation of *verbal interactions of a
socially-competent robot*, guided by its *multimodal perceptions*

Facing a steady increase in the ageing population and in the prevalence of chronic
diseases, social robots are promising tools to include in the healthcare system. Yet
existing assistive robots are not well suited to this context: their communication
abilities cannot handle social spaces (distances of several metres and groups of people)
and are instead designed for face-to-face individual interactions in quiet environments.
To overcome these limitations, and ultimately aiming at natural man-robot interaction,
the work has several objectives.

First and foremost, we intend to leverage the rich information available in the audio
and visual data streams coming from humans in order to extract verbal and non-verbal
features. These features will be used to enhance the robot's decision-making ability so
that it can smoothly take speech turns and switch from interaction with a group of
people to face-to-face dialogue, and back; a minimal illustrative sketch is given below.
Secondly, online and continual learning of the resulting system will be investigated.
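As a rough illustration only (an assumed design, not the project's actual system; the feature sizes and decision labels are hypothetical), the following sketch shows how per-frame audio and visual feature vectors could be fused and mapped to a turn-taking decision.

```python
# Minimal sketch (assumed design, not the project's system): fuse audio and
# visual feature vectors and predict a turn-taking decision for the robot.
import torch
import torch.nn as nn

AUDIO_DIM, VISION_DIM, N_DECISIONS = 40, 128, 3   # hypothetical feature sizes
# decisions: 0 = keep listening, 1 = take the speech turn, 2 = address the group

class TurnTakingPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.fusion = nn.Sequential(
            nn.Linear(AUDIO_DIM + VISION_DIM, 64), nn.ReLU(),
            nn.Linear(64, N_DECISIONS),
        )

    def forward(self, audio_feats, visual_feats):
        # late fusion by concatenation of per-frame feature vectors
        return self.fusion(torch.cat([audio_feats, visual_feats], dim=-1))

policy = TurnTakingPolicy()
audio = torch.randn(1, AUDIO_DIM)     # e.g. prosodic / voice-activity features
vision = torch.randn(1, VISION_DIM)   # e.g. gaze / head-pose embeddings
decision = policy(audio, vision).argmax(dim=-1)   # index of the chosen action
```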

Outcomes of the project will be implemented on a commercially available social robot
(most likely Pepper or ARI) and validated with several in-situ use cases. A large-scale
data collection will complement the in-situ tests to fuel further research. To address
our overall objectives, candidates should have a good command of deep learning
techniques and tools (including reinforcement/imitation learning) and any combination of
competencies in NLP / dialogue systems, vision and robotics.

### Requirements

- Master or PhD in Computer Science, Machine Learning, Computational
  Linguistics, Mathematics, Engineering or related fields
- Expertise in NLP / dialogue systems. Strong knowledge of current NLP /
  interactive / speech techniques is expected. Previous experience with
  dialogue and interaction data and/or vision data is a strong plus. Knowledge of
  vision and/or robotics is also a plus.
- Strong programming skills: Python/C++ programming of DNN models
  (preferably with PyTorch)
- Expertise in Unix environments
- Good spoken and written command of English is required. *French is
  optional.*
- Good writing skills. For the post-doc position, publications at top venues
  (e.g., ACL, EMNLP, SIGDIAL, NeurIPS, ICLR) are expected.

## Place

Bordered by the left bank of the Rhône, Avignon is one of the most beautiful cities in
Provence and was for a time the capital of Christendom in the Middle Ages. Its past
gives the city its unique atmosphere: dozens of churches and chapels, the 'Palais des
Papes' (palace of the popes, the most important Gothic palace in Europe), the
Saint-Bénézet bridge (the 'pont d'Avignon' of worldwide fame thanks to the song), the
ramparts that still encircle the entire city, and ten museums ranging from ancient times
to contemporary art.

Of the nearly 100k inhabitants of the city, about 10 live in the ancient
town centre surrounded by its medieval ramparts. Avignon is not only the
birthplace of the most prestigious festival of contemporary theatre,
European Capital of Culture in 2000, but also the largest city and
capital of the département of Vaucluse. The region offers a high quality
urban life at a reasonable cost. Additionally,
the region of Avignon offers the opportunity to visit numerous
monuments and natural beauty sites easily accessible in a very short
time (Marseille, Aix, Montpellier, Nice...). Avignon is the ideal destination for
discovering Provence.

LIA is the computer science lab of Avignon University:
http://lia.univ-avignon.fr.

## Conditions

Net monthly salary: 1800-2200 € (depending on the candidate's experience).
Basic healthcare coverage included
(https://en.wikipedia.org/wiki/Health_care_in_France).

Possibility of part-time and/or teleworking.

The position carries no direct teaching load, but if desired, teaching
BSc or MSc level courses is a possibility (paid extra hours), as is
supervision of student dissertation projects.

Initial employment is 12 months, extension possible. For engineers,
shift to a PhD position is possible.

## Applications

To apply, send the following documents *in a single PDF* to
fabrice.lefevre@univ-avignon.fr:

* Statement of research interests that motivates your application
* CV, including the list of publications if any
* Scans of transcripts and academic degree certificates
* MSc/PhD dissertation and/or any other writing samples
* Coding samples or links to your contributions to public code
  repositories, if any
* Names, affiliations, and contact details of up to three people who can
  provide reference letters for you, if any

Back  Top

6-52(2020-05-10) Researcher at GIPSA-Lab, Grenoble, France
The CRISSP team (Cognitive Robotics, Interactive Systems & Speech Robotics) at GIPSA-Lab is looking for a motivated candidate to work on speech synthesis applied to embodied face-to-face interaction. The candidate should have skills in machine learning.
The work is part of the THERADIA project, funded by BPI-France and carried out in partnership with academic laboratories (EMC, LIG) and industrial partners (SBT, ATOS, Pertimm).

Applications will be reviewed on a rolling basis until the position is filled.
Full details on the topics and how to apply: https://bit.ly/3cW1gy9
 
Back  Top

6-53(2020-05-11) Tenure-track researcher at CWI, Amsterdam, The Netherlands

We have an open position for a tenure-track researcher at CWI (https://www.cwi.nl/) within our Distributed & Interactive Systems (DIS) group (https://www.dis.cwi.nl/). 
The focus is on Human-Centered Multimedia Systems (https://www.dis.cwi.nl/research-areas/human-centered-multimedia-systems/) and/or Quality of Experience (QoE) in immersive media (https://www.dis.cwi.nl/research-areas/qoe/).

You can find details about the position and application procedure here: 
https://www.cwi.nl/jobs/vacancies/tenure-track-position-in-multimedia-systems-and-human-computer-interaction (application deadline: July 15, 2020)

If you know of any interested candidates looking for such a position, please share with them. They are welcome to get in touch with me  <p.s.cesar@cwi.nl> concerning any questions prior to any formal application process.

Back  Top

6-54(2020-05-12) Fully-funded 4-year PhD studentships for research in Speech and Language Technologies (SLT) and their Applications, UKRI Centre for Doctoral Training, Sheffield, UK
 

UKRI Centre for Doctoral Training (CDT) in Speech and Language Technologies (SLT) and their Applications 


Department of Computer Science

Faculty of Engineering 

University of Sheffield

 

Fully-funded 4-year PhD studentships for research in Speech and Language Technologies (SLT) and their Applications

** Applications now open for last remaining September 2020 intake places **

Deadline for applications: 31 May 2020. 

What makes the SLT CDT different:

  • Unique Doctor of Philosophy (PhD) with Integrated Postgraduate Diploma (PGDip) in SLT Leadership. 

  • Bespoke cohort-based training programme running over the entire four years providing the necessary skills for academic and industrial leadership in the field, based on elements covering core SLT skills, research software engineering (RSE), ethics, innovation, entrepreneurship, management, and societal responsibility.  

  • The centre is a world-leading hub for training scientists and engineers in speech and language technologies, two core areas within artificial intelligence (AI) that are experiencing unprecedented growth and will continue to do so over the next decade.

  • Setting that fosters interdisciplinary approaches, innovation and engagement with real world users and awareness of the social and ethical consequences of work in this area.


The benefits:

  • Four-year fully-funded studentship covering all fees and an enhanced stipend (£17,000 pa)

  • Generous personal allowance for research-related travel, conference attendance, specialist equipment, etc.

  • A full-time PhD with integrated PGDip incorporating 6 months of foundational SLT training prior to starting your research project 

  • Supervision from a team of over 20 internationally leading SLT researchers, covering all core areas of modern SLT research, and a broader pool of over 50 academics in cognate disciplines with interests in SLTs and their application

  • Every PhD project is underpinned by a real-world application, directly supported by one of over 30 industry partners. 

  • A dedicated CDT workspace within a collaborative and inclusive research environment hosted by the Department of Computer Science

  • Work and live in Sheffield - a cultural centre on the edge of the Peak District National Park which is in the top 10 most affordable UK university cities (WhatUni 2019).


About you:

We are looking for students from a wide range of backgrounds interested in Speech and Language Technologies. 

  • High-quality (ideally first class) undergraduate or masters (ideally distinction) degree in a relevant discipline. Suitable backgrounds include (but are not limited to) computer science, informatics, engineering, linguistics, speech and language processing, mathematics, cognitive science, AI, physics, or a related discipline. 

  • Regardless of background, you must be able to demonstrate strong mathematical aptitude (minimally to A-Level standard or equivalent) and experience of programming.

  • We particularly encourage applications from groups that are underrepresented in technology.

  • Candidates must satisfy the UKRI funding eligibility criteria. Students must have settled status in the UK and have been 'ordinarily resident' in the UK for at least 3 years prior to the start of the studentship. Full details of eligibility criteria can be found on our website.


Applying:

Applications are now sought for the September 2020 intake. 

 

We operate a staged admissions process, with application deadlines throughout the year. The final deadline for applications for the remaining places is 31 May 2020. 

 

Applications will be reviewed within 6 weeks of each deadline and short-listed applicants will be invited to interview. Interviews will be held in Sheffield. In some cases, because of the high volume of applications we receive, we may need more time to assess your application. If this is the case, we will let you know if we intend to do this.

 

See our website for full details and guidance on how to apply: slt-cdt.ac.uk 


For an informal discussion about your application please contact us by email at: sltcdt-enquiries@sheffield.ac.uk


By replying to this email or contacting sltcdt-enquiries@sheffield.ac.uk you consent to being contacted by the University of Sheffield in relation to the CDT. You are free to withdraw your permission in writing at any time.
Back  Top


