ISCA - International Speech
Communication Association



ISCApad #235

Wednesday, January 10, 2018 by Chris Wellekens

6 Jobs
6-1(2017-08-03) AI-NLP Scientist at Sparted, France


 
 
AI – NLP Researcher
 
 
 
COMPANY: SPARTED is an innovative and disruptive startup that is changing the way people learn. We offer companies and organizations a unique and scalable game platform for micro-learning on mobile devices. www.sparted.com
 
MISSION: In the context of an ambitious and strategic project aiming to automatically generate questions from descriptive texts in a variety of semantic contexts, the mission consists in:
• Establishing a state of the art of NLP capabilities relevant to the project
• Designing and identifying the main milestones of a long-term research plan, working collaboratively with a research lab
• Contributing to the development of proofs of concept with our team
• Spreading Machine Learning knowledge inside the team and training our engineers in the specificities of Natural Language Processing
• Designing and administrating flexible, scalable datasets
• Contributing to the company vision
 
REQUIRED SKILLS:
• Depth and breadth of knowledge in Machine Learning and NLP
• Experience solving analytical problems using quantitative approaches
• Ability to manipulate and analyze data from varying sources
• Knowledge of at least one programming language
PROFILE: You have, or are about to defend, a PhD in Machine Learning, AI, Applied Mathematics, Data Science, Statistics, Computer Science or a related technical field with a strong focus on Natural Language Processing and related technologies such as pattern recognition, sequence-to-sequence models, word2vec, etc. You like taking up challenges, teamwork and building awesome products.
 
BONUS SKILLS THAT WOULD BE GREAT:
• French language
• Experience in Text Mining
• Beer pong master
• Demonstrated experience with Natural Language Technologies and engines
• Familiarity with Semantic Technologies
• White Russian expertise
• Extreme skiing or wingsuit practice
 
REMUNERATION: Depending on your status, talent and experience (€30K – xxK). Internship, PhD project (CIFRE) or other.
 
LOCATION: The position is based in Paris, avenue Kléber, next to the Place de l'Etoile, in our so cool top-floor premises with an exceptional view of Paris and the Eiffel Tower.
 
JOIN THE FUN DISRUPTION: SPARTED is at war. The fun rebellion is running against the empire of boredom. SPARTED is strengthening, growing and completing its forces by recruiting superheroes, in addition to an exceptional team of 20 world-class developers, project managers, designers and other performers.
 
Fun disruptors are men and women: enthusiastic personalities invested in the future, smart cookies and bold people who want to continuously evolve, improve themselves and change the world. If you’re made of the right stuff, this position is an opportunity to join a daring project in a successful start-up with a strong identity and mindset, and with great career prospects.
 
APPLICATION: Say hello to us at start@sparted.com / +33 6 52 14 86 9


6-2(2017-08-10) 3 job offers at Laboratoire de Phonétique et Phonologie, Paris, France (1 PhD and 2 postdocs)

Three job offers at Laboratoire de Phonétique et Phonologie, Paris, France (1 PhD and 2 postdocs)

 

In the context of the MoSpeeDi project, funded by the FNS and run in collaboration with the University of Geneva, the HUG and IDIAP, the Laboratoire de Phonétique et Phonologie* (LPP – CNRS/Sorbonne Nouvelle) in Paris is offering 3 positions (1 PhD and 2 postdocs) starting November 2017 or later.

 

The general goal of the project is to better understand the processes and representations at play during speech production, focusing on the final stages of the process, where an encoded linguistic message is transformed into articulated speech. At the interface between linguistic and motor processes, these stages are also associated with different breakdowns classified as Speech Motor Disorders (dysarthria and apraxia of speech).

 

Articulatory, acoustic and speech behaviour data will be experimentally collected and analysed for both healthy speakers and speakers with speech motor disorders in order to (a) improve the characterization of phonetic speech planning and motor speech programming, (b) identify phonetic and speech behaviour markers of these processes, and (c) better isolate and classify speech alterations in speech motor disorders.

 

Within this Swiss-French collaborative project, the LPP is recruiting 1 doctoral student and 2 post-docs to join the team under the supervision of Cécile Fougeron (DR CNRS) and in close collaboration with Simone Falk (MC Sorbonne Nouvelle), and Leonardo Lancia (CR CNRS). See job description and application procedure below.

 

 

Contact: cecile.fougeron@univ-paris3.fr

 

 

 

* The Laboratoire de Phonétique et Phonologie (LPP) is a joint CNRS and Université Sorbonne Nouvelle research unit. It is located in the center of Paris (19 rue des Bernardins, 75005 Paris).

More information on http://lpp.in2p3.fr/Presentation-du-LPP


6-3(2017-08-16) 2 Post-Doc/Research positions in Audio Music Content Analysis at IRCAM, Paris, France

2 Post-Doc/Research positions in Audio Music Content Analysis
Project: 'Dig that Lick: Analyzing large-scale data for melodic patterns in jazz performances'
Place: IRCAM & L2S, Paris, France
Duration: 2 years and 1 year
Start: October 1st, 2017
Salary: according to background and experience


IRCAM ( www.ircam.fr ) and L2S at University Paris Saclay ( www.l2s.centralesupelec.fr ) are jointly offering two new PostDoc positions for our current international project Dig that Lick: Analyzing large-scale data for melodic patterns in jazz performances ( dig-that-lick.eecs.qmul.ac.uk ). The project gathers six different universities across four countries (USA, UK, Germany, France).

The planned dates for the project are: 1 Oct 2017 - 30 Sep 2019.

* About the 'Dig that Lick' project: *
The recorded legacy of jazz spans a century and provides a vast corpus of data documenting its development. Recent advances in digital signal processing and data analysis technologies enable automatic recognition of musical structures and their linkage through metadata to historical and social context. Automatic metadata extraction and aggregation give unprecedented access to large collections, fostering new interdisciplinary research opportunities. This project aims to develop innovative technological and music-analytical methods to gain fresh insight into jazz history by bringing together renowned scholars and results from several high-profile projects. Musicologists and computer scientists will together create a deeper and more comprehensive understanding of jazz in its social and cultural context. We exemplify our methods via a full cycle of analysis of melodic patterns, or licks, from audio recordings to an aesthetically contextualised and historically situated understanding.
More information can be found at: dig-that-lick.eecs.qmul.ac.uk


* Position description: *
Dig that Lick relies on audio music content analysis algorithms. IRCAM & L2S are looking for 2 Post-Doctoral Research assistants (starting Oct. 1st, 2017) for the development of MIR algorithms:
• one 24-month post-doctoral position to develop robust algorithms for automatic melody extraction (AME) from audio signals
• one 12-month post-doctoral position to develop robust algorithms for automatic analysis of harmonic and metrical structure from audio signals

* Required profiles: *
• Strong skills in audio signal processing and in machine learning (the candidate should preferably hold a PhD in one of these fields)
• Strong skills in Matlab and/or Python programming
• Good knowledge of Linux, Windows and Mac OS environments
• High productivity, methodical work, ability to work independently and creatively, excellent programming style
• Strong communication skills in English (French is not required)
• An interest or background in jazz music will be appreciated

The hired researchers will collaborate with the development team and participate in the project activities (evaluation of technologies, meetings, specifications, reports). Strong interactions with teams from the other participating universities are expected.

* Application Procedure / More information: *
Applications must include a detailed CV with a list of publications and a motivation letter. Applications and any other relevant information should be emailed to: H. Papadopoulos [ircam.l2s.digthatlick@gmail.com] and G. Peeters [peeters at ircam dot fr]






6-4(2017-08-29) Several post-doctoral positions at IDIAP, Martigny, Switzerland

Amongst several openings at Idiap, we have *three* post-doctoral positions in
the areas of multilingual speech processing and cross-lingual indexing:
 http://www.idiap.ch/en/join-us/job-opportunities
They are funded by EU and US projects, so there will be plenty of opportunity to
collaborate with partners in the EU and US.

Idiap has well established speech and NLP groups.  We are located in a wine region in the
Swiss Alps, a couple of hours out of Geneva.  The lab functions in English, the locality
in French.



6-5(2017-08-31) Three job openings at the Laboratoire de Phonétique of the Université de Mons, Belgium
***************************************************************************
Three job openings are available at the Laboratoire de Phonétique of the
Université de Mons, Belgium:
(1) Assistant under mandate ('Assistant(e) sous mandat')
(2) Speech-language therapist ('Orthophoniste')
(3) Researcher (doctoral and post-doctoral)
 
***************************************************************************
 
(1) Job offer: 'Assistant(e) sous mandat' (assistant under mandate)
 
The Service de métrologie et sciences du langage (phonetics laboratory) of the
Université de Mons is seeking candidates for a position of assistant under
mandate, starting at the beginning of the 2017-2018 academic year.
 
CANDIDATE PROFILE (M/F):
 
Candidates (M/F) will hold a Master's-level degree (obtained after a programme
totalling at least 300 ECTS credits) or a degree deemed equivalent. They will
be able to demonstrate:
· competences related to the teaching topics of the Service (see:
age/Pages/default.aspx)
· an initial education granting access to the doctoral studies organized by
the Faculty of Psychology and Educational Sciences (direct access for
graduates in psychological and educational sciences, access on file for other
graduates); the precise topic of the doctoral thesis (centred on speech
sciences) will be decided by mutual agreement with the Service.
 
In addition, candidates will demonstrate:
· good teamwork skills, creativity, autonomy and keen scientific curiosity,
· a good command of standard computing tools (spreadsheets, database
management, word processing and other office-suite components),
· a sufficient command of French as an everyday working language,
· a command of English allowing oral scientific exchanges in an international
context and the production of scientific writing.
 
Knowledge of the processing of acoustic phenomena, prior training in
phonetics, command of speech-analysis tools (such as Praat, Childes, etc.),
an interest in foreign-language issues, command of statistical tools,
programming skills, and possession of a category B driving licence and a
personal means of transport are additional assets.
 
RECRUITMENT PROCEDURE:
 
Interested candidates are requested to send, by 11 September 2017 at the
latest, an application file comprising:
· a cover letter,
· a curriculum vitae (including an email address and a phone number that may
be used for contact),
· transcripts for each year of higher education,
· any document deemed useful,
in PDF format (exclusively).
 
After a first evaluation phase based on the application files, a subset of
candidates will be selected for a second phase consisting of a selection
interview:
· Candidates selected for the second phase will be notified by email and/or
phone on 12 September 2017.
· Interviews will take place on 15 September 2017.
 
Starting date: possible from October 2017.
 
JOB DESCRIPTION:
 
Type of contract: a 2-year mandate, renewable twice, with the possibility,
upon express justification, of subsequent exceptional mandates (maximum 4)
of one year each.
 
Participation in the activities of the Service in its missions of:
· teaching
· research
· service
Preparation of a doctoral thesis
 
***************************************************************************
 
(2) Job offer: 'Orthophoniste' (speech-language therapist)
 
 
Position for a specialist in the clinical study and treatment of language
disorders
 
The Service de métrologie et sciences du langage (phonetics laboratory) of the
Université de Mons is seeking a specialist (M/F) in the clinical study and
treatment of language disorders, for a half-time position as 'logopède'
(speech-language therapist).
 
POSITION PROFILE:
 
The recruited person takes part in the language-clinic work carried out by the
laboratory. This work may be:
· pedagogical (practical sessions within courses centred on language: its
specificities, its difficulties, its disorders, and the assessment and
treatment actions that concern it)
· scientific (participation in the research of the Service)
· clinical (involving direct work with patients in an assessment and/or
treatment relationship).
 
Possession of a category B driving licence and a personal means of transport
are additional assets.
 
Skills:
· Teamwork skills, creativity, autonomy, scientific curiosity.
· Good command of standard computing tools (spreadsheets, database
management, word processing and other office-suite components).
· Native command of written and spoken French.
· At least a passive command of scientific English, at least in writing.
 
Required degrees:
Field of initial education: language sciences (speech-language pathology,
logopedics, psychology of language, etc.) as basic training, or at the very
least as complementary training.
Entry level:
· either a professional Bachelor's degree (bac+3, 180 ECTS credits),
· or a Master's degree (bac+5, 300 ECTS credits).
 
Required professional experience:
Clinical experience, knowledge of phonetics, command of speech-analysis tools
(such as Praat, Childes, etc.), and an interest in foreign-language issues.
 
Type of contract: a 12-month fixed-term contract (CDD), with the possibility
of renewal as a permanent contract (CDI)
Working hours: half-time (19h/week)
Recruitment grade: 1er agent spécialisé (with a Bachelor's) / Attaché (with a
Master's)
Minimum gross salary offered: 1er agent spécialisé: €1,119.36 / Attaché:
€1,487.44
Starting date: possible from October 2017
 
APPLICATIONS:
 
Interested candidates are requested to send, by 11 September 2017 at the
latest, an application file comprising:
· a cover letter,
· a curriculum vitae (including an email address and a phone number that may
be used for contact),
· transcripts for each year of higher education,
· any document deemed useful,
in PDF format (EXCLUSIVELY), to the address:
 
Recruitment procedure:
After a first evaluation phase based on the application files, a subset of
candidates will be selected for a second phase consisting of a selection
interview:
· Candidates selected for the second phase will be notified by email and/or
phone on 12 September 2017.
· Interviews will take place on 15 September 2017.
 
 
***************************************************************************
 
(3) Researchers
 
The Service de métrologie et sciences du langage (phonetics laboratory of the
Université de Mons) is seeking a specialist (M/F) in the study of human speech
willing to contribute to the development of the BIOVOC project, in its part
focused on the effects on language and speech of the situational pressure
(cognitive load, fatigue, emotions, etc.) exerted on the human subject by
complex process-control tasks (driving land vehicles, piloting aircraft,
managing air-traffic flows, etc.).
 
Two position profiles are available in this context.
 
**********PROFILE 1: Doctoral researcher***********
 
CANDIDATE PROFILE (M/F):
 
Field of initial education: language sciences (linguistics, logopedics,
psychology of language, etc.) as basic training, or at the very least as
in-depth complementary training.
 
Entry level: at least a Master's degree ('bac+5', 300 ECTS credits).
 
Transversal skills:
·      Teamwork skills, creativity, autonomy, scientific curiosity.
·      Good command of standard computing tools (spreadsheets, database
management, word processing).
·      A sufficient command of French as an everyday working language.
·      A command of English allowing oral scientific exchanges in an
international context and the production of scientific writing.
 
An interest in foreign-language issues, command of other languages, knowledge
of phonetics (in particular the objective analysis of speech), and knowledge
and skills in applied mathematics (in particular statistics) are additional
assets.
 
Track record:
The recruited person holds grades equal to or higher than 'grande distinction'
at Bachelor's and Master's level.
 
Functional profile:
The recruited person is granted a one-year doctoral scholarship.
During this term, they undertake to seek more extensive funding by applying
for positions such as FRIA or FRESH researcher and/or FNRS 'aspirant'.
At the end of the first year, if no such alternative funding has been
obtained, the person is evaluated by the supervisory committee. On this basis,
they may or may not be granted an additional one-year term. During this new
term, they undertake to continue seeking external funding for their doctorate.
In no case may the cumulative duration of the doctoral scholarships exceed
4 years.
Starting date: as soon as possible.
 
 
*******PROFILE 2: Post-doctoral researcher******
 
CANDIDATE PROFILE (M/F):
Field of initial education: language sciences (linguistics, logopedics,
psychology of language, etc.) as basic training, or at the very least as
in-depth complementary training.
 
Entry level: doctorate
 
Transversal skills:
·      Teamwork skills, creativity, autonomy, scientific curiosity.
·      Good command of standard computing tools (spreadsheets, database
management, word processing).
·      A sufficient command of French as an everyday working language.
·      A command of English allowing oral scientific exchanges in an
international context and the production of scientific writing.
 
An interest in foreign-language issues, command of other languages, knowledge
of phonetics (in particular the objective analysis of speech), and knowledge
and skills in applied mathematics (in particular statistics) are additional
assets.
 
Track record:
The recruited person holds high grades at Bachelor's and Master's level and,
where applicable, a high grade or numerical evaluation for the doctorate.
Their thesis report attests to high-level scientific qualities. They have a
number of scientific publications deemed significant.
 
Functional profile:
The selected person is hired for a maximum post-doctoral duration of
18 months.
During the post-doctoral stay, the researcher (M/F) works on the co-writing
of scientific articles suitable for publication in high-impact journals of
the field.
The articles thus produced are signed with the researcher as first author,
the head of the Service as last author, and the other contributors in
intermediate positions, in a consensually determined order.
A minimum of 4 publications is expected for a stay of the maximum duration.
A first six-month fixed-term contract is established. Subject to a
satisfactory evaluation by the Service, this contract may be extended.
During the post-doctoral stay, the researcher may, with the agreement of the
Service, take steps towards remaining within the Service after the end of the
stay; they then benefit from the support of the Service in their applications
to scientific funding bodies (for example, at the FNRS: post-doctoral
researcher, chargé de recherche, chercheur qualifié, etc.).
 
Starting date: as soon as possible.
 
 
RECRUITMENT PROCEDURE
 
Interested candidates are requested to send, by 11 September 2017 at the
latest, an application file comprising:
· a cover letter,
· a curriculum vitae (including an email address and a phone number that may
be used for contact),
· the thesis defence report,
· the publications deemed most significant,
· transcripts for each year of higher education,
· any document deemed useful,
in PDF format (exclusively), to the address:
 
After a first evaluation phase based on the application files, a subset of
candidates will be selected for a second phase consisting of a selection
interview:
·      Candidates selected for the second phase will be notified by email
and/or phone on 12 September 2017.
·      Interviews will take place on 15 September 2017.
 
SCIENTIFIC PROJECT
 
Concepts such as fatigue or stress are frequently invoked both in the life
sciences and in the humanities. They are characterized not only by the fact
that their scope extends to human biochemistry as well as to the human psyche,
but above all by the idea that an action on the mind can have repercussions
on the body, and vice versa. These notions nevertheless remain variably
defined and diversely objectified, as does, in this context, the interaction
between physiology and psyche.
 
The BIOVOC project aims to elucidate these complex relationships by studying
the joint evolution, in a within-subject approach, of three types of
variables: (i) situational variables (both observed and experimentally
induced); (ii) biological markers of the state of the human subject (a
metabonomic approach and specific biomarkers recorded in various biofluids);
(iii) measures revealing the subject's language processing (management of
speech in production and in reception).
The contexts in which the observations are collected are those of the control
of complex processes by the human subject, especially in aeronautics, a
domain that is a rich source of 'problem situations' likely to give rise to
such phenomena.
 
The research to be carried out within the framework of the position offered
focuses more particularly on the situational variables, and specifically
targets those related to the languages used by the subject. It is known today
that the physical reality of speech sounds is influenced, among other things,
by various factors relating to language. These may be linked to the subject's
community background (for example, diatopic, diastratic or diachronic
variability), or to exogenous actions deliberately aimed at modifying the
phonic characteristics of speech sounds, for example in the context of
teaching, learning and/or using non-native languages. Other determinants,
endogenous to the subject, may also come into play, whether they originate in
the cognitive sphere (command of the languages, multilingual expertise, etc.)
or in the affective sphere (the subject's personal attitude towards the
languages used).
 
The studies that, in various ways, demonstrate the effect of these endogenous
factors on vocal productions open the way to a less descriptive positioning,
one more centred on the indexical value of the observations made: since these
factors affect the vocal signal, detecting their marks in the signal makes it
possible to characterize the speaker's state from the analysis of phonic
productions alone. The transport sector, and aeronautics in particular, has
become increasingly interested in these perspectives over recent decades; at
a time when many incidents or accidents are attributable to human factors
rather than to technical faults, the development of research that can
contribute to alert systems able to detect impairments of the pilot's
functional state on the basis of variations in the vocal signal alone stands
as a strategic challenge. While several studies have indeed demonstrated the
interest of these perspectives, it must nevertheless be noted that their
results are extremely diverse, and sometimes contradictory. This is probably
explained, on the one hand, by the insufficient overall volume of data
collected and, on the other hand, by the considerable methodological
diversity that characterizes the field. From this point of view, three
dimensions appear to require particular attention. First, this research is
usually restricted to English-language vocal productions, leaving aside the
other languages of aeronautical communication and, ipso facto, neglecting
possible interactions between the language factor and the various factors
studied; second, few studies take into consideration the frequently
multilingual character of aeronautical communications and the fact that
operators often have to express themselves in a non-native language; third,
none examines the problem of the differential loss of phonic competence in
the L2 and the L1 under adverse communication conditions.
 
The research to be carried out within the framework of the position offered
will consequently focus both on the effects exerted on multilingual
performance by the situational variables linked to complex process-control
situations, and on the effects of various types of multilingualism on the
efficiency of the control of complex processes managed in multilingual
contexts.
 
 
***************************************************************************
Dr. Véronique Delvaux
Chercheur qualifié FNRS (FNRS Research Associate)
Chargée de cours (Lecturer), UMONS
Service de Métrologie et Sciences du Langage
Local -1.7, Place du Parc, 18, 7000 Mons
+3265373140
age/Pages/VeroniqueDelvaux.aspx

6-6(2017-09-01) TheVoice project - PhD Offers at IRCAM, Paris, France

TheVoice project - PhD Offers

TheVoice project (2017-2021), funded by the French National Research Agency (ANR), offers two PhD theses. TheVoice project aims to create voices for audiovisual production in the creative, cultural, and entertainment industries. The scientific objective of the project is to study the voices of professional actors, who are naturally expressive, in order to create innovative voice design solutions. The consortium, composed of recognized laboratories and industrial partners (Ircam, LIA, Dubbing Brothers), aims to consolidate a position of excellence for “Made-in-France” research and digital technologies, and to promote French culture all over the world.

Deep learning for voice recommendation

The objective of the thesis is to create a voice recommendation system based on deep neural networks, by exploiting the entire “vocal palette” of professional actors and integrating information related to acoustics, perception (what the listener perceives, without context), and reception (what the spectator perceives “in situation” in a movie depending on his social and cultural expectations).

 

Contacts: jean-francois.bonastre@univ-avignon.fr, Nicolas.Obin@ircam.fr

Expressive voice identity conversion

 

The objective of the thesis is to create a voice identity conversion system able to reproduce the voice of professional actors from naturally expressive, real acted conditions, by exploiting the audio tracks of movies, series, etc. The thesis will build on Ircam's long-term experience in voice analysis and transformation, and on the existing voice conversion system developed at Ircam, currently used for professional productions.

Contact: Nicolas.Obin@ircam.fr



Candidates must have a Master's degree in computer science (or equivalent) with skills in audio signal processing, machine learning, and programming (Python, C++). Prior experience in speech processing would be greatly appreciated.

Applications (CV + motivation letter) must be sent before 15/09/2017.


6-7(2017-09-09) Positions at Reykjavik University's School of Science and Engineering, Iceland

Applications are invited for a research position in text-to-speech systems at the Language and Voice Laboratory (lvl.ru.is) at Reykjavik University's School of Science and Engineering. The position is sponsored by the Icelandic Language Technology fund grant 'Environment for building text-to-speech synthesis for Icelandic.' The main aim of the work will be to set up and advance research on back-end architectures for parametric speech synthesis. The successful candidate will work closely with the other members of the lab, who focus on language-specific problems such as text normalization, phonemic analysis and phrasing. Even though the main focus of the work will be Icelandic, working with other languages is welcome. The successful candidate will contribute to the academic output of the lab as well as to the publication of an open TTS environment for Icelandic.
 
Skills and qualifications
        • MSc/PhD degree in applied mathematics, statistics, computational engineering or computer science is preferable
        • Knowledge of mathematical modeling is preferable (e.g. differential equations, neural networks, linear systems)
        • Good skills in applying and analysing algorithms
        • Good knowledge of Linux is preferable
        • Ability to use shell scripts is advantageous (bash, awk, sed)
        • Good programming skills are necessary (e.g. C++, Java or Python)
 
Fixed-term
The funds for this post are available for 24 months in the first instance.
 
Further information regarding the position is provided by Jón Guðnason (jg@ru.is), associate professor at the School of Science and Engineering, and Anna Björk Nikulásdóttir, research specialist at the School of Science and Engineering. Interviews will start September 14th, but applications received after that date will be taken into consideration if the position has not been filled by December 1st, 2017. Applications must be submitted on the Reykjavík University website, below. All inquiries and applications are treated as confidential.

http://radningar.hr.is/storf/ViewJobOnWeb.aspx?jobid=3005


6-8(2017-09-09) PhD position in Computational Linguistics for Ambient Intelligence, University Grenoble Alpes, France

Keywords: Natural language understanding, decision support system, smart
home

The Laboratoire d'Informatique de Grenoble (LIG) of the University
Grenoble Alpes, Grenoble, France invites applications for a PhD position
in Computational Linguistics for Ambient Intelligence.

University of Grenoble Alpes is situated in a high-tech city located at
the heart of the Alps, in outstanding scientific and natural
surroundings. It is 3h by train from Paris, 2h from Geneva, and less
than 1h from Lyon international airport.

The position starts in September 2017 and ends in July 2020 and is
proposed in the context of the national project Vocadom
(http://vocadom.imag.fr/), whose aim is to build technologies that make
natural hands-free speech interaction with a home automation system
possible from anywhere in the home, even in adverse conditions
[Vacher2015].

The aim of the PhD will be to build a new generation of situated spoken
human-machine interaction in which sentences uttered by a human are
understood within the context of the interaction in the home. The
targeted application is a distant-speech, hands-free, ubiquitous voice
user interface that makes the home automation system react to voice
commands [Chahuara2017]. The system should be able to process possibly
erroneous outputs from an ASR (Automatic Speech Recognition) system to
extract the meaning of a voice command, and to decide which command to
execute or which feedback to send to the user. The challenge will be to
constantly adapt the system to new lexical phrases (no a priori
grammar), new situations (e.g., unseen user or context) and changes in
the house (e.g., a new device, or a device out of order). In this work,
we propose to extend classical SLU/NLU (Spoken/Natural Language
Understanding) approaches by including non-linguistic contextual
information in the NLU process to tackle ambiguity, and to borrow
zero-shot learning techniques [Ferreira2015] to extend the lexical
space online. Reinforcement learning is targeted to adapt the models to
the user(s) throughout the use of the system [Mnih2015].  The candidate
will be strongly encouraged to publish their progress at the main
venues of the field (ACL, Interspeech, Ubicomp).  The PhD candidate
will also be involved in experiments including a real smart home and
real users (elderly people and people with visual impairment)
[Vacher2015].

REFERENCES :
[Mnih2015] Mnih, Kavukcuoglu et al.  Human-level control through deep
reinforcement learning. Nature 518, 529-533.

[Chahuara2017] P. Chahuara, F. Portet, M. Vacher. Context-aware decision
making under uncertainty for voice-based control of smart home. Expert
Systems with Applications, Elsevier, 2017, 75, pp. 63-79.

[Ferreira2015] E. Ferreira, B. Jabaian, F. Lefevre. Online adaptative
zero-shot learning spoken language understanding using word-embedding.
Acoustics, Speech and Signal Processing (ICASSP), 2015.

[Vacher2015] M. Vacher, S. Caffiau, F. Portet, B. Meillon, C. Roux, E.
Elias, B. Lecouteux, P. Chahuara. Evaluation of a context-aware voice
interface for Ambient Assisted Living: qualitative user study vs.
quantitative system evaluation. ACM - Transactions on Speech and
Language Processing, Association for Computing Machinery, 2015,
pp.5:1-5:36.

JOB REQUIREMENTS AND QUALIFICATIONS

- Master's degree in Computational Linguistics or Artificial
Intelligence (Computer Science can also be considered)
- Solid programming skills,
- Good background in machine learning,
- Excellent English communication and writing skills,
- Good command of French (mandatory),
- Experience in experimentation involving human participants would be a
   plus
- Experience in dialogue systems would be a strong plus

Applications should include:

- Cover letter outlining interest in the position
- Names of two referees
- Curriculum Vitae (CV) (with publications if applicable)
- Copy of the university marks (grade list)

and be sent to michel.vacher@imag.fr and francois.portet@imag.fr

Research Group Website : http://getalp.imag.fr
Research project website : http://vocadom.imag.fr/



6-10(2017-09-06) Research positions at Language and Voice Laboratory, Reykjavik, Iceland

Applications are invited for a research position in text-to-speech systems at the Language and Voice Laboratory (lvl.ru.is) at Reykjavik University's School of Science and Engineering. The position is sponsored by the Icelandic Language Technology fund grant 'Environment for building text-to-speech synthesis for Icelandic.' The main aim of the work will be to set up and advance research on back-end architectures for parametric speech synthesis. The successful candidate will work closely with the other members of the lab, who focus on language-specific problems such as text normalization, phonemic analysis and phrasing. Even though the main focus of the work will be on Icelandic, working with other languages is welcome. The successful candidate will contribute to the academic output of the lab as well as to the publication of an open TTS environment for Icelandic.
 
Skills and qualifications
        • MSc/PhD degree in applied mathematics, statistics, computational engineering or computer science is preferable
        • Knowledge of mathematical modeling is preferable (e.g. differential equations, neural networks, linear systems)
        • Good skills in applying and analysing algorithms
        • Good knowledge of Linux is preferable
        • Ability to use shell scripts is advantageous (bash, awk, sed)
        • Good programming skills are necessary (e.g. C++, Java or Python)
 
Fixed-term
The funds for this post are available for 24 months in the first instance.
 
Further information regarding the position is provided by Jón Guðnason (jg@ru.is), associate professor at the School of Science and Engineering, and Anna Björk Nikulásdóttir, research specialist at the School of Science and Engineering. Interviews will start on September 14th, but applications received after that date will be taken into consideration if the position has not been filled by December 1st, 2017. Applications must be submitted on the Reykjavík University website (link below). All inquiries and applications are treated as confidential.

http://radningar.hr.is/storf/ViewJobOnWeb.aspx?jobid=3005


6-11(2017-09-08) 2 Post-Doc/Research positions in Audio Music Content Analysis at IRCAM Paris France

2 Post-Doc/Research positions in Audio Music Content Analysis
Project: 'Dig that Lick: Analyzing large-scale data for melodic patterns in jazz performances'
Place: IRCAM & L2S, Paris, France
Duration: 2 years and 1 year
Start: October 1st, 2017
Salary: according to background and experience


Dear colleagues, dear friends,

IRCAM ( www.ircam.fr ) and L2S at University Paris Saclay ( www.l2s.centralesupelec.fr ) are jointly offering two new PostDoc positions for our current international project Dig that Lick: Analyzing large-scale data for melodic patterns in jazz performances ( dig-that-lick.eecs.qmul.ac.uk ). The project gathers six different universities across four countries (USA, UK, Germany, France).

The planned dates for the project are: 1 Oct 2017 - 30 Sep 2019.

* About the 'Dig that Lick' project: *
The recorded legacy of jazz spans a century and provides a vast corpus of data documenting its development. Recent advances in digital signal processing and data analysis technologies enable automatic recognition of musical structures and their linkage through metadata to historical and social context. Automatic metadata extraction and aggregation give unprecedented access to large collections, fostering new interdisciplinary research opportunities. This project aims to develop innovative technological and music-analytical methods to gain fresh insight into jazz history by bringing together renowned scholars and results from several high-profile projects. Musicologists and computer scientists will together create a deeper and more comprehensive understanding of jazz in its social and cultural context. We exemplify our methods via a full cycle of analysis of melodic patterns, or licks, from audio recordings to an aesthetically contextualised and historically situated understanding.
More information can be found at: dig-that-lick.eecs.qmul.ac.uk


* Position description: *
Dig that Lick relies on audio music content analysis algorithms. IRCAM & L2S are looking for 2 Post-Doctoral Research assistants (starting Oct. 1st, 2017) for the development of MIR algorithms:
• one 24-month post-doctoral position to develop robust algorithms for automatic melody extraction (AME) from audio signals
• one 12-month post-doctoral position to develop robust algorithms for automatic analysis of harmonic and metrical structure from audio signals

* Required profiles: *
• Strong skills in audio signal processing and in machine learning (the candidate should preferably hold a PhD in one of these fields)
• Strong skills in Matlab and/or Python programming
• Good knowledge of Linux, Windows and Mac OS environments
• High productivity, methodical work, ability to work independently and creatively, excellent programming style
• Strong communication skills in English (French is not required)
• Interest in or background in jazz music will be appreciated

The hired researchers will collaborate with the development team and participate in the project activities (evaluation of technologies, meetings, specifications, reports). Strong interactions with teams from the other participating universities are expected.

* Application Procedure / More information: *
Applications must include a detailed CV with a list of publications and a motivation letter. Applications and any suitable information are to be emailed to: H. Papadopoulos [ircam.l2s.digthatlick@gmail.com] and G. Peeters [peeters at ircam dot fr]


6-12(2017-09-11) Research Scientist - Speech at Mitsubishi Electric Research Laboratories (MERL), Cambridge, MA, USA


      Research Scientist - Speech at Mitsubishi Electric Research Laboratories (MERL),

                                                 in  Cambridge, MA, USA

MERL's Speech & Audio Team is seeking an exceptional researcher in the areas of audio, speech, and language processing to work on robust acquisition, recognition and understanding. We are looking for candidates with a strong background in advanced machine learning techniques for speech and language processing.

Our research focuses on cutting-edge projects such as speech and sound separation, deep-learning-based microphone array processing, end-to-end speech recognition, and audio-visual question answering. As a member of our team, you will conduct original research and contribute to ongoing initiatives within the team.

Responsibilities

  • Conduct innovative research aiming to redefine the cutting edge in speech and audio processing.
  • Publish results in major international conferences and peer-reviewed journals.
  • Maintain an influential presence in the academic community, for example via participation in technical and organizing committees.
  • Assist in the patenting of developed technology and its transfer to R&D labs in Japan.

 Qualifications

  • Ph.D. in Computer Science, Electrical Engineering, or a closely related field.
  • Thorough knowledge and expertise in speech technologies.
  • Strong publication record demonstrating innovative research achievements.
  • Knowledge and experience with relevant machine learning and optimization techniques.
  • Excellent programming skills, particularly in Python and deep learning toolkits.

Apply online at: https://merl.workable.com/jobs/516915/candidates/new


6-13(2017-09-14) Post-doctoral position in NLP at Loria, Nancy, France

        
Post-doctoral position in NLP


Loria, a computer science lab in Nancy, France, has a 12-month funded full-time post-doctoral researcher position starting in October 2017. The position is funded by AMIS (Access Multilingual Information OpinionS), a CHIST-ERA project (http://deustotechlife.deusto.es/amis/).
The topic of the post-doc is the automatic comparison of multilingual opinions in videos. Two videos in two different languages concerning the same topic have to be compared. One of the videos is summarized and translated into the language of the second; the second is then summarized, and the opinions of the two original videos are compared in terms of emotion labels such as anger, disgust, fear, joy, sadness and surprise. They should also be compared in terms of basic sentiments.
Social networks will be used to reinforce the analysis of the contents in terms of opinions and sentiments.
The AMIS group will make the video summaries available as text. The candidate will work on NLP, but skills in video analysis will be appreciated.
The applicant will also contribute to other tasks in collaboration with the other partners of the AMIS project. The successful candidate will join the SMarT research team and will be supervised by Prof. Kamel Smaïli, Dr D. Langlois and Dr D. Jouvet. The applicant will also work with Dr O. Mella and Dr D. Fohr.
Location: Loria, Nancy (France).  Duration: October 2017 - September 2018.  Net salary: from 1800 to 2400 Euros per month.
The ideal applicant should have:

- A PhD in NLP, opinion and sentiment mining, or another strongly related discipline
- A very solid background in statistical machine learning
- Strong publications
- Solid programming skills to conduct experiments
- An excellent level of English

Applicants should send to smaili@loria.fr:

- A CV
- A cover letter outlining the motivation
- Three most representative papers


6-14(2017-09-28) Senior research engineer at Spitch AG, Zurich, Switzerland

The company:

    Spitch AG (www.spitch.ch) is an international company focused on speech technologies based in Zurich, with offices in London, Moscow, Madrid, and Milan. The current position is for the Zurich headquarters.

  • Job brief:

    We are looking for a Senior Research Engineer to improve our acoustic modeling and voice biometrics technologies. You will be an active member of the core R&D team and will help to develop training recipes, optimize the related existing processes and tools, and bring new technologies to the company.

    Your responsibility within the team will increase gradually: you will begin with understanding the basic technology of the company and using it to generate acoustic and speaker verification models for internal projects, and, after some adaptation time, you will work on our core technology to keep it at the state-of-the-art level.

    In this role, you should be able to work independently and also to discuss problems and solutions with the rest of the team. You should have excellent organization and problem-solving skills.

  • Responsibilities:
    • Acoustic modeling / speaker verification tools improvement and training recipes enhancement
    • Training and testing models for the company final products
    • Researching the latest modeling techniques
    • Cooperating in customer projects
    • Documenting all remarkable experiments
    • Interpreting data and analyzing results using statistical techniques
    • Providing technical guidance to move research prototypes to production system
  • Required Skills:
    • Graduate degree (Masters or PhD) in Computer Science, Electrical Engineering, Applied Mathematics or equivalent preferred
    • Strong background in Digital Signal Processing and neural networks
    • Solid programming skills
    • 2+ years experience in Python / Perl
    • Knowledge of C/C++
    • Extensive experience in Linux environments
    • Business level proficiency in English
    • Ability to follow existing workflow
  • Desired Skills:
    • Proven working experience in software development
    • 2+ years experience in speaker recognition
    • 2+ years experience in acoustic modeling for speech recognition
    • Experience with open source projects
  • Contract details:

    This is a permanent position, with a desired start date in November.

  • Contact:

    If you are interested in this offer and believe you are a good match, please send us a copy of your updated CV to hr@spitch.ch


6-15(2017-10-03) Assistant professor in phonetics/laboratory phonology, University of Delaware, USA

JOB AD FALL 2017

 

University of Delaware seeks an assistant professor in phonetics/laboratory phonology

The Department of Linguistics and Cognitive Science at the University of Delaware invites applications for a full-time tenure-track faculty position at the rank of Assistant Professor in the area of phonetics and/or laboratory phonology.  The position will commence September 1, 2018.  The successful candidate will have a strong research program in phonetics and/or laboratory phonology and will demonstrate a commitment to excellence in teaching.  Candidates must be able to teach graduate and undergraduate courses in phonetics and ideally in phonology as well.  A secondary specialization in sociolinguistics, endangered languages, first or second language acquisition, communication disorders, or a related field is desirable.

Applicants for this position should have a PhD in Linguistics or related field, or should expect to complete their degree requirements prior to appointment. Applicants should apply on-line at http://apply.interfolio.com/ and should submit a letter of application, a curriculum vitae, a statement describing their research program, a statement of teaching experience and philosophy (including evidence of teaching ability if available), and sample publications.  They should also arrange for submission of at least three letters of recommendation. We particularly welcome applications from members of underrepresented minorities. Review of applications will begin on December 1, 2017; for fullest consideration, all application materials should be submitted by that date.  Questions should be directed to Irene Vogel at: ivogel@udel.

The University of Delaware combines leadership in research with a commitment to undergraduate and graduate education.  The main campus in Newark, Delaware provides the amenities of a vibrant college town with convenient access to the major cities of the East Coast.  The Department of Linguistics and Cognitive Science (https://www.lingcogsci.udel.edu) offers a PhD in Linguistics, a Master's degree in Linguistics and Cognitive Science, a Bachelor of Science degree in Cognitive Science (with a concentration in Pre-Professional Speech-Language Pathology), and a Bachelor of Arts degree in Linguistics. The Department comprises 11 full-time faculty and enrolls approximately 30 PhD students, 25 MA students, 300 undergraduate majors and 60 minors.


The University of Delaware is an equal opportunity/affirmative action employer and Title IX institution.  Employment offers will be conditioned upon successful completion of a criminal background check.  A conviction will not necessarily exclude an applicant from employment.  For the University's complete non-discrimination statement, please visit http://www.udel.edu/home/legal-notices/


6-16(2017-10-05) Language Resources Project Manager - Junior (m/f), ELDA, Paris, France

The European Language Resources Distribution Agency (ELDA), a company specialized in Human Language Technologies within an international context, is currently seeking to fill an immediate vacancy for a Language Resources Project Manager - Junior position. This offers excellent opportunities for young, creative, and motivated candidates wishing to participate actively in the Language Engineering field.

Language Resources Project Manager - Junior (m/f)

Under the supervision of the Language Resources Manager, the Language Resources Project Manager - Junior will be in charge of the identification of Language Resources (LRs), the negotiation of rights in relation with their distribution, as well as the data preparation, documentation and curation.

 The position includes, but is not limited to, the responsibility of the following tasks:

  • Identification of LRs and Cataloguing
  • Negotiation of distribution rights, including interaction with LR providers, drafting of distribution agreements, definition of prices of language resources to be integrated in the ELRA catalogue or for research projects
  • LR Packaging within production projects
  • Data preparation, documentation and curation

 Profile:

  • PhD in computational linguistics or similar fields
  • Experience in managing NLP tools
  • Experience in project management and participation in European projects, as well as practice in contract and partnership negotiation at an international level, would be a plus
  • Good knowledge of script programming (Perl, Python or other languages)
  • Good knowledge of Linux
  • Dynamic and communicative, flexible to combine and work on different tasks
  • Ability to work independently and as part of a team
  • Proficiency in English, with strong writing and documentation skills. Communication skills required in a French-speaking working environment
  • Citizenship of (or residency papers) a European Union country

 

All positions are based in Paris. Applications will be considered until the position is filled.

Salary is commensurate with qualifications and experience.
Applicants should email a cover letter addressing the points listed above together with a curriculum vitae to:

ELDA
9, rue des Cordelières
75013 Paris
FRANCE
Fax: 01 43 13 33 30
Mail: job@elda.org

ELDA is acting as the distribution agency of the European Language Resources Association (ELRA). ELRA was established in February 1995, with the support of the European Commission, to promote the development and exploitation of Language Resources (LRs). Language Resources include all data necessary for language engineering, such as monolingual and multilingual lexica, text corpora, speech databases and terminology. The role of this non-profit membership Association is to promote the production of LRs, to collect and to validate them and, foremost, make them available to users. The association also gathers information on market needs and trends.

For further information about ELDA and ELRA, visit:
http://www.elra.info


6-17(2017-10-14) Postdoc: Improving speech tools for pronunciation assessment in language learning, Loria, Nancy, France
Postdoc: Improving speech tools for pronunciation assessment in language learning
 
Location: LORIA (Nancy, France)
Team: MULTISPEECH
Duration: 16 months
Start: Winter 2018
Contacts:
Slim Ouni  slim.ouni@loria.fr
Denis Jouvet  denis.jouvet@loria.fr
 
Context
MULTISPEECH studies various aspects of speech modeling, for both speech recognition and speech synthesis. The approaches developed combine signal processing and statistical models. The most recent models rely on neural networks and deep learning, which have brought substantial performance gains in many domains.
 
Speech technologies can also be used for language learning. The objective is then to detect learners' pronunciation errors (pronunciation of sounds and intonation), to make diagnoses, and to help learners improve their pronunciation by providing them with multimodal feedback (textual, audio and visual). Several recent collaborative projects have addressed this topic and have made it possible to build corpora of learner speech, to analyze learners' non-native speech, and to investigate the reliability of automatic feedback to the learner.
 
Within the e-FRAN METAL collaborative project, which deals with the use of digital technology in education, these techniques will be adapted, enriched and deployed to help pupils learn a foreign language at school. Experiments are planned in middle-school and high-school classes.
 
Missions
The work will focus on improving and developing speech tools for pronunciation assessment, both at the level of sounds and at the level of intonation. An important point to study in depth concerns the reliability of the processing steps and of the measurements made (e.g., sound durations obtained from phonetic segmentation, and fundamental frequency values), and the use of this reliability information when diagnosing pronunciation errors and providing feedback to learners.
 
After adapting the tools and models to the non-native context of foreign-language learners, the largest part of the project will be devoted to more innovative aspects, namely the study of deep-learning-based approaches for detecting pronunciation errors, and the estimation of the uncertainty of the measurements made (sound durations and fundamental frequency values) in order to ensure the reliability of the diagnoses.
 
Profile and required skills
- Knowledge of speech processing, speech recognition, or speech synthesis
- Knowledge of, or ideally proficiency with, a speech recognition toolkit
- Experience with neural networks, ideally including proficiency with a neural network toolkit
- Good computer science and programming skills
 
The full announcement can be found here:
French:
 
English

6-18(2017-10-14) Engineer: Consolidation and adaptation of speech tools for language learning, Loria, Nancy, France
Engineer: Consolidation and adaptation of speech tools for language learning
 
 
Location: LORIA (Nancy, France)
Team: MULTISPEECH
Duration: 12 months (extension possible)
Start: Autumn 2017

Contact:
Slim Ouni  slim.ouni@loria.fr
 
 
Context
MULTISPEECH studies various aspects of speech modeling, for both speech recognition and speech synthesis. The approaches developed combine signal processing and statistical models. The most recent models rely on neural networks and deep learning, which have brought substantial performance gains in many domains.
 
Speech technologies can also be used for language learning. The objective is then to detect learners' pronunciation errors (pronunciation of sounds and intonation), to make diagnoses, and to help learners improve their pronunciation by providing them with multimodal feedback (textual, audio and visual). Several recent collaborative projects have addressed this topic and have made it possible to build corpora of learner speech, to analyze learners' non-native speech, and to investigate the reliability of automatic feedback to the learner.
 
Within the e-FRAN METAL collaborative project, which deals with the use of digital technology in education, these techniques will be adapted, enriched and deployed to help pupils learn a foreign language at school. Experiments are planned in middle-school and high-school classes.
 
Missions
In this context, the first mission will consist in consolidating the speech tools for pronunciation assessment and adapting them to the usage planned in the project. This will require collecting teenage voices (corresponding to the levels targeted for the experiments in middle and high schools) and adapting the acoustic models to these voices. Given the computing equipment available in the classrooms, a client-server mode of operation will be preferred.
 
The rest of the work will focus on developing the full version of the pronunciation learning system and on experimenting with it in middle-school and high-school classes. The system will have to integrate the presentation of examples, the assessment of pronunciations, and feedback to the learner on the quality of their pronunciations.
 
Profile and required skills
- Knowledge of speech processing, speech recognition, or speech synthesis
- Knowledge of, or ideally proficiency with, a speech recognition toolkit
- Good computer science and programming skills
 
The full announcement can be found here:
French:
 
English

6-19(2017-10-17) Call for Multiple PhD positions in Human-Machine Interaction funded by the ANIMATAS Innovative Training Network
Call for Multiple PhD positions in Human-Machine Interaction funded by the ANIMATAS Innovative Training Network 

ANIMATAS (MSCA-ITN-2017 - 765955 2) is an H2020 Marie Sklodowska-Curie European Training Network funded by Horizon 2020 (the European Union's Framework Programme for Research and Innovation), coordinated by Université Pierre et Marie Curie (Paris, France). 
 
ANIMATAS partners are: UPMC (coord.), Uppsala Univ., Institut Mines Telecom, KTH, EPFL, INESC-ID, Jacobs Univ. and SoftBank Robotics Europe.

Scientific and technical objectives

ANIMATAS focuses on the following objectives:

1) Exploration of fundamental questions relating to the interconnections between robots and virtual characters' appearance, behaviours and perception by people

2) Development of new social learning mechanisms that can deal with different types of human intervention and allow robots and virtual characters to learn in an unconstrained manner 

3) Development of new approaches for robots and virtual characters' personalised adaptation to human users in unstructured and dynamically evolving social interactions 
 
 
Multiple Positions in Human-Machine Interaction: 
 
15 early-stage researcher (ESR) positions of 36 months are available within ANIMATAS. The successful candidates will participate in the network's training activities offered by the European academic and industrial participating teams, and will have the opportunity to work with Interactive Robotics, Furhat Robotics, Mobsya, University of Wisconsin-Madison, University of Southern California, Immersion SAS, IDMind, and Trinity College Dublin. 
 
 
Details and specific deadlines are available at http://animatas.isir.upmc.fr/ 
 
ESR 1 - Social context effects on expressive behaviour of embodied systems
Contact: Arvid Kappas (Jacobs Uni)
 
ESR 2 - Modeling communicative behaviours for different roles of pedagogical agents
Contact: Catherine Pelachaud (UPMC)
 
ESR 3 - Modeling trust in human-robot educational interactions
Contact: Ginevra Castellano (UU)
 
ESR 4 -  Synthesis of Multi-Modal Socially Intelligent Human-Robot Interaction
Contact: Amit Pandey (SBR)
 
ESR 5 - Socially compliant behaviour modelling for artificial systems and small groups of teachers and learners
Contact: Christopher Peters (KTH)
 
ESR 6 - Teacher orchestration of child-robot interaction
Contact: Pierre Dillenbourg (EPFL)
 
ESR 7 - Which mutual-modelling mechanisms to optimize child-robot interaction
Contact: Pierre Dillenbourg (EPFL)
ESR 9 - Learning from and about humans for robot task learning
Contact: Mohamed Chetouani (UPMC)
mohamed.chetouani@upmc.fr  
 
ESR 10 - Let's Learn by Collaborating with Robots
Contact: Francisco Melo (INESC-ID)
 
ESR 11 -  Disfluencies and teaching strategies in social interactions between a pedagogical agent and a student
Contact: Chloé Clavel (IMT)
 
ESR 12 - Automatic assessment of engagement during multi-party interactions
Contact: Pierre Dillenbourg (EPFL)
 
ESR 13 -  Automatic synthesis and instantiation of proactive behaviour by robot during human robot interaction: going beyond just being active or reactive
Contact: Amit Pandey (SBR)
 
ESR 14 - Socio-affective effects on the degree of human-robot cooperation
Contact: Ana Paiva (INESC-ID)
 
ESR 15 - Adaptive self-other similarity in facial appearance and behaviour for facilitating cooperation between humans and artificial systems
Contact: Christopher Peters (KTH).
chpeters@kth.se 

Requirements:

To apply, candidates must submit their CV, a letter of application, two letters of reference and academic credentials to the recruitment committee, Mohamed Chetouani (network coordinator), Ana Paiva and Arvid Kappas at contact-animatas@listes.upmc.fr and to the main supervisor of the research project of interest.

Please note that application deadlines differ per position and are detailed here: http://animatas.isir.upmc.fr/  
 
Reviewing and selection of applications will start in October 2017. The positions will remain open until filled. 
The application procedure will be carried out in compliance with the Code of Conduct for Recruitment of the European Charter and Code for Researchers.


Contacts:
Mohamed CHETOUANI (ANIMATAS Coord.)
 
Emily MOTU (Project Manager)
 

6-20(2017-10-25) PhD vacancy at Aalborg University, Denmark

PhD Stipend in Low-resource Keyword Spotting for Hearing Assistive Devices at Aalborg University, Denmark

Job description: Manual operation of hearing assistive devices is cumbersome in various situations. With advances in machine learning and speech technology, voice interfaces, thanks to their convenience, will be widely deployed in hearing assistive devices, where they can be personalized and offer richer functionality. Hearing assistive devices are characterized by strict memory and computational complexity constraints and by the fact that they are expected to operate flawlessly, even in acoustically challenging situations. This PhD project aims to develop personalized, noise-robust and low-resource voice control systems for hearing assistive devices, using microphone signals and other modalities.
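One classic low-resource route to keyword spotting of the kind this project targets is template matching with dynamic time warping (DTW), which needs only one or a few recorded examples of the keyword rather than a trained recogniser. The sketch below is only an illustration of that idea, not the project's prescribed method: the one-dimensional "feature" values are invented, and a real device would use MFCC vectors extracted from audio.

```python
# Minimal sketch of DTW-based keyword spotting: slide a keyword template
# over a stream of feature frames and report windows whose length-
# normalised DTW alignment cost falls below a threshold.

def dtw_distance(a, b):
    """Length-normalised DTW cost between two feature-vector sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Euclidean distance between the two frames
            cost = sum((x - y) ** 2 for x, y in zip(a[i - 1], b[j - 1])) ** 0.5
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m] / (n + m)

def spot_keyword(template, stream, threshold):
    """Return start indices of windows that match the template."""
    hits = []
    w = len(template)
    for start in range(0, len(stream) - w + 1):
        if dtw_distance(template, stream[start:start + w]) < threshold:
            hits.append(start)
    return hits

if __name__ == "__main__":
    template = [[0.0], [1.0], [2.0], [1.0]]
    stream = [[5.0]] * 4 + [[0.1], [1.1], [2.1], [0.9]] + [[5.0]] * 4
    print(spot_keyword(template, stream, threshold=0.5))  # → [4]
```

Because the only "model" is the stored template, memory and compute costs stay small, which is one reason DTW variants remain attractive under the strict resource constraints the posting describes.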

The successful applicant must have a Master degree in machine learning, statistical signal processing, speech processing or acoustic signal processing, and have extensive knowledge in one or more of these disciplines.

You may obtain further information from Professor Zheng-Hua Tan (phone: +45 9940 8686, email: zt@es.aau.dk) or Professor Jesper Jensen (phone: +45 9940 8630, email: jje@es.aau.dk).

For more details, please refer to http://www.stillinger.aau.dk/vis-stilling/?vacancy=936714.


6-21(2017-10-27) Postdoc in ASR and Language Modeling, Aalto University, Finland

Postdoctoral researcher in Speech Recognition and Language Modeling

 

The speech recognition group (led by Prof. Mikko Kurimo) at Aalto University, Finland, focuses on machine learning for automatic speech recognition (ASR) and language modeling. The group developed a state-of-the-art fixed-vocabulary ASR system as early as the 1970s and an unlimited-vocabulary neural phonetic typewriter in the 1980s, led by Academician Teuvo Kohonen. Since 2000, under Prof. Mikko Kurimo, the group has developed state-of-the-art unlimited-vocabulary ASR systems using sub-word language models for several languages. The most recent achievement is winning the 3rd Multi-Genre Broadcast ASR challenge in 2017, in which the top research groups in the field were challenged to build a recognizer for an under-resourced language using machine learning methods.
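The sub-word idea behind unlimited-vocabulary modelling can be illustrated with a toy greedy segmenter: once words are split into smaller units, unseen word forms can still be covered by combining known units. Real systems learn the unit inventory from data with tools such as Morfessor (mentioned below among the group's toolkits); the Finnish-like morph inventory here is invented, and greedy longest-match is only a sketch of the idea.

```python
# Toy sub-word segmenter: greedily split a word into the longest known
# units, falling back to single characters so every word is coverable.

def segment(word, units):
    """Return a list of sub-word units covering the whole word."""
    out, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            # Take the longest known unit starting at i; a single
            # character is always accepted as a last resort.
            if word[i:j] in units or j == i + 1:
                out.append(word[i:j])
                i = j
                break
    return out

if __name__ == "__main__":
    units = {"talo", "ssa", "auto", "lla"}   # invented Finnish-like morphs
    print(segment("talossa", units))   # → ['talo', 'ssa']
    print(segment("autolla", units))   # → ['auto', 'lla']
```

An n-gram or neural language model trained over such units can then assign a probability to any word form, including ones never seen in training, which is the core benefit for morphologically rich languages like Finnish.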

The speech recognition group consists of a couple of senior researchers, post-docs and six PhD students who bring together expertise from many relevant areas such as acoustic modeling, lexical modeling, language modeling, decoding, machine learning, machine translation, user interfaces, and toolkits such as Kaldi, AaltoASR, TheanoLM, VariKN, and Morfessor. We operate in a well-connected academic environment using excellent GPU and CPU computing facilities and have a functional office space at Aalto University's Otaniemi campus that is only 10 minutes away from downtown Helsinki via the new subway line.

We are now looking for a postdoc for 1-3 years, starting as soon as possible, on any of our research themes:

  • DNN-based continuous speech recognition (a central theme in many projects)

  • unlimited vocabulary language modelling based on letters and unsupervised morphemes

  • speech recognition for indexing and captioning of broadcast videos in multilingual domain (3-year European project starting in January)

  • speech recognition for games and second language pronunciation training

The position requires a relevant doctoral degree in CS or EE, skills for doing excellent research in an (English-speaking) group, and outstanding research experience in at least one of the research themes mentioned above. More specifically, programming skills and good command of Kaldi and DNNs (either Theano or TensorFlow toolkits) will be useful. The candidate is expected to perform high-quality research and participate in the supervision of the PhD students. The application, CV, list of publications, references and requests for further information should be sent by email to Prof. Mikko Kurimo (mikko.kurimo at aalto.fi).

Aalto University is a new university created in 2010 from the merger of the Helsinki University of Technology, Helsinki School of Economics and the University of Art and Design Helsinki. The University's cornerstones are its strengths in education and research, with 20,000 basic degree and graduate students. In addition to a decent salary, the contract includes occupational health benefits, and Finland has a comprehensive social security system. The Helsinki Metropolitan area forms a world-class information technology hub, attracting leading scientists and researchers in various fields of ICT and related disciplines. Moreover, as the birthplace of Linux, and the home base of Nokia/Alcatel-Lucent/Bell Labs, F-Secure, Rovio, Supercell, Slush (the biggest annual startup event in Europe) and numerous other technologies and innovations, Helsinki is fast becoming one of the leading technology startup hubs in Europe. See more, e.g., at http://www.investinfinland.fi/.

As a living and working environment, Finland consistently ranks high in quality of life, and Helsinki, the capital of Finland, is regularly ranked as one of the most livable cities in the world. See more at https://finland.fi and http://www.helsinkitimes.fi/finland/finland-news/domestic/14966-helsinki-ranked-again-as-world-s-9th-most-liveable-city.html
Home page of the group: http://spa.aalto.fi/en/research/research_groups/speech_recognition/

6-22(2017-10-25) Speech scientist at ELSA, Lisbon, Portugal

Speech scientist at ELSA

Location: Lisbon, Portugal

contact: people@elsaspeak.com

Job Description

We are looking for a splendid Speech Scientist to join our team at ELSA and help us in our mission to help every language learner speak a foreign language fluently and confidently.

As a speech scientist at ELSA you will work with state-of-the-art machine learning techniques and apply them to vast amounts of audio and text data, building new technology and improving existing algorithms.

At ELSA you will not do your job alone. Solving a problem includes working with a team (backend, speech scientists, product managers, designers) to design solutions that would impact hundreds of thousands of users. We are an agile team. We are passionate engineers. We are highly collaborative. We value results. We go above and beyond to help our users and to deliver superb products, but we also value work-life balance - we are motorcycle riders, mountain climbers, food-enthusiasts, espresso lovers, yoga students, proud parents.

With more than 1.5M app downloads, your research will directly influence the many thousands of users who use our app daily.

Your role

  • Investigate and implement new technology in speech and natural language processing.

  • Postprocess, organize and put to use the myriad of data collected daily from the usage of the ELSA app.

  • Propose new features and investigate their technical feasibility at scale, in a production environment. Work with the CTO and development engineers to turn research into reality.

  • Work with the rest of the speech scientists to define the future of the speech and NLP technology used at ELSA.

  • Maintain and improve the current speech and NLP technology stack, following state-of-the-art developments and the availability of new training data.

Requirements

  • PhD in speech technology or related area (Masters level + related experience will also be considered).

  • Proven research and publications track record that shows you can think on your own and come up with novel approaches to state-of-the-art problems.

  • Outstanding coding skills. You should be able to program in low-level languages like C++, prototype in high-level languages such as Python or Matlab, and glue it all together using bash scripts.

  • Enthusiasm and readiness to dig into production results to correct corner cases of your algorithms and iterate to better technology.

  • Acquaintance with deep learning open source packages (Theano, TensorFlow, Caffe, Torch, ...) and experience using Kaldi ASR.

What we offer

  • Competitive salary and stock options according to seniority

  • Company laptop

  • Flexible working hours

  • Ample room to grow professionally

  • Experience the true startup spirit of a fast growing and well funded Silicon Valley startup

Application

Send your  LinkedIn profile or your CV to people@elsaspeak.com and we will get in touch.

About us

 

ELSA (English Language Speech Assistant) Corp. is a San Francisco-based startup with engineering offices in Lisbon. Our vision is to enable everyone to speak foreign languages with full confidence, reaching better life and career opportunities. Our flagship product, ELSA speak, is a personal mobile coach who improves our users' English pronunciation and intonation using phoneme and suprasegmental analysis of the user's speech signal. Our backend servers implement state-of-the-art speech recognition technology to pinpoint errors and give accurate and consistent feedback to our users on how to improve.
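Pronunciation feedback of the kind described above is commonly built on goodness-of-pronunciation (GOP) style scores: the log posterior of the phoneme the learner intended, relative to the best-scoring competing phoneme. The sketch below is a generic illustration of that scoring idea, not ELSA's actual algorithm, and the posterior values are invented; a real system would obtain them from an acoustic model over aligned speech frames.

```python
# Hedged sketch of GOP-style pronunciation scoring for one phoneme.
import math

def gop(intended_phone, posteriors):
    """GOP score: log posterior of the intended phone minus the log
    posterior of the best competing phone.  Near 0 means the intended
    phone dominated (well pronounced); strongly negative suggests a
    likely mispronunciation."""
    best = max(posteriors.values())
    return math.log(posteriors[intended_phone]) - math.log(best)

if __name__ == "__main__":
    # A learner aiming for /th/ but producing something /t/-like:
    post = {"th": 0.2, "t": 0.7, "s": 0.1}
    print(round(gop("th", post), 2))  # → -1.25
```

Thresholding such per-phoneme scores is one straightforward way to decide which segments of an utterance to flag back to the learner.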


6-23(2017-10-25) Senior Speech scientist at ELSA

Senior speech scientist at ELSA

Location: flexible, where would you like to live?

contact: people@elsaspeak.com

Job description


We are looking for a splendid Senior Speech Scientist to join our team at ELSA and help us in our mission to help every language learner speak a foreign language fluently and confidently.


As a Senior Scientist at ELSA you will be responsible for the evolution of the speech technology powering our language assessment services. Your efforts will translate directly into new voice-enabled features and better assessment quality for the hundreds of thousands of language learners who use the ELSA app. In addition, you will work directly with the CTO to define and drive the scientific roadmap of the company.


At ELSA you will not do your job alone.  Solving a problem includes working with a team (backend, speech scientists, product managers, designers) to design solutions that would impact hundreds of thousands of users. We are an agile team. We are passionate engineers. We are highly collaborative. We value results. We go above and beyond to help our users and to deliver superb products, but we also value work-life balance - we are motorcycle riders, mountain climbers, food-enthusiasts, espresso lovers, yoga students, proud parents.


Join our world-class team and be one of the personalities that make the ELSA culture awesome.

Requirements:

  • PhD in speech technology or a closely related area

  • 5+ years of experience post-PhD

  • Expert in acoustic modeling in English

  • Experience leading junior researchers and students


Bonus skills

  • Experience bringing products to market

  • Knowledge of Kaldi-ASR package at API level

  • Experience in CALL (Computer Assisted Language Learning) technology

  • Good programming skills in Python, C++ and shell hacking

  • Experience training multilingual ASR systems and working with languages other than English



What we offer

  • Flexible working location

  • Competitive salary and stock options according to seniority

  • Company laptop

  • Flexible working hours

  • Ample room to grow professionally

  • Experience the true startup spirit of a fast growing and well funded Silicon Valley startup

Application

Send an email with your LinkedIn profile or your CV to people@elsaspeak.com and we will get back to you.

About us

ELSA Corp. is a US (San Francisco) startup with engineering offices in Lisbon. Our vision is to enable everyone to speak foreign languages with full confidence, reaching better life and career opportunities, powered by our proprietary speech recognition technology using deep learning. Our flagship product, ELSA speak, is a personal mobile coach who improves our users' English pronunciation and intonation using phonetic and suprasegmental analysis of the user's speech. Our backend servers implement state-of-the-art speech assessment technology to pinpoint the user's most prominent errors and give accurate and consistent feedback on how to fix them.


6-24(2017-10-26) Two post-doctoral researchers at Idiap Research Institute, Martigny, Switzerland

Openings for two post-doctoral researchers at Idiap Research Institute.
 Both involve the theory and application of deep learning to bring speech processing to
home devices.

 http://www.idiap.ch/education-and-jobs/job-10232

Idiap is located in French speaking Switzerland, although the lab hosts many
nationalities, and functions in English.  All positions offer quite generous salaries.

The positions involve collaboration with a commercial partner located nearby. However,
they are funded by a Swiss federal grant, so a significant research element is expected.

Several similar positions at PhD, post-doc and senior level are available at the
institute in general.

 http://www.idiap.ch/en/join-us/job-opportunities


6-25(2017-11-02) Postdoctoral research position: Acoustic cough detection and processing for healthcare, University of Stellenbosch, South Africa.

November 2017


Postdoctoral research position: Acoustic cough detection and processing for healthcare
A postdoc position focussing on the automatic detection, analysis and classification of coughing in unconstrained audio for healthcare monitoring and disease screening is available in the Digital Signal Processing Group of the Department of Electrical and Electronic Engineering at the University of Stellenbosch, South Africa.
The project will involve the development of machine learning algorithms that are able to automatically distinguish and characterise coughing in a noisy environment, with an emphasis on the monitoring of tuberculosis. The project will also include the compilation of a data corpus and collaboration with medical practitioners.
Specific project objectives include the gathering of the acoustic data, setting up and managing the annotation process, developing automatic detection and classification systems  using the gathered data, and producing new and original research into how best to automatically detect and classify coughing in a difficult acoustic environment.
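A common first stage of such a detection system is a simple acoustic event proposal step, for example short-time energy thresholding, whose candidate segments a trained classifier then accepts or rejects as coughs. The sketch below illustrates only that proposal stage; the frame energies and the threshold are invented, and this is not the project's specified pipeline.

```python
# Minimal sketch of energy-based acoustic event proposal: return
# (start, end) frame-index pairs where short-time energy stays at or
# above a threshold.  A downstream classifier would then label each
# proposed segment (e.g. cough vs. non-cough).

def detect_events(frames, threshold):
    """Return half-open (start, end) index pairs of high-energy runs."""
    events, start = [], None
    for i, e in enumerate(frames):
        if e >= threshold and start is None:
            start = i                      # a new event begins
        elif e < threshold and start is not None:
            events.append((start, i))      # the event just ended
            start = None
    if start is not None:
        events.append((start, len(frames)))  # event runs to the end
    return events

if __name__ == "__main__":
    energy = [0.1, 0.2, 3.0, 4.5, 3.2, 0.1, 0.1, 2.8, 0.2]
    print(detect_events(energy, threshold=1.0))  # → [(2, 5), (7, 8)]
```

In a noisy environment a fixed threshold is too crude on its own, which is precisely why the project pairs segmentation with learned classification.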
Applicants must hold a PhD (preferably obtained within the last 5 years) in the field of Electronic/Electrical Engineering, Information Engineering, Computer Science, or other relevant discipline. Suitable candidates must also have practical and research experience in a relevant machine learning specialisation such as automatic speech or speaker recognition or sound event detection. The candidate should have an excellent background in statistical modelling, signal processing, and/or  audio analysis. Applicants should also have proven prior experience in data compilation,  have good programming skills and be able to use high level programming languages for developing prototype systems. Finally, candidates must have excellent English writing skills and have an explicit interest in scientific research and publication.
The position will be available for one year, with a possible extension to a second year, depending on progress and available funds.  
Applications should include a covering letter, curriculum vitae, list of publications, research projects, conference participation and details of three contactable referees and should be sent as soon as possible to: Prof Thomas Niesler, Department of Electrical and Electronic Engineering, University of Stellenbosch, Private Bag X1, Matieland 7602. Applications can also be sent by email to: trn@sun.ac.za. The successful applicant will be subject to University policies and procedures.
Interested applicants are welcome to contact me at the above e-mail address for further information regarding the project.


6-26(2017- 11-02) Postdoctoral research position: Extremely-low-resource radio browsing for humanitarian monitoring in rural Africa, University of Stellenbosch, South Africa

Postdoctoral research position: Extremely-low-resource radio browsing for humanitarian monitoring in rural Africa


A postdoc position focussing on the automatic identification of spoken keywords in multilingual environments with extremely few or even no resources, using deep neural architectures, is available in the Digital Signal Processing Group of the Department of Electrical and Electronic Engineering at the University of Stellenbosch. The project will develop wordspotters that can be used to monitor community radio broadcasts in rural African regions as a source of early warning information during natural disasters, disease outbreaks, or other crises. Specific project objectives include the development of a research system and the production of associated publishable outputs. The position is part of a collaborative project with the United Nations Global Pulse. Further information is available on the web at http://pulselabkampala.ug/.

Applicants should hold a PhD (obtained within the last 5 years) in the field of Electronic/Electrical Engineering, Information Engineering, Computer Science, or another relevant discipline. Suitable candidates must have practical experience with automatic speech recognition systems in general and deep neural net architectures in particular, and should have an excellent background in statistical modelling and machine learning. The candidate must also have good programming skills and be able to use high-level programming languages for developing prototype systems. Finally, candidates must have excellent English writing skills and an explicit interest in scientific research and publication.
The position will be available for one year, with a possible extension to a second year, depending on progress and available funds.  
Applications should include a covering letter, curriculum vitae, list of publications, research projects, conference participation and details of three contactable referees and should be sent as soon as possible to: Prof Thomas Niesler, Department of Electrical and Electronic Engineering, University of Stellenbosch, Private Bag X1, Matieland 7602. Applications can also be sent by email to: trn@sun.ac.za. The successful applicant will be subject to University policies and procedures.
Interested applicants are welcome to contact me at the above e-mail address for further information regarding the project.


6-27(2017-11-05) 15 PhD positions from mid-2018

The training network on automatic processing of pathological speech (TAPAS) is an H2020 MSCA-ITN-ETN project that will provide 15 doctoral students with broad and intensive training in pathological speech processing. The TAPAS project consortium includes clinical practitioners, academic researchers and industrial partners, with expertise covering speech engineering, linguistics and clinical science. The TAPAS work programme is organised around three main themes:

- Detection of pathological speech (4 PhD theses)

- Assessment of pathological speech and therapy (6 PhD theses)

- Communication technologies for assisted living and rehabilitation (5 PhD theses)


The TAPAS consortium comprises:

-Idiap Research Institute (CH)
-Friedrich-Alexander-Universitaet (DE)
-Interuniversitair Micro-Electronicacentrum IMEC VZW (BE)
-INESC-ID - Instituto de Engenharia de Sistemas e Computadores, Investigacao e Desenvolvimento em Lisboa (PT)
-Ludwig-Maximilians-Universitaet Muenchen (DE),
-Stichting Het Nederlands Kanker Instituut-Antoni Van Leeuwenhoek Ziekenhuis (NL), 
-Philips Electronics Nederland B.V. (NL), 
-Stichting Katholieke Universiteit (NL)
-Universität Augsburg (DE), 
-Université Toulouse III-Paul Sabatier (FR), 
-The University of Sheffield United Kingdom (UK), 
-Universitair Ziekenhuis Antwerpen (BE)

For more information and to apply, please visit http://www.tapas-etn-eu.org 


Best regards,

Julie Mauclair
Associate Professor (MCF), IRI
Université Paris Descartes

 

6-28(2017-11-08) Visiting Assistant Professor in Computational Linguistics and Language Science, Rochester, NY, USA

Visiting Assistant Professor in Computational Linguistics and Language Science

URL:

http://apptrkr.com/1116774


 Requisition Number: 3499BR

 

Detailed Job Description:

The Department of English invites applications for a Visiting Assistant Professor position, beginning in January 2018, with specialization in computational linguistics and/or innovative technical or scientific methods in language science at Rochester Institute of Technology (RIT), with a focus on one or more areas of application. Possible areas include:

·  Deep learning for natural language understanding

·  Speech and speech technology

·  Multimodal and linguistic sensors

·  Human-computer interaction

·  Linguistic narrative analytics

The applicant should demonstrate a fit with our commitment to collaborate with colleagues across the university on initiatives in artificial intelligence and in digital humanities and social sciences. The position has the possibility of extension beyond Spring 2018.

The successful applicant will be a researcher and teacher with an agenda that emphasizes innovative technical methods in linguistics, for instance in natural language processing, linguistic/multimodal sensors, speech and speech technology, and/or other computational or technical approaches applied to language data. We are seeking a scholar who engages in disciplinary and interdisciplinary teamwork, student mentoring, and has a coherent plan for grant seeking activities. The right candidate will contribute to advancing our interdisciplinary language science curriculum in a college of liberal arts at a technical university. Contributions that build students' global education experiences are additionally valued.

The teaching assignment may be Introduction to Language Science, Language Technology, Introduction to NLP, Science and Analytics of Speech (acoustic and experimental phonetics), Spoken Language Processing (automatic speech recognition and text-to-speech synthesis), Seminar in Computational Linguistics, self-designed courses, or another course depending on background.

We are seeking an individual who has the ability and interest in contributing to a community committed to student-centeredness; professional development and scholarship; integrity and ethics; respect, diversity and pluralism; innovation and flexibility; and teamwork and collaboration.

Department Description:

THE UNIVERSITY AND ROCHESTER COMMUNITY:
RIT is a national leader in professional and career-oriented education. Talented, ambitious, and creative students of all cultures and backgrounds from all 50 states and more than 100 countries have chosen to attend RIT. Founded in 1829, Rochester Institute of Technology is a privately endowed, coeducational university with nine colleges emphasizing career education and experiential learning. With approximately 15,000 undergraduates and 2,900 graduate students, RIT is one of the largest private universities in the nation. RIT offers a rich array of degree programs in engineering, science, business, and the arts, and is home to the National Technical Institute for the Deaf. RIT has been honored by The Chronicle of Higher Education as one of the 'Great Colleges to Work For' for four years. RIT is a National Science Foundation ADVANCE Institutional Transformation site. RIT is responsive to the needs of dual-career couples by our membership in the Upstate NY HERC.

Rochester, situated between Lake Ontario and the Finger Lakes region, is the 51st largest metro area in the United States and the third largest city in New York State. The Greater Rochester region, which is home to nearly 1.1 million people, is rich in cultural and ethnic diversity, with a population comprised of approximately 18% African and Latin Americans and another 3% of international origin. It is also home to one of the largest deaf communities per capita in the U.S. Rochester ranks 4th for 'Most Affordable City' by Forbes Magazine, and MSN selected Rochester as the '#1 Most Livable Bargain Market' (for real-estate). Kiplinger named Rochester one of the top five 'Best City for Families.'

Job Requirements:

·  Ph.D. with training in Computational Linguistics, Linguistics, or an allied field for language science, in hand prior to appointment date.

·  Advanced graduate coursework in computational linguistics, including natural language and/or spoken language processing or technical methods in linguistics.

·  Publication record and coherent plan for research and grant seeking activities.

·  Evidence of outstanding teaching.

·  Ability to contribute in meaningful ways to the college's continuing commitment to cultural diversity, pluralism, and individual differences.

How to Apply:

Apply online at  http://apptrkr.com/1116774. Please submit your online application, curriculum vitae, cover letter addressing the listed qualifications and upload the following attachments:


·  A research statement

·  A teaching statement

·  Copy of transcripts of graduate coursework

·  A sample publication 

·  The names, addresses, and phone numbers for three references
·  Statement of diversity

 

Questions regarding this position can be directed to the search committee chair, Dr. Cecilia Ovesdotter Alm, at coagla@rit.edu.


Review of applications will begin on November 25, 2017 and will continue until an acceptable candidate is found.


6-29(2017-11-10) Principal Speech Recognition Engineer, Speechmatics, Cambridge, UK

Principal Speech Recognition Engineer

Location: Cambridge, UK

Contact: careers@speechmatics.com

Background

Speechmatics’ versatile automatic speech recognition technology, based on decades of research and experience in neural networks, is enabling world-leading companies to power a speech-enabled future. Having already transcribed millions of hours of audio and helped customers across a diverse range of use cases and applications, the team’s mission is to build the best speech technology for any application, anywhere, in any language and put speech back at the heart of communication.

In the office, we pride ourselves on a relaxed but productive environment enabling both commercial success and personal development - we often host lunch and learn sessions and attend regular academic and commercial conferences. When we're not working hard, we regularly host company outings and events where your plus-one is welcome to enjoy great food, great drinks, and great company! We also reward ourselves occasionally with massages, and even get our bikes fixed onsite!

We think it’s important to give a little back too, so everyone is eligible for some time off for charity work plus we’ll match your contribution via the Give As You Earn scheme. See more about our great perks below!

We are expanding rapidly and are seeking talented people to join us as we continue to push the boundaries of speech recognition. This is an opportunity to join a high growth team and form a major part of its future direction.



The Opportunity

We are looking for a talented and experienced speech recognition engineer to help us build the best speech technology for anybody, anywhere, in any language. You will be part of a team that is building language packs and developing our core ASR capabilities, including improving our speed, accuracy and support for all languages. Your work will feed into the 'Automatic Linguist', our ground-breaking framework to support the building of ASR models and hence the delivery of every language pack published by the company. Alongside the wider team you will be responsible for keeping our system the most accurate and useful commercial speech recognition available.

Because you will be joining a rapidly expanding team, you will need to be a team player, who thrives in a fast paced environment, with a focus on rapidly moving research developments into products. We strongly encourage versatility and knowledge transfer within the team, so we can share efficiently what needs to be done to meet our commitments to the rest of the company.



Key Responsibilities

  • Delivering the artefacts comprising high quality speech recognition software products

  • Keeping us ahead of the rest of the world in terms of speech recognition capabilities

  • Transferring knowledge to the wider team and company

 

 

Experience

Essential

  • Proven track record as one of the best in the world at modern LVCSR

  • Extensive practical experience of speech recognition, covering all aspects (acoustic, pronunciation and language modelling as well as decoders / search)

  • Experience working with standard speech and ML toolkits, e.g., Kaldi, KenLM, TensorFlow, etc.

  • Solid Python programming skills

  • Experience using Unix / Linux systems

  • Proven ability to effectively communicate highly technical subjects

 

Desirable

  • Expertise in all aspects of modern speech recognition, including WFSTs, lattice processing, neural nets (RNN / DNN / LSTM etc.), etc.

  • Knowledge of computational linguistics.

  • Deep production-grade software development experience, especially with Python, C/C++ and / or Go.

  • Experience working effectively with software engineering teams or as a Software Engineer.

  • Experience of team leadership and line management

  • Experience working in an Agile framework





Salary

We offer a competitive salary and bonus scheme, pension contribution matching and a generous EMI share option scheme. We also have several additional benefits including private medical insurance, holiday purchase, life assurance, childcare vouchers, cycle scheme, massages, bike doctor, fully stocked drinks fridge, and fresh fruit available daily to name just a few!


6-30(2017-11-10) Principal Language Modelling Engineer, Speechmatics, Cambridge, UK

Principal Language Modelling Engineer

Location: Cambridge, UK

Contact: careers@speechmatics.com

Background

Speechmatics’ versatile automatic speech recognition technology, based on decades of research and experience in neural networks, is enabling world-leading companies to power a speech-enabled future. Having already transcribed millions of hours of audio and helped customers across a diverse range of use cases and applications, the team’s mission is to build the best speech technology for any application, anywhere, in any language and put speech back at the heart of communication.

In the office, we pride ourselves on a relaxed but productive environment enabling both commercial success and personal development - we often host lunch and learn sessions and attend regular academic and commercial conferences. When we're not working hard, we regularly host company outings and events where your plus-one is welcomed to enjoy great food, great drinks, and great company! We also reward ourselves occasionally with massages, and even get our bikes fixed onsite!

We think it’s important to give a little back too, so everyone is eligible for some time off for charity work plus we’ll match your contribution via the Give As You Earn scheme. See more about our great perks below!

We are expanding rapidly and are seeking talented people to join us as we continue to push the boundaries of speech recognition. This is an opportunity to join a high growth team and form a major part of its future direction.



The Opportunity

We are looking for a talented and experienced Language Modelling expert to help us build the best speech technology for anybody, anywhere, in any language. You will be a part of a team that is working on our core ASR capabilities to improve our speed, accuracy and support for all languages. Your role will include making sure our Language Modelling capability remains at the head of the field. Your work will feed into the ‘Automatic Linguist’, our ground-breaking framework to support the building of ASR models, and hence the delivery of every language pack published by the company. Alongside the wider team you will be responsible for keeping our system the most accurate and useful commercial speech recognition available.

Because you will be joining a rapidly expanding team, you will need to be a team player who thrives in a fast paced environment, with a focus on rapidly moving research developments into products. We strongly encourage versatility and knowledge transfer within the team, so we can share efficiently what needs to be done to meet our commitments to the rest of the company.



Key Responsibilities

  • Analysing advances in the field of ASR – especially language modelling – and reporting back on what is the latest and greatest

  • Ensuring we can implement the best ASR technology in a production environment

  • Leading the language modelling extension of our ML framework

  • Being part of a team delivering all the artefacts required to make up the best speech recognition available to our customers

 

Experience

Essential

  • Proven track record as one of the best in the world at modern language modelling techniques for LVCSR

  • Proven ability to effectively communicate highly technical subjects



Desirable

  • MSc, PhD or equivalent qualification in the academic aspects of speech recognition

  • Extensive experience working with standard language modelling and ML toolkits, e.g. pocolm, KenLM, SRILM, TensorFlow, etc.

  • Expertise in all aspects of modern speech recognition, including WFSTs, lattice processing, neural net (RNN / DNN / LSTM), acoustic and language models, Viterbi decoding

  • Experience translating academic advances in ASR into production systems

  • Comprehensive knowledge of machine learning and statistical modelling

  • Expertise in Python and/or C++ software development

  • Experience working effectively with software engineering teams or as a Software Engineer

  • Experience of technical leadership of a team / teams

  • Experience of team leadership and line management

  • Experience of working in an Agile framework
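As a toy illustration of the statistical language modelling that toolkits such as KenLM and SRILM industrialise (a minimal sketch for orientation only, unrelated to Speechmatics' codebase), a bigram model with add-one smoothing can be written as:

```python
from collections import Counter
import math

def train_bigram(sentences):
    """Count unigrams and bigrams over tokenised sentences,
    padding each sentence with boundary markers."""
    uni, bi = Counter(), Counter()
    for toks in sentences:
        toks = ["<s>"] + toks + ["</s>"]
        uni.update(toks)
        bi.update(zip(toks, toks[1:]))
    return uni, bi

def bigram_logprob(uni, bi, vocab_size, w1, w2):
    """Add-one-smoothed log P(w2 | w1)."""
    return math.log((bi[(w1, w2)] + 1) / (uni[w1] + vocab_size))

# Tiny toy corpus; real LM toolkits train on billions of tokens
# and use far better smoothing (e.g. modified Kneser-Ney).
corpus = [["the", "cat", "sat"], ["the", "dog", "sat"]]
uni, bi = train_bigram(corpus)
V = len(uni)
lp = bigram_logprob(uni, bi, V, "the", "cat")
```

Production toolkits differ mainly in smoothing quality, pruning and storage (e.g. KenLM's trie data structures), but the underlying conditional-probability estimation is the same idea.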



Salary

We offer a competitive salary and bonus scheme, pension contribution matching and a generous EMI share option scheme. We also have several additional benefits including private medical insurance, holiday purchase, life assurance, childcare vouchers, cycle scheme, massages, bike doctor, fully stocked drinks fridge, and fresh fruit available daily to name just a few!


6-31(2017-11-10) Principal Acoustic Modelling Engineer, Speechmatics, Cambridge, UK

Principal Acoustic Modelling Engineer

Location: Cambridge, UK

Contact: careers@speechmatics.com

Background

Speechmatics’ versatile automatic speech recognition technology, based on decades of research and experience in neural networks, is enabling world-leading companies to power a speech-enabled future. Having already transcribed millions of hours of audio and helped customers across a diverse range of use cases and applications, the team’s mission is to build the best speech technology for any application, anywhere, in any language and put speech back at the heart of communication.

In the office, we pride ourselves on a relaxed but productive environment enabling both commercial success and personal development - we often host lunch and learn sessions and attend regular academic and commercial conferences. When we're not working hard, we regularly host company outings and events where your plus-one is welcomed to enjoy great food, great drinks, and great company! We also reward ourselves occasionally with massages, and even get our bikes fixed onsite!

We think it’s important to give a little back too, so everyone is eligible for some time off for charity work plus we’ll match your contribution via the Give As You Earn scheme. See more about our great perks below!

We are expanding rapidly and are seeking talented people to join us as we continue to push the boundaries of speech recognition. This is an opportunity to join a high growth team and form a major part of its future direction.



The Opportunity

We are looking for a talented and experienced Acoustic Modelling expert to help us build the best speech technology for anybody, anywhere, in any language. You will be a part of a team that is working on our core ASR capabilities to improve our speed, accuracy and support for all languages. Your role will include making sure our Acoustic Modelling capability remains at the head of the field. Your work will feed into the ‘Automatic Linguist’, our ground-breaking framework to support the building of ASR models, and hence the delivery of every language pack published by the company. Alongside the wider team you will be responsible for keeping our system the most accurate and useful commercial speech recognition available.

Because you will be joining a rapidly expanding team, you will need to be a team player who thrives in a fast paced environment, with a focus on rapidly moving research developments into products. We strongly encourage versatility and knowledge transfer within the team, so we can share efficiently what needs to be done to meet our commitments to the rest of the company.



Key Responsibilities

  • Analysing advances in the field of ASR – especially acoustic modelling – and reporting back on what is the latest and greatest

  • Ensuring we can implement the best ASR technology in a production environment

  • Leading the acoustic modelling extension of our ML framework

  • Being part of a team delivering all the artefacts required to make up the best speech recognition available to our customers

 

Experience

Essential

  • Proven track record as one of the best in the world at modern acoustic modelling techniques for LVCSR

  • Proven ability to effectively communicate highly technical subjects



Desirable

  • MSc, PhD or equivalent qualification in the academic aspects of speech recognition

  • Extensive experience working with standard acoustic modelling and ML toolkits, e.g. Kaldi, TensorFlow, etc.

  • Expertise in all aspects of modern speech recognition, including WFSTs, lattice processing, neural net (RNN / DNN / LSTM), acoustic and language models, Viterbi decoding

  • Experience translating academic advances in ASR into production systems

  • Comprehensive knowledge of machine learning and statistical modelling

  • Expertise in Python and/or C++ software development

  • Experience working effectively with software engineering teams or as a Software Engineer

  • Experience of technical leadership of a team / teams

  • Experience of team leadership and line management

  • Experience of working in an Agile framework
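For readers less familiar with the Viterbi decoding mentioned above, here is a minimal discrete-HMM sketch (a textbook toy with invented weather parameters, not production ASR code; real decoders work in the log domain over WFST-composed search spaces):

```python
def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for an observation sequence
    under a discrete HMM (linear probabilities for brevity)."""
    # Forward pass: best probability and predecessor per state per step.
    V = [{s: (start_p[s] * emit_p[s][obs[0]], None) for s in states}]
    for o in obs[1:]:
        row = {}
        for s in states:
            prob, prev = max(
                (V[-1][p][0] * trans_p[p][s] * emit_p[s][o], p) for p in states
            )
            row[s] = (prob, prev)
        V.append(row)
    # Backtrace from the best final state.
    path = [max(states, key=lambda s: V[-1][s][0])]
    for row in reversed(V[1:]):
        path.append(row[path[-1]][1])
    return list(reversed(path))

# Toy example: hidden weather states explain observed activities.
states = ("Rainy", "Sunny")
start = {"Rainy": 0.6, "Sunny": 0.4}
trans = {"Rainy": {"Rainy": 0.7, "Sunny": 0.3},
         "Sunny": {"Rainy": 0.4, "Sunny": 0.6}}
emit = {"Rainy": {"walk": 0.1, "shop": 0.4, "clean": 0.5},
        "Sunny": {"walk": 0.6, "shop": 0.3, "clean": 0.1}}
best = viterbi(("walk", "shop", "clean"), states, start, trans, emit)
```

In ASR the same dynamic program runs over HMM states of context-dependent phones, with acoustic-model likelihoods as emission scores and the language model shaping the transitions.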



Salary

We offer a competitive salary and bonus scheme, pension contribution matching and a generous EMI share option scheme. We also have several additional benefits including private medical insurance, holiday purchase, life assurance, childcare vouchers, cycle scheme, massages, bike doctor, fully stocked drinks fridge, and fresh fruit available daily to name just a few!


6-32(2017-11-10) Speech Recognition Intern, Speechmatics, Cambridge, UK

Speech Recognition Intern

Location: Cambridge, UK

Contact: careers@speechmatics.com

Background

Speechmatics’ versatile automatic speech recognition technology, based on decades of research and experience in neural networks, is enabling world-leading companies to power a speech-enabled future. Having already transcribed millions of hours of audio and helped customers across a diverse range of use cases and applications, the team’s mission is to build the best speech technology for any application, anywhere, in any language and put speech back at the heart of communication.

In the office, we pride ourselves on a relaxed but productive environment enabling both commercial success and personal development - we often host lunch and learn sessions and attend regular academic and commercial conferences. When we’re not working hard, we regularly host company outings and events where your plus-one is welcomed to enjoy great food, great drinks and great company! We also reward ourselves occasionally with massages, and even get our bikes fixed onsite!

We are expanding rapidly and are seeking talented people to join us as we continue to push the boundaries of speech recognition. This is an opportunity to join a high growth team and form a major part of its future direction.



The Opportunity

We are looking for a bright, enthusiastic and talented speech recognition intern to help us build the best speech technology for anybody, anywhere, in any language. You will be a part of a team that is building language packs and developing our core ASR capabilities, including improving our speed, accuracy and support for all languages. Your work will feed into the ‘Automatic Linguist’, our ground-breaking framework to support the building of ASR models and hence the delivery of every language pack published by the company. Alongside the wider team you will be responsible for keeping our system the most accurate and useful commercial speech recognition available.

Because you will be joining a rapidly expanding team, you will need to be a team player who thrives in a fast paced environment, happy to pick up whatever needs to be done, with a focus on rapidly moving research developments into products. We strongly encourage versatility and knowledge transfer within the team, so we can share efficiently what needs to be done to meet our commitments to the rest of the company.



Key Responsibilities

  • Delivering high quality speech recognition products

  • Keeping us ahead of the rest of the world in terms of speech recognition capabilities

  • Learning!

 

 



Experience

Essential

  • Interest in speech recognition

  • Experience using Python

  • Experience using Unix / Linux systems

  • A team player willing to contribute to all sprint activities

 

Desirable

  • Experience in Speech recognition or related fields

  • Experience working with standard speech and ML toolkits, e.g. Kaldi, KenLM, TensorFlow, etc.

  • Knowledge of computational linguistics.

  • Knowledge of modern software development practices



Salary

This will be a paid internship. We also have several additional benefits including massages, bike doctor, fully stocked drinks fridge, and fresh fruit available daily to name just a few!


6-33(2017-11-10) Speech Recognition Engineer, Speechmatics, Cambridge, UK

Speech Recognition Engineer

Location: Cambridge, UK

Contact: careers@speechmatics.com

Background

Speechmatics’ versatile automatic speech recognition technology, based on decades of research and experience in neural networks, is enabling world-leading companies to power a speech-enabled future. Having already transcribed millions of hours of audio and helped customers across a diverse range of use cases and applications, the team’s mission is to build the best speech technology for any application, anywhere, in any language and put speech back at the heart of communication.

In the office, we pride ourselves on a relaxed but productive environment enabling both commercial success and personal development - we often host lunch and learn sessions and attend regular academic and commercial conferences. When we're not working hard, we regularly host company outings and events where your plus-one is welcomed to enjoy great food, great drinks, and great company! We also reward ourselves occasionally with massages, and even get our bikes fixed onsite!

We think it’s important to give a little back too, so everyone is eligible for some time off for charity work plus we’ll match your contribution via the Give As You Earn scheme. See more about our great perks below!

We are expanding rapidly and are seeking talented people to join us as we continue to push the boundaries of speech recognition. This is an opportunity to join a high growth team and form a major part of its future direction.



The Opportunity

We are looking for a talented speech recognition engineer to help us build the best speech technology for anybody, anywhere, in any language. You will be a part of a team that is building language packs and developing our core ASR capabilities, including improving our speed, accuracy and support for all languages. Your work will feed into the ‘Automatic Linguist’, our ground-breaking framework to support the building of ASR models and hence the delivery of every language pack published by the company. Alongside the wider team you will be responsible for keeping our system the most accurate and useful commercial speech recognition available.

Because you will be joining a rapidly expanding team, you will need to be a team player who thrives in a fast paced environment happy to pick up whatever needs to be done, with a focus on rapidly moving research developments into products. We strongly encourage versatility and knowledge transfer within the team, so we can share efficiently what needs to be done to meet our commitments to the rest of the company.



Key Responsibilities

  • Delivering high quality speech recognition products

  • Keeping us ahead of the rest of the world in terms of speech recognition capabilities

 

Experience

Essential

  • Practical experience of speech recognition or a related field with crossover knowledge

  • Interest in speech recognition

  • Experience using Python

  • Experience using Unix / Linux systems

  • A team player willing to contribute to all sprint activities

 

Desirable

  • Experience in modern speech recognition, such as WFSTs, lattice processing, neural nets (RNN / DNN / LSTM etc.), acoustic and language modelling, etc.

  • Experience working with standard speech and ML toolkits, e.g. Kaldi, KenLM, TensorFlow, etc.

  • Knowledge of computational linguistics.

  • Production-grade software development experience, especially with Python, C/C++ and / or Go.

  • Experience working effectively with software engineering teams or as a Software Engineer.

  • Experience working in an Agile framework



Salary

We offer a competitive salary and bonus scheme, pension contribution matching and a generous EMI share option scheme. We also have several additional benefits including private medical insurance, holiday purchase, life assurance, childcare vouchers, cycle scheme, massages, bike doctor, fully stocked drinks fridge, and fresh fruit available daily to name just a few!


6-34(2017-11-10) Senior Speech Recognition Engineer, Speechmatics, Cambridge, UK

Senior Speech Recognition Engineer

Location: Cambridge, UK

Contact: careers@speechmatics.com

Background

Speechmatics’ versatile automatic speech recognition technology, based on decades of research and experience in neural networks, is enabling world-leading companies to power a speech-enabled future. Having already transcribed millions of hours of audio and helped customers across a diverse range of use cases and applications, the team’s mission is to build the best speech technology for any application, anywhere, in any language and put speech back at the heart of communication.

In the office, we pride ourselves on a relaxed but productive environment enabling both commercial success and personal development - we often host lunch and learn sessions and attend regular academic and commercial conferences. When we're not working hard, we regularly host company outings and events where your plus-one is welcomed to enjoy great food, great drinks, and great company! We also reward ourselves occasionally with massages, and even get our bikes fixed onsite!

We think it’s important to give a little back too, so everyone is eligible for some time off for charity work plus we’ll match your contribution via the Give As You Earn scheme. See more about our great perks below!

We are expanding rapidly and are seeking talented people to join us as we continue to push the boundaries of speech recognition. This is an opportunity to join a high growth team and form a major part of its future direction.



The Opportunity

We are looking for a talented and experienced speech recognition engineer to help us build the best speech technology for anybody, anywhere, in any language. You will be a part of a team that is building language packs and developing our core ASR capabilities, including improving our speed, accuracy and support for all languages. Your work will feed into the ‘Automatic Linguist’, our ground-breaking framework to support the building of ASR models and hence the delivery of every language pack published by the company. Alongside the wider team you will be responsible for keeping our system the most accurate and useful commercial speech recognition available.

Because you will be joining a rapidly expanding team, you will need to be a team player who thrives in a fast paced environment, happy to pick up whatever needs to be done, with a focus on rapidly moving research developments into products. We strongly encourage versatility and knowledge transfer within the team, so we can share efficiently what needs to be done to meet our commitments to the rest of the company.



Key Responsibilities

  • Delivering the artefacts comprising high quality speech recognition software products

  • Keeping us ahead of the rest of the world in terms of speech recognition capabilities

  • Transferring knowledge to the wider team and company

 

 

Experience

Essential

  • Practical experience of speech recognition, covering all aspects (acoustic, pronunciation and language modelling as well as decoders / search)

  • Experience working with standard speech and ML toolkits, e.g., Kaldi, KenLM, TensorFlow, etc.

  • Solid Python programming skills

  • Experience using Unix / Linux systems

  • A team player willing to contribute to all sprint activities

  • Ability to effectively communicate highly technical subjects

 

Desirable

  • Expertise in all aspects of modern speech recognition, including WFSTs, lattice processing, neural nets (RNN / DNN / LSTM etc.), etc.

  • Knowledge of computational linguistics.

  • Deep production-grade software development experience, especially with Python, C/C++ and / or Go.

  • Experience working effectively with software engineering teams or as a Software Engineer.

  • Experience of team leadership and line management

  • Experience working in an Agile framework





Salary

We offer a competitive salary and bonus scheme, pension contribution matching and a generous EMI share option scheme. We also have several additional benefits including private medical insurance, holiday purchase, life assurance, childcare vouchers, cycle scheme, massages, bike doctor, fully stocked drinks fridge, and fresh fruit available daily to name just a few!


6-35(2017-11-12) Language Resources Project Manager - Junior (m/f), ELDA, Paris, France

The European Language resources Distribution Agency (ELDA), a company specialized in Human Language Technologies within an international context, is currently seeking to fill an immediate vacancy for a Language Resources Project Manager - Junior position. This offers an excellent opportunity for young, creative and motivated candidates wishing to participate actively in the Language Engineering field.

Language Resources Project Manager - Junior (m/f)

Under the supervision of the Language Resources Manager, the Language Resources Project Manager - Junior will be in charge of the identification of Language Resources (LRs), the negotiation of rights relating to their distribution, as well as data preparation, documentation and curation.

 The position includes, but is not limited to, the responsibility of the following tasks:

  • Identification of LRs and Cataloguing
  • Negotiation of distribution rights, including interaction with LR providers, drafting of distribution agreements, definition of prices of language resources to be integrated in the ELRA catalogue or for research projects
  • LR Packaging within production projects
  • Data preparation, documentation and curation

 Profile:

  • PhD in computational linguistics or similar fields
  • Experience in managing NLP tools
  • Experience in project management and participation in European projects, as well as practice in contract and partnership negotiation at an international level, would be a plus
  • Good knowledge of script programming (Perl, Python or other languages)
  • Good knowledge of Linux
  • Dynamic and communicative, flexible to combine and work on different tasks
  • Ability to work independently and as part of a team
  • Proficiency in English, with strong writing and documentation skills. Communication skills required in a French-speaking working environment
  • Citizenship of a European Union country (or residency papers)

 

All positions are based in Paris. Applications will be considered until the position is filled.

Salary is commensurate with qualifications and experience.
Applicants should email a cover letter addressing the points listed above together with a curriculum vitae to:

ELDA
9, rue des Cordelières
75013 Paris
FRANCE
Fax : 01 43 13 33 30
Email: job@elda.org

ELDA is acting as the distribution agency of the European Language Resources Association (ELRA). ELRA was established in February 1995, with the support of the European Commission, to promote the development and exploitation of Language Resources (LRs). Language Resources include all data necessary for language engineering, such as monolingual and multilingual lexica, text corpora, speech databases and terminology. The role of this non-profit membership Association is to promote the production of LRs, to collect and to validate them and, foremost, make them available to users. The association also gathers information on market needs and trends.

For further information about ELDA and ELRA, visit:
http://www.elra.info


6-36(2017-11-15) PHD RESEARCH FELLOWSHIPS ( ML/Dialogue/Language/Speech), University of Trento, Italy

Title: 2018 PHD RESEARCH FELLOWSHIPS ( ML/Dialogue/Language/Speech)
Location: University of Trento, Italy

You may have enjoyed reading about bots, artificial intelligence, machine learning,
digital assistants, and systems that support doctors, teachers and customers and help people.
Then you may want to consider taking a front-row seat and joining the research team
that has been training intelligent machines and evaluating AI-based systems
for more than two decades, collaborating with the best research labs in the world and
deploying them in the real world.

Here is a sample of the projects (http://sisl.disi.unitn.it/demo/) the Signals
and Interactive Systems Lab (University of Trento, Italy) has been leading:

-Natural Language Understanding systems for massive amount of human language data:
http://www.sensei-conversation.eu

-Amazon Alexa challenge on Conversational Systems:
http://sisl.disi.unitn.it/university-of-trento-is-selected-by-amazon-for-the-alexa-challenge/

-Designing AI personal agents for healthcare domain:
http://sisl.disi.unitn.it/pha/

We are looking for top-candidates for its funded PhD research fellowships.
Candidates should have background at least in one of the following areas:


- Speech Processing

- Natural Language Understanding

- Conversational Systems

- Machine Learning

Candidates will be working on research domains such as Conversational Agents,
Intelligent Systems, Speech/Text Document Mining and Summarization,
Human Behavior Understanding, Crowd Computing and AI-based systems for tutoring.


For more info on research and projects, visit the lab website at http://sisl.disi.unitn.it/

The SIS Lab research is driven by an interdisciplinary approach to research,
attracting researchers from  disciplines such as Digital Signal Processing,
Speech Processing, Computational Linguistics, Psychology, Neuroscience and
Machine Learning.

The official language (research and teaching) of the department is English.

FELLOWSHIP

The gross amount of the fellowships (internship and PhD) is competitive, approximately
1,600 Euro/month.
Students may qualify for reduced rates for campus lodging, transportation and the
cafeteria.

For more information about cost of living, campus, graduate education programs,
please visit the graduate school website at http://ict.unitn.it/

DEADLINES

Immediate openings with start date as early as March 2018.  Open until filled.

REQUIREMENTS

The strict requirement is at least a Master-level degree in Computer Science, Electrical
Engineering, Computational Linguistics or similar or affine disciplines. Students with
other backgrounds (Physics, Applied Math) may apply as well. Background in at least one
of the posted research areas is required. All applicants should have very good
programming and math skills and be used to teamwork.

HOW TO APPLY

Interested applicants should send their
1) CV,
2) statement of research interest, and
3) three reference letters to:

Email: sisl-jobs@disi.unitn.it


For more info:

Signals and Interactive Systems Lab : http://sisl.disi.unitn.it/

PhD School : http://ict.unitn.it/

Department : http://disi.unitn.it/


Information Engineering and Computer Science Department (DISI)

DISI has a strong focus on cross-disciplinarity with professors from different
faculties of the University (Physical Science, Electrical Engineering, Economics,
Social Science, Cognitive Science, Computer Science) with international
background. DISI aims at exploiting the complementary experiences present in the
various research areas in order to develop innovative methods, technologies and
applications.

University of Trento

The University of Trento is consistently ranked as a premier Italian university
institution.
See http://www.unitn.it/en/node/1636/mid/2573

University of Trento is an equal opportunity employer.


6-37(2017-11-20) Audio Signal Processing Maverick (Applied Research), AVA, Paris, France

Audio Signal Processing Maverick (Applied Research)

 

 

ASR research is sadly mostly done in big companies. Sure, talent is around, but it's really only in a less crowded space that you can really shine and see the impact that you can have. A.K.A. an early-stage startup, when you’re still a group of friends with a crazy ambition to change the world.

Here’s what drives us nuts: products where ASR is really the key component are 99% of the time made for:

  • answering quick/superficial requests from lazy users (Siri, Google Now, Cortana, Echo)
  • dealing with angry customers on the phone (all the IVRs)
  • dictating emails for busy people (Nuance)

What if you could truly change 400M lives instead? Turn a lifetime of frustration into a deep connection?

Ava aims at captioning the world to make it fully accessible, 24/7, to deaf & hard-of-hearing people. Mobile-first, the app is the fastest & most advanced captioning system in the world, beating what tech giants have done, by cleverly using speech and speaker identification technologies to make conversations between deaf & hard-of-hearing people and hearing people possible.

At Ava, the CEO is the only hearing person in a family of deaf people, and the CTO is deaf and non-speaking - both were Forbes 30 under 30 2017. We use our ASR-based product every day to communicate. Our motivations are aligned with the change we want to make in the world. We care about the millions of people out there who struggle every day to just have a social & professional life, and whom YOUR tech will help. If it wasn't for Ava, the next best solution would take 10X the time (it's not a solution) or 100X the cost (it's not accessible to all). We’re working with companies such as GE, Nike, Salesforce, but also universities, stores, and even churches to fulfill our mission to make the world truly accessible.

What we need to get to the next level? You - someone with prior research experience in audio signal processing. The core mission will be to enhance a speech recognition system used in real-world cocktail-party situations. The signal is acquired via an array of ad-hoc microphones and processed to optimize its quality for transcription, using a set of techniques: source localization, Time Difference of Arrival, noise cancelling, source separation… all in real time.
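As a toy illustration of one of the techniques above, Time Difference of Arrival can be estimated by cross-correlating the signals from two microphones and converting the winning lag into a far-field direction-of-arrival angle. This is a minimal pure-Python sketch (not Ava's implementation; all names and parameters are illustrative, and real systems use FFT-based methods such as GCC-PHAT):

```python
import math

def tdoa_samples(sig_a, sig_b, max_lag):
    """Estimate the delay (in samples) of sig_b relative to sig_a by
    brute-force cross-correlation over lags in [-max_lag, max_lag]."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        score = 0.0
        for i, a in enumerate(sig_a):
            j = i + lag
            if 0 <= j < len(sig_b):
                score += a * sig_b[j]
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

def doa_degrees(lag, fs, mic_distance, c=343.0):
    """Convert a sample lag into a direction-of-arrival angle for a
    two-microphone array under a far-field (plane wave) assumption."""
    delay = lag / fs
    # Clamp to the valid arcsin range to guard against noisy estimates.
    x = max(-1.0, min(1.0, c * delay / mic_distance))
    return math.degrees(math.asin(x))

# Toy example: a unit pulse arriving 3 samples later at microphone B.
mic_a = [0.0] * 20
mic_a[5] = 1.0
mic_b = [0.0] * 20
mic_b[8] = 1.0
lag = tdoa_samples(mic_a, mic_b, max_lag=10)
angle = doa_degrees(lag, 16000, 0.1)  # 16 kHz audio, mics 10 cm apart
```

Once the direction of the target speaker is known, a beamformer can steer towards it and attenuate competing sources before the audio reaches the recogniser.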

Interested to learn more about it? Let’s chat.

Especially if:

● You just finished a PhD in audio signal processing.

● You’re ready to be a pioneer in the field, and do what is necessary to make things work in real world situations.

● You're of the persistent, yet open-minded and collaborative type: you reason by independent thinking first, but you know that together, we're stronger.

What you get

● Early-stage -> massive equity opportunity.

● An opportunity to apply cutting-edge technologies to solve real world problems, right now

● Competitive salary

The job will be based in our Paris office.

 

 

Interested? Let us know at alex@ava.me

Back  Top

6-38(2017-11-20) Speaker Identification Maverick (Applied Research), Ava, Paris, France

Speaker Identification Maverick (Applied Research) 

ASR research is, sadly, mostly done in big companies. Sure, talent is around, but it's really only in a less crowded space that you can shine and see the impact you can have. A.k.a. an early-stage startup, when you’re still a group of friends with a crazy ambition to change the world.
Here’s what drives us nuts: products where ASR is really the key component are 99% of the time made for:

● answering quick/superficial requests from lazy users (Siri, Google Now, Cortana, Echo)

● dealing with angry customers on the phone (all the IVRs)

● dictating emails for busy people (Nuance)
 What if you could truly change 400M lives instead? Turn a lifetime of frustration into a deep connection?
 Ava aims at captioning the world to make it fully accessible, 24/7, to deaf & hard-of-hearing people. Mobile-first, the app is the fastest & most advanced captioning system in the world, beating what tech giants have done, by cleverly using speech and speaker identification technologies to make conversations between deaf & hard-of-hearing people and hearing people possible.
At Ava, the CEO is the only hearing person in a family of deaf people, and the CTO is deaf and non-speaking - both were Forbes 30 under 30 2017. We use our ASR-based product every day to communicate. Our motivations are aligned with the change we want to make in the world. We care about the millions of people out there who struggle every day just to have a social & professional life, and whom YOUR tech will help. If it weren't for Ava, the next best solution would take 10X the time (it's not a solution) or 100X the cost (it's not accessible to all). We’re working with companies such as GE, Nike, and Salesforce, but also universities, stores, and even churches to fulfill our mission to make the world truly accessible.
What do we need to get to the next level? You - someone with prior research exposure to speaker identification (deep learning interest/experience is a big plus). The core of your mission will be to reinvent what voice recognition can do to understand real-world conversations: crack the cocktail-party problem.
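As a toy illustration of a basic building block of speaker identification, here is a sketch that scores two fixed-size speaker embeddings (e.g. i-vectors or d-vectors) by cosine similarity; the example vectors are invented for illustration and would in practice come from a trained model:

```python
import numpy as np

def cosine_score(emb_a, emb_b):
    """Score two speaker embeddings by cosine similarity; thresholding
    or ranking such scores is what turns embeddings into same-speaker
    decisions."""
    a = np.asarray(emb_a, dtype=float)
    b = np.asarray(emb_b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings: two close vectors score high, a distant one scores low
same = cosine_score([0.9, 0.1, 0.2], [0.8, 0.2, 0.2])
diff = cosine_score([0.9, 0.1, 0.2], [-0.1, 0.9, 0.1])
```

In the cocktail-party setting, scores like these are computed continuously over short segments to attribute speech to the right participant.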
 Interested to learn more about it? Let’s chat.
Especially if:

● You just finished a PhD in Machine Learning.

● Experience in Speaker Identification, ASR, NLP, acoustic modeling, language models or source separation is a plus.

● You aspire to be a pioneer in the field and will do what is necessary to make things work in real-world situations.

● You're of the persistent, yet open-minded and collaborative type: you reason by independent thinking first, but you know that together, we're stronger.

 

What we offer

● Early-stage -> massive equity opportunity.

● An opportunity to apply cutting-edge technologies to solve real world problems, right now

● Competitive salary

The job will be based in our Paris office.

 

Interested? Let us know at alex@ava.me.

Back  Top

6-39(2017-11-21) PhD position in Opinion Analysis in human-agent interactions, Telecom ParisTech, Paris France

PhD position in Opinion Analysis in human-agent interactions

 

 

Telecom ParisTech [1]

46 rue Barrault, 75013 Paris, France


Starting date: from Now to Early Autumn 2018

Possibility to start with an internship during first semester 2018.

 

Duration of the PhD funding: 36 months

  

*Position description* 

 

The PhD student will take part in the ANR JCJC project MAOI (Multimodal Analysis of Opinions in Interactions) at Telecom-ParisTech. He/she will tackle the following challenging issue: the integration of opinion mining methods in human-agent interactions (e.g. companion robots or virtual vocal assistants such as Siri, Google Now, Cortana, etc.).

The role of the PhD student will consist in developing machine learning methods for the multimodal (i.e. speech and text) analysis of the user's opinion during his/her interaction with an agent. The main challenge will be to integrate the interaction context into machine-learning opinion detection methods.

The work will include:

- the development of machine learning/deep learning approaches (Conditional Random Fields, Long Short-Term Memory networks)

- the integration of complex and interactional linguistic features in machine-learning models for the detection of opinions in interactions

- the integration of acoustic features in multimodal models

- the evaluation of the system in interaction context.
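As a toy illustration of the sequence models mentioned in the work list, here is a minimal Viterbi decoder for a linear-chain model such as a CRF; the scores are invented for the example and this is not the project's codebase:

```python
import numpy as np

def viterbi(emissions, transitions):
    """Viterbi decoding for a linear-chain model such as a CRF.
    emissions[t, k] scores label k at step t; transitions[i, j] scores
    moving from label i to label j. Returns the best label sequence."""
    T, K = emissions.shape
    score = emissions[0].copy()
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        # cand[i, j]: best score ending in label j coming from label i
        cand = score[:, None] + transitions + emissions[t][None, :]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):          # backtrack through the pointers
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Three utterances, two opinion labels; emissions favour labels 0, 1, 0
em = np.array([[2.0, 0.0], [0.0, 2.0], [2.0, 0.0]])
tr = np.zeros((2, 2))
best = viterbi(em, tr)
```

In the opinion-detection setting, the emission scores would come from learned features of each utterance, and the transitions would capture interaction context between consecutive opinion labels.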

The PhD student will join the Social Computing topic [2] in the S2a group [3] at Telecom-ParisTech.

Selected references for this position, from [4]:

Barriere, V., Clavel, C., and Essid, E. (2017). Opinion dynamics modeling for movie review transcripts classification with hidden conditional random fields. Interspeech.

Clavel, C.; Callejas, Z., Sentiment analysis: from opinion mining to human-agent interaction, Affective Computing, IEEE Transactions on, 7.1 (2016): 74-93

Langlet, C. and Clavel, C. (2015). Improving social relationships in face-to-face human-agent interactions: when the agent wants to know user's likes and dislikes. In ACL, Beijing, China.

Langlet, C. and Clavel, C. (2016). Grounding the detection of the user's likes and dislikes on the topic structure of human-agent interactions. Knowledge-Based Systems.


*Candidate profile*

 

As a minimum requirement, the successful candidate will have:

 

•    A Master's degree or equivalent in one or more of the following areas: machine learning, natural language processing, affective computing

•    Excellent programming skills (preferably in Python)

•    Good command of English

 

The ideal candidate will also (optionally) have:

•    Knowledge in natural language processing

•    Knowledge in probabilistic graphical models and deep learning

 

-- More about the position

•    Place of work: Paris, France

•    For more information about Telecom ParisTech see [1]

 

-- How to apply

Applications are to be sent to Chloé Clavel [4]

 

The application should be formatted as a single pdf file and should include:

•    A complete and detailed curriculum vitae

•    A letter of motivation

•    The transcript of grades

•    The names and addresses of two referees

 

[1] https://www.telecom-paristech.fr/eng/  

[2] https://www.tsi.telecom-paristech.fr/recherche/themes-de-recherche/analyse-automatique-des-donnees-sociales-social-computing/ 

[3] http://www.tsi.telecom-paristech.fr/ssa/# 

[4] https://clavel.wp.imt.fr/publications/

Back  Top

6-40(2017-11-20) ASSISTANT PROFESSOR IN HUMAN-CENTERED COMPUTING, Virginia Tech, USA

ASSISTANT PROFESSOR IN HUMAN-CENTERED COMPUTING

The Department of Computer Science at Virginia Tech (www.cs.vt.edu) seeks applicants for a tenure-track assistant professor position in human-centered computing.  Exceptional candidates at higher ranks may also be considered. Strong candidates from any area related to human-computer interaction, user experience, or interactive computing are encouraged to apply. We especially encourage applicants with interests in novel interactive experiences and technologies, including immersive environments (virtual reality and augmented reality), multi-sensory displays, multi-modal input, visualization, visual analytics, human-robot interaction, game design, and creative technologies.

The successful candidate will have the opportunity to engage in transdisciplinary research, curriculum, and outreach initiatives with other university faculty working in the Creativity & Innovation (C&I) Strategic Growth Area, one of several new university-wide initiatives at Virginia Tech (see provost.vt.edu/destination-areas).  The C&I Strategic Growth Area is focused on empowering partners and stakeholders to collaborate on creativity, innovation, and entrepreneurship efforts that transcend disciplinary boundaries. Faculty working together in this area comprise a vibrant ecosystem that melds the exploration of innovative technologies and the design of creative experiences with best practices for developing impact-driven and meaningful outcomes and solutions.  Candidates with demonstrated experience in interdisciplinary teaching or research that aligns with the C&I vision (provost.vt.edu/destination-areas/sga-overview/sga-creativity.html) are especially encouraged to apply. The successful candidate will also have opportunities for collaboration in the interdisciplinary Center for Human-Computer Interaction (www.hci.vt.edu) that includes nearly 40 faculty across campus; the Institute for Creativity, Arts, and Technology (icat.vt.edu) housed in the new Moss Arts Center; and the Discovery Analytics Center (dac.cs.vt.edu).

Applications must be submitted online to jobs.vt.edu (https://listings.jobs.vt.edu/postings/80519) for posting #TR0170152.  Applicant screening will begin on December 1, 2017 and continue until the position is filled. Inquiries should be directed to Dr. Doug Bowman, Search Committee Chair, dbowman@vt.edu.


--
Doug A. Bowman
Frank J. Maher Professor, Computer Science
Director, Center for Human-Computer Interaction
Fellow, Institute for Creativity, Arts, and Technology
Virginia Tech
dbowman@vt.edu
Personal: http://people.cs.vt.edu/~bowman/
Group: http://research.cs.vt.edu/3di/
Center: http://hci.vt.edu/
Twitter: @CHCI_VT

Back  Top

6-41(2017-11-20) Three Postdoctoral Researchers/Project Researchers (Speech processing and deep learning), University of Eastern Finland, Finland
Three Postdoctoral Researchers/Project Researchers (Speech processing and deep learning)
 
The University of Eastern Finland, UEF, is one of the largest multidisciplinary universities in Finland. We offer education in nearly one hundred major subjects, and are home to approximately 15,000 students and 2,500 members of staff. From 1 August 2018 onwards, we'll be operating on two campuses, in Joensuu and Kuopio. In international rankings, we are ranked among the leading universities in the world.
 
The Faculty of Science and Forestry operates on the Kuopio and Joensuu campuses of the University of Eastern Finland. The mission of the faculty is to carry out internationally recognised scientific research and to offer research-based education in the fields of natural sciences and forest sciences. The faculty invests in all of the strategic research areas of the university. The faculty's environments for research and learning are international, modern and multidisciplinary.  The faculty has approximately 3,800 Bachelor's and Master's degree students and some 490 postgraduate students. The number of staff amounts to 560. http://www.uef.fi/en/lumet/etusivu
 
We are now inviting applications for three Postdoctoral Researcher/Project Researcher positions in speech processing and deep learning funded by Academy of Finland, School of Computing, Joensuu Campus.
 
o Two positions in automatic speaker recognition, voice conversion and anti-spoofing (NOTCH project)
o One position in deep reinforcement learning for physical agents (DEEPEN project)
 
The two projects share similarities in terms of machine learning methods being used and developed further, but are otherwise differently focused.
 
The NOTCH research project (NOn-cooperaTive speaker CHaracterization), led by Associate Professor Tomi Kinnunen, aims at advancing the state of the art in automatic speaker verification (defense) and voice conversion (attack) under a generic umbrella of non-cooperative speech, whether induced by spoofing attacks, disguise, or other intentional voice modifications. A successful applicant needs to have a background in speaker verification, anti-spoofing, voice conversion, machine learning or closely related topics.
 
The DEEPEN research project (Deep Reinforcement Learning for Physical Agents) is run in co-operation between UEF and the robotics group at Aalto University. UEF's part, led by Senior Researcher Ville Hautamäki, aims at designing new statistical models for simulated robot control and taking steps towards solving the so-called 'reality gap' problem.  The post-doc may also contribute to speech and deep learning topics. A successful applicant needs to have a background in deep learning, reinforcement learning, speech technology or machine vision. Practical experience with DRL research environments (e.g. VizDoom or MuJoCo) will be counted as a plus.
 
The Machine Learning group of the School of Computing, at the facilities of Joensuu Science Park, provides access to modern research infrastructure and is a strongly international working environment. We hosted the Odyssey 2014 conference, were a partner in the H2020-funded OCTAVE project, and are a co-founder of the Automatic Speaker Verification and Countermeasures (ASVspoof) challenge series (http://www.asvspoof.org/).
 
A person to be appointed as a postdoctoral researcher shall hold a suitable doctoral degree that has been awarded less than five years ago. If the doctoral degree has been awarded more than five years ago, the post will be one of a project researcher. The doctoral degree should be in  spoken language technology, electrical engineering, computer science, machine learning or a closely related field.  Researchers finishing their PhD in the near future are also encouraged to apply for the positions.  However, they are expected to hold a PhD degree by the starting date of the position. We expect strong hands-on experience and creative out-of-the-box problem solving attitude. A successful applicant needs to have an internationally proven track record in topics relevant to the project he or she applies to.
 
English may be used as the language of instruction and supervision in these positions.
The positions will be filled from January 1, 2018 at the earliest, for a period of 12 months. The continuation of the position will be agreed separately. The position will be filled for a fixed term because it pertains to a specific project (postdoctoral researcher positions shall always be filled for a fixed term, UEF University Regulations 31 §).
 
The salary of the position is determined in accordance with the salary system of Finnish universities and is based on level 5 of the job requirement level chart for teaching and research staff (€2,865.30/month). In addition to the job requirement component, the salary includes a personal performance component, which may be a maximum of 46.3% of the job requirement component.
 
For further information on the position, please contact (NOTCH): Associate Professor Tomi Kinnunen, email: tkinnu@cs.uef.fi, tel. +358 50 442 2647 and (DEEPEN): Senior Researcher Ville Hautamäki, email: villeh@cs.uef.fi, tel. +358 50 511 8271.  For further information on the application procedure, please contact: Executive Head of Administration Arja Hirvonen, tel. +358 44 716 3422, email: arja.hirvonen@uef.fi.
 
A probationary period is applied to all new members of the staff.
You can use the same electronic form to apply for both research projects. The electronic application should contain the following appendices:
 
- a résumé or CV
- a list of publications
- copies of the applicant's academic degree certificates/diplomas, and copies of certificates/diplomas relating to the applicant's language proficiency, if not indicated in the academic degree certificates/diplomas
- motivation letter
- a cover letter indicating the position to be applied for
- The names and contact information of at least two referees are requested in the application form.
 
The application needs to be submitted no later than December 22, 2017 (by 24:00 EET) by using the electronic application form. Navigate to http://www.uef.fi/en/uef/en-open-positions and search for 'Three Postdoctoral Researchers/Project Researchers (Speech processing and deep learning)' to find the link to the electronic application form.
Back  Top

6-42(2017-12-03) Machine Learning Engineer, Speech Recognition, AJA.LA Studios, London, UK

Machine Learning Engineer, Speech Recognition 
 
Location: London, UK Contact: hello@ajalastudios.com 
 
Summary & Opportunity 
 
AJA.LA Studios is a funded early-stage startup developing speech and natural language understanding technologies for under-resourced languages. We are looking to hire an engineer, to be based in London, to participate in developing acoustic and language models, and related algorithms, for our suite of proprietary speech recognition products for a broad library of under-resourced languages. This role provides a unique opportunity to pursue research and commercialization of speech recognition for under-resourced languages.

Ideally, candidates should be comfortable working with large quantities of data, have an interest in and/or demonstrated experience working with under-resourced languages, and an interest in working on the entire R&D/product-development cycle.
 
Skills & Requirements 
 
The ideal candidate should possess a combination of the following skills and qualifications:

• Masters or PhD in an analytical discipline through which you have acquired a strong knowledge of topics including:

o Theory and practice of speech recognition and/or speech processing (LVCSR)

o Signal Processing/Pattern Recognition

o Probability theory

o Bayesian inference

o Machine learning and related topics

• Strong software development skills:

o Required: C/C++, Python, CUDA/Nsight IDE, shell scripting, Perl, GitHub/SVN

o Optional/Additional: Java/Android/Gradle/Android Studio, Objective-C/Xcode/Cocos2d-x

• Speech processing, neural network and natural language platforms and libraries:

o Kaldi, KenLM, OpenFST, and HTS

o Theano, PDNN, PyTorch, TensorFlow

• Operating Systems: Unix/Linux/Mac OS
 
 
 
 

1 The Green, Richmond, TW9 1PL, UK    www.ajalastudios.com


Salary 
 
We offer a competitive salary, pension contribution, private medical insurance, share options and flexible working hours, amongst other benefits.

Back  Top

6-43(2017-12-04) Machine Learning Engineer, Speech Synthesis, AJA.LA Studios, London, UK

Machine Learning Engineer, Speech Synthesis 
 
Location: London, UK Contact: hello@ajalastudios.com 
 
Summary & Opportunity 
 
AJA.LA Studios is a funded early-stage startup developing speech and natural language understanding technologies for under-resourced languages. We are looking to hire an engineer, to be based in London, to participate in developing unit-selection and parametric speech synthesis for a broad library of under-resourced languages. This role provides a unique opportunity to pursue research and commercialization of speech synthesis for under-resourced languages. 
 
Ideally, candidates should be comfortable working with large quantities of data, have an interest in and/or demonstrate experience working with under-resourced languages, an interest in working on the entire R&D/product-development cycle, and possess the following skills and qualifications 
 
Skills & Requirements 
 
• Masters or PhD in an analytical discipline through which you have acquired a strong knowledge of topics including:

o Theory and practice of speech synthesis and/or speech processing, e.g. vocoding

o Signal Processing/Pattern Recognition

o Probability theory

o Bayesian inference

o Machine learning and related topics

• Strong software development skills:

o Required: C/C++, Python, CUDA/Nsight IDE, shell scripting, Perl, GitHub/SVN

o Optional/Additional: Java/Android/Gradle/Android Studio, Objective-C/Xcode/Cocos2d-x

• Speech processing, neural network and natural language platforms and libraries:

o Festival, HTK, and HTS

o Theano, PDNN, PyTorch, TensorFlow

• Operating Systems: Unix/Linux/Mac OS
 
 
 
 

1 The Green, Richmond, TW9 1PL, UK

www.ajalastudios.com
Salary 
 
We offer a competitive salary, pension contribution, private medical insurance, share options and flexible working hours, amongst other benefits.

Back  Top

6-44(2017-12-03) Research Assistant/Associate in Speech Processing, at Cambridge University Engineering Department, Cambridge, UK.
Research Assistant/Associate in Speech Processing, at Cambridge University Engineering Department, Cambridge, UK.
Back  Top

6-45(2017-12-05) One-year post-doctoral position in speech production, GIPSA, Grenoble, France

One-year post-doctoral position in speech production, in the framework of the StopNCo ANR project (http://www.agence-nationale-recherche.fr/Project-ANR-14-CE30-0017), starting from March 2018 (at the latest in October 2018).
More details at: https://www.gipsa-lab.grenoble-inp.fr/~maeva.garnier/mes_documents/PostDocPosition-StopNCo.pdf
I would be grateful if you could circulate this job offer in your research institution and forward it to anyone who may be interested.



Maëva Garnier




Back  Top

6-46(2017-12-06) PhD Position in Social Signal Processing for Multi-Sensor Conversation Quality Modeling, Delft University, The Netherlands

Job Link: https://tinyurl.com/MINGLEPhD

PhD Position in Social Signal Processing for Multi-Sensor Conversation Quality Modeling

Location: Delft University of Technology, The Netherlands

Deadline: January 12 2018 (see below for application procedure)

Project Description:

An important but under-explored problem in computer science is the automated analysis of conversational dynamics in large unstructured social gatherings such as networking or mingling events. Research has shown that attending such events contributes greatly to career and personal success. While much progress has been made in the analysis of small pre-arranged conversations, scaling up robustly presents a number of fundamentally different challenges.

Unlike analysing small pre-arranged conversations, during mingling, sensor data is seriously contaminated. Moreover, determining who is talking with whom is difficult because groups can split and merge at will. A fundamentally different approach is needed to handle both the complexity of the social situation as well as the uncertainty of the sensor data when analysing such scenes.

The successful applicants will develop automated techniques to analyse multi-sensor data (video, acceleration, audio, etc) of human social behavior. They will work as part of a team on the NWO Funded Vidi project MINGLE (Modelling Group Dynamics in Complex Conversational Scenes from Non-Verbal Behaviour). They will have the opportunity to interact with researchers from both computer science and social science both locally and internationally.

The main aim of the project is to address the following question: how can multi-sensor processing and machine learning methods be developed to model the dynamics of conversational interaction in large social gatherings using only non-verbal behaviour? The two projects advertised focus on developing novel computational methods to measure conversation quality (e.g. involvement, rapport) from multi-sensor streams in crowded environments.

 

Job requirements:

We are looking for students who have recently completed, or expect to complete very soon, an MSc or equivalent degree in computer science, electrical/electronic engineering, applied mathematics, applied physics, or a related discipline. Experience in the following or related fields is preferred: signal/audio/speech processing, computer vision, machine learning, and pattern recognition. Some experience with embedded systems is a bonus, though not necessary.

 

The successful applicant will have:
- good programming skills;
- curiosity and analytical skills;
- the ability to work in a multi-disciplinary team;
- motivation to meet deadlines;
- an affinity with the relevant social science research;
- good oral and written communication skills;
- proficiency in English;
- an interest in communicating their research results to a wider audience.

Institution:

 

The department of Intelligent Systems is part of the Faculty of Electrical Engineering, Mathematics and Computer Science (EEMCS) at Delft University of Technology. The faculty offers an internationally competitive interdisciplinary setting for its 500 employees, 350 PhD students and 1700 undergraduates. Together they work on a broad range of technical innovations in the fields of sustainable energy, quantum engineering, microelectronics, intelligent systems, software technology, and applied mathematics.

The Pattern Recognition and BioInformatics Group is one of five groups in the department, consisting of 7 faculty and over 20 postdoc and PhD students. Within this group, research is carried out in three core subjects; pattern recognition, computer vision, and bioinformatics. One of the main focuses of the group is on developing tools and theories, and gaining knowledge and understanding applicable to a broad range of general problems but typically involving sensory data, e.g. times signals, images, video streams, or other physical measurement data.

For information about the TU Delft Graduate School, please visit www.phd.tudelft.nl.

Application Procedure:

Interested applicants should send an up-to-date curriculum vitae, degree transcripts, letter of application, and the names and the contact information (telephone number and email address) of two references to Hr-eemcs@tudelft.nl with the subject heading '[MINGLE PhD]'.

The letter of application should summarise (i) why the applicant wants to do a PhD, (ii) why the project is of interest to the applicant, (iii) evidence of suitability for the job, and (iv) what the applicant hopes to gain from the position.

The application procedure is ongoing until the position is filled, so interested candidates are encouraged to apply as soon as possible and before January 12 2018. Note that candidates who apply after this deadline may still be considered but applications before the deadline will be given priority.

Back  Top

6-47(2017-12-08) PhD grant at IRISA, Rennes France

The Expression team at IRISA is recruiting a PhD student in computer science on the topic 'characterization of language registers through sequential pattern mining', within the framework of the ANR TREMoLo project.

Details of the offer:

https://www-expression.irisa.fr/files/2017/12/these_TREMoLo_2017.pdf

Application file (* = required items):
- detailed CV*
- cover letter*
- transcripts of grades (with ranking, if possible)*
- contacts for letters of recommendation*
- research internship report(s).

Send applications to: del.battistelli@gmail.com, nicolas.bechet@irisa.fr, gwenole.lecorve@irisa.fr.

Best regards,
Gwénolé Lecorvé.
Back  Top

6-48(2017-12-15) Internship 1 at LIA, Avignon, France

Adaptation of deep neural networks for speech transcription systems
 
Keywords: speech transcription system, language model, unsupervised adaptation
 
Description: Automatic Speech Recognition (ASR) consists in transcribing into text the words spoken in an audio or video recording. The most robust ASR systems often rely on a multi-pass architecture (Gauvain and Lee 1994) (Gales 1998), where each pass produces a transcription of the audio signal that is meant to be of better quality than the previous one. Thus, in some cases, the outputs of the previous pass are used to adapt the models of the current pass. The idea of this adaptation is to obtain models specialized to the recording, and hence more robust to the 'variabilities' of audio recordings (different acoustic conditions, unknown speakers, spontaneous speech, environmental noise...).
 
The general objective of the internship is to advance the state of the art in automatic speech transcription. More precisely, the internship will explore the unsupervised adaptation of deep neural networks. One of the main challenges is to use neural networks as language models and to be able to adapt them to a first transcription produced by the decoding.
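As a toy illustration of the kind of unsupervised adaptation involved, the sketch below interpolates a general unigram language model with word counts from a first-pass transcript. The function, weights and probabilities are illustrative assumptions; the internship itself targets neural language models:

```python
from collections import Counter

def adapt_unigram(general, first_pass_words, lam=0.8):
    """Interpolate a general unigram LM with word counts from the
    first-pass transcript, so a second decoding pass favours the
    recording's own vocabulary."""
    counts = Counter(first_pass_words)
    total = sum(counts.values())
    vocab = set(general) | set(counts)
    # Convex combination of the background probability and the
    # transcript-relative frequency for every word in either source
    return {w: lam * general.get(w, 0.0) + (1 - lam) * counts[w] / total
            for w in vocab}

# Toy check: 'a' dominates the first-pass transcript, so it is boosted
adapted = adapt_unigram({'a': 0.5, 'b': 0.5}, ['a', 'a'], lam=0.5)
```

The same interpolation idea carries over to neural models, where adaptation instead updates or rescales the network using the first-pass hypotheses.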
 
This topic may lead to a PhD thesis.
 
Candidate profile: Master 2 student in computer science. The candidate should have a good level in programming (C/C++ and/or Python). Notions of Natural Language Processing, speech processing or machine learning would be a plus.
 
Location: LIA, 339, chemin des Meinajariès, 84911 Avignon
 
Duration and pay: 6 months, approximately €580 per month.
 
Contact: Mickaël Rouvier – Associate Professor – mickael.rouvier@univ-avignon.fr; Richard Dufour – Associate Professor – richard.dufour@univ-avignon.fr
 
References:

Gales, Mark J.F. 'Maximum likelihood linear transformations for HMM-based speech recognition.' Computer Speech and Language (CSL), 1998.

Gauvain, Jean-Luc, and Chin-Hui Lee. 'Maximum a posteriori estimation for multivariate Gaussian mixture observations of Markov chains.' IEEE Transactions on Speech and Audio Processing (TASP), 1994.

Back  Top

6-49(2017-12-15) Internship 2 at LIA Avignon, France

Automatic video summarization by contextualizing video from text
 
 
Keywords: extractive automatic summarization
 
Description: Automatic summarization is a means of producing syntheses that extract the essential content and present it as concisely as possible. In this internship we are interested in extractive video summarization methods based on text analysis [Li11, Trione14, Favre15].

One of the classical approaches to automatic video summary generation goes through an intermediate textual representation: the audio content of the video (and sometimes the embedded on-screen text) is extracted, transcribed and then summarized. This text summary is then used to assemble a video summary. The general objective of the internship is to explore methods for contextualizing videos or images from the text transcription. This contextualization should help in composing the final video summary.
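As a toy illustration of the extractive, text-based summarization step described above, here is a minimal sentence-scoring sketch. The frequency heuristic and the example transcript are illustrative assumptions, not the method to be developed in the internship:

```python
from collections import Counter

def extractive_summary(sentences, k=2):
    """Score each sentence by the average document frequency of its
    words and keep the top-k sentences, returned in original order."""
    docs = [set(s.lower().split()) for s in sentences]
    df = Counter(w for d in docs for w in d)          # document frequency
    scores = [sum(df[w] for w in d) / max(len(d), 1) for d in docs]
    # Pick the k best-scoring sentences, then restore transcript order
    top = sorted(sorted(range(len(sentences)), key=lambda i: -scores[i])[:k])
    return [sentences[i] for i in top]

# Toy transcript: the off-topic sentence is dropped from the summary
transcript = ["the meeting starts now",
              "the budget is the main topic of the meeting",
              "someone coughs loudly",
              "the budget will be cut"]
summary = extractive_summary(transcript, k=2)
```

In the video setting, the selected sentences would then be aligned back to their time codes to pick the corresponding video segments.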
 
This topic may lead to a PhD thesis.
 
Candidate profile: Master 2 student in computer science. The candidate should have a good level in programming (C/C++ and/or Python). Notions of Natural Language Processing or machine learning would be a plus.
 
Location: LIA, 339, chemin des Meinajariès, 84911 Avignon
 
Duration and pay: 6 months, approximately €580 per month.
 
Contact: Mickaël Rouvier – Associate Professor – mickael.rouvier@univ-avignon.fr; Richard Dufour – Associate Professor – richard.dufour@univ-avignon.fr
 
References:

[Li11] Li, Y., Merialdo, B., Rouvier, M., & Linares, G. (2011). Static and dynamic video summaries. In Proceedings of the 19th ACM international conference on Multimedia (pp. 1573-1576). ACM.

[Trione14] Trione, J. (2014). Extraction methods for automatic summarization of spoken conversations from call centers. In Proceedings of TALN 2014 (Volume 4: RECITAL-Student Research Workshop) (Vol. 4, pp. 104-111).

[Favre15] Favre, B., Stepanov, E. A., Trione, J., Béchet, F., & Riccardi, G. (2015). Call Centre Conversation Summarization: A Pilot Task at Multiling 2015. In SIGDIAL Conference (pp. 232-236).


6-50(2017-12-13) Internship and PhD position at Telecom-ParisTech and LTCI lab, Paris, France

 

Internship and PhD position in machine learning for multimodal engagement analysis

in human-robot interactions (HRI)

 

 

                                                                                    

Telecom ParisTech [1],  LTCI lab [2]


Duration:  6-month internship to be continued as 3-year PhD contract
Start: Any date from February 1st, 2018

Salary: according to background and experience

 

           

*Position description*

 

The internship/PhD project is part of a collaboration between Softbank Robotics and Télécom ParisTech on the topic of engagement analysis in interactions between humans and Softbank's robots.

The intern/PhD student will develop robust machine learning systems able to effectively take advantage of the multimodal signals acquired by the robot's sensors during its interaction with a human. The work will include:

- the design of appropriate elicitation protocols and multimodal data acquisition procedures;

- the development of multimodal feature learning and dynamic classification procedures capable of handling noisy observations with missing values, especially exploiting deep learning techniques;

- the evaluation of the system in realistic scenarios involving end users.
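The "missing values" issue mentioned above can be sketched as follows: the robot receives per-modality engagement scores (face, voice, posture), some of which may be absent when a sensor drops out, and a masked fusion combines only the modalities actually available. Names, modalities and weights are illustrative placeholders, not project code.

```python
def fuse_engagement(scores, weights=None):
    """Combine per-modality engagement scores, ignoring missing (None) modalities."""
    available = {m: s for m, s in scores.items() if s is not None}
    if not available:
        raise ValueError("no modality available")
    if weights is None:
        weights = {m: 1.0 for m in available}
    total = sum(weights[m] for m in available)
    # Weighted average over the available modalities only.
    return sum(weights[m] * s for m, s in available.items()) / total

obs = {"face": 0.8, "voice": None, "posture": 0.4}  # voice sensor dropped out
print(fuse_engagement(obs))  # averages the two available modalities
```

Deep learning approaches handle the same problem with learned masking rather than a fixed weighted average, but the interface is the same: degrade gracefully instead of failing when a modality is missing.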

The PhD project will be hosted at Telecom ParisTech, in the Images, Data and Signals department [3], jointly supervised by the Social Computing [4] and Audio Data Analysis and Signal Processing [5] teams.


* Candidate profile*

 

As a minimum requirement, the successful candidate will have:

 

•    A Master's degree (possibly to be granted in 2018) in one of the following areas: computer science, artificial intelligence, machine learning, signal processing, affective computing, applied mathematics

•    Excellent programming skills (preferably in Python)

•    Good command of English

 

The ideal candidate will also (optionally) have:

•    Knowledge of deep learning techniques

 

-- More about the position

•    Place of work: Paris, France

•    For more information about Télécom ParisTech, see [1]

 

-- How to apply

Applications are to be sent to Chloé Clavel [6], Giovanna Varni [7] and Slim Essid [8] by email (using <firstname.lastname>@telecom-paristech.fr)

 

The application should be formatted as a single pdf file and should include:

•    A complete and detailed curriculum vitae

•    A letter of motivation

•    Academic records of the last two years

•    The names and addresses of two referees

 

[1] http://www.tsi.telecom-paristech.fr

[2] https://www.ltci.telecom-paristech.fr/?lang=en

[3] http://www.tsi.telecom-paristech.fr/en/

[4]https://www.tsi.telecom-paristech.fr/recherche/themes-de-recherche/analyse-automatique-des-donnees-sociales-social-computing/

[5] http://www.tsi.telecom-paristech.fr/aao/en/

[6] https://clavel.wp.mines-telecom.fr/

[7] http://sites.google.com/site/gvarnisite/

[8] http://www.telecom-paristech.fr/~essid


6-51(2017-12-16) Position at INA, Bry/Marne, France

The Institut national de l'audiovisuel (INA), a French public audiovisual and digital company, collects, preserves and passes on the French audiovisual heritage. With a usage-oriented approach to innovation, INA promotes its content and shares it with the widest possible audience: on ina.fr for the general public, on inamediapro.com for professionals, and at the InaTHÈQUE for researchers. The institute develops offers and services to get closer to its users and clients, in France and internationally.

 

Its Research and Innovation department fosters a strong and ambitious culture of innovation. Our Ina-Signature technology (a fingerprinting technology), born out of INA's R&D, has won over renowned clients thanks to a strategy focused on performance and quality. Our offering keeps evolving with the spread of SaaS (software as a service) and the cloud.

 

In this role, reporting to the Head of the Research service, you will be responsible for the design, implementation, integration or adaptation of machine learning, data analysis and data fusion technologies within research projects, experimenting with new ways of valorizing content.

 

You will be in charge of:

 

1 – Conducting scientific and technological research

-        Defining the research and development directions for this topic;

-        Designing, implementing, testing and evaluating innovative technological tools for the Institute's existing or anticipated uses;

-        Collaborating with all internal and external actors of the department;

-        Contributing to the service's research and development strategy;

-        Supervising interns and, in time, PhD students;

-        Writing or contributing to scientific articles and presenting them at conferences;

-        Demonstrating research work at conferences, seminars and trade shows;

-        Contributing to activity-related documents (activity reports and project deliverables in particular).

 

2 – Carrying out R&D serving the Institute

-        Proposing, preparing, coordinating and participating in internal research and development projects in connection with the operational services;

-        Proposing, leading and participating in internal consultation initiatives and working groups.

 

3 – Building partnerships

-        Proposing, preparing, coordinating and participating in collaborative national or international R&D projects with academic, institutional or industrial partners;

-        Proposing, coordinating and participating in scientific and technological cooperation bodies (COMUE, competitiveness clusters, research groups).

 

4 – Contributing to functional management

-        Participating in the coordination of the service (coordination meetings);

-        Participating in the management of the service's computing and technical resources;

-        Participating in the life of the service (service meetings, activity follow-up, reports).

 

Profile:

You hold a PhD in machine learning and/or data analysis, or have a professional background recognized as equivalent.

Complemented by skills in:

-        Mastery of and experience in the following areas: machine learning (deep learning), data analysis and fusion, image and/or audio analysis, software development;

-        Solid practice of academic and/or industrial research;

-        Experience with scientific publications;

-        Good knowledge and practice of collaborative projects;

-        Knowledge of the French audiovisual landscape;

-        Knowledge of the academic world;

-        Proficiency with office software;

-        Interest in the audiovisual and media world;

-        Interest in the humanities and social sciences and in the digital humanities.

 

Analytical and synthesis skills, creativity and imagination, proactivity, good interpersonal skills and team spirit will be your best assets to succeed in this position.

 

Position details:

-        Contract: permanent (CDI)

-        Status: executive (cadre)

-        Start date: as soon as possible

-        Salary: according to experience

-        Application deadline: January 31, 2018

-        Contact: jcarrive@ina.fr

-        Location: Bry-sur-Marne (94)

 

 

 

 

Jean Carrive

Deputy Head of Department

Digital Research and Innovation

Direction déléguée à la Diffusion et à l'Innovation

Direct line: +33 1 49 83 34 29 - jcarrive@ina.fr

 

 

institut.ina.fr


6-52(2017-12-16) Post-doc position at Uniklinik RWTH Aachen (Germany)

Uniklinik RWTH Aachen (Germany) is looking for a postdoctoral researcher in
the field of articulatory modelling of the vocal tract for the analysis of dysarthria
from MRI images.

The position is for 14 months on the German pay scale TV-L 13 (typically around €2300 net
after all tax deductions) and is expected to start around March 2018.

All details can be found here:
http://antoine.serrurier.free.fr/index_documents/2017_12_PostDoc_DysArtMod_EN.pdf

 


6-53 PhD position in Conversational systems and Social robotics, KTH, Stockholm, Sweden
PhD position in Conversational systems and Social robotics, KTH, Sweden

KTH Royal Institute of Technology in Stockholm has grown to become one
of Europe's leading technical and engineering universities, as well as
a key centre of intellectual talent and innovation. We are Sweden's
largest technical research and learning institution and home to
students, researchers and faculty from around the world.

We are looking for a doctoral student that will work on situated
spoken interaction between humans and robots, under the supervision of
Assoc. Prof. Gabriel Skantze, at the Department of Speech Music and
Hearing. A central research question will be how social robots should
adapt their conversational behavior to the users' level of attention,
understanding and engagement. This means that the robot must be able
to monitor gaze and feedback behaviour from the user, and then for
example adjust the pace of information delivery, in real time. The
work will involve implementation of components for conversational
systems, collecting data and doing experiments with users interacting
with the system, and using this data to build models of the users'
behaviours.
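The adaptation loop described above can be sketched very simply: a running estimate of user attention is mapped to a delivery strategy. The thresholds and strategy names below are invented placeholders, not part of the advertised project.

```python
def choose_pace(attention, slow_below=0.5, pause_below=0.2):
    """Map an attention estimate in [0, 1] to a delivery strategy."""
    if attention < pause_below:
        return "pause_and_reengage"   # e.g. ask a question to regain attention
    if attention < slow_below:
        return "slow_down"            # shorter chunks, more confirmations
    return "normal"

print(choose_pace(0.9))   # attentive user: deliver at normal pace
print(choose_pace(0.35))  # waning attention: slow down
print(choose_pace(0.1))   # disengaged: pause and try to re-engage
```

In the actual research the attention estimate would itself be inferred in real time from gaze and feedback behaviour, which is precisely the hard part the PhD addresses.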

Applicants should have a Master's degree (or similar) in a subject
relevant for the research, such as computer science, language
technology, or cognitive science. Applicants are expected to have good
skills in programming, and knowledge in either experimental methods
and statistics, or machine learning. Applicants must be strongly
motivated for doctoral studies, possess the ability to work
independently and perform critical analysis, and possess good levels
of cooperative and communicative abilities. Good command of English,
in writing and speaking, is a prerequisite for presenting research
results in international periodicals and at conferences. We also
expect applicants to have a deep interest in spoken language
interaction between humans and between humans and machines.

The position is mainly a research position for 4-5 years, with a small
fraction of departmental duties (e.g. teaching). The starting date is
open for discussion, though ideally we would like the successful
candidate to start as soon as possible.

For more information, see:
https://www.kth.se/en/om/work-at-kth/lediga-jobb/what:job/jobID:178626/where:4/

6-54(2017-12-13) 2 funded PhD positions in interactive virtual characters and social robots at KTH, Stockholm, Sweden
** 2 funded PhD positions in interactive virtual characters and social robots at KTH, Sweden**
 
Embodied Social Agents Lab
KTH Royal Institute of Technology
Stockholm, Sweden
Deadline: 15th January 2018
 
 
ABOUT KTH
 
KTH Royal Institute of Technology in Stockholm has grown to become one of Europe's leading technical and engineering universities, as well as a key center of intellectual talent and innovation. We are Sweden's largest technical research and learning institution and home to students, researchers and faculty from around the world. Our research and education covers a wide area including the natural sciences and all branches of engineering, as well as architecture, industrial management, urban planning, history and philosophy.
 
The Embodied Social Agents Lab (http://www.csc.kth.se/~chpeters/ESAL/) led by Dr. Christopher Peters aims to develop virtual characters and other systems capable of interacting socially with humans for real-world application to areas such as education. The lab is already involved in a number of local and international initiatives involving virtual characters, social robots and education. It is based out of the Visualization Studio (VIC) at KTH, a research, teaching and dissemination resource with some of the most advanced interactive visualization technologies in the world, supporting platforms for interacting with sophisticated virtual characters.
 
 
JOB DESCRIPTION
 
Two PhD positions are available in the area of interactive virtual characters and social robots for application to education. Research in this area brings together multidisciplinary expertise to address new challenges and opportunities in the area of virtual characters, based on real-time computer graphics and animation techniques, to investigate multimodal and natural interaction for both individuals and groups, multimodal generation of expressions, individualization of behaviour and effects of embodiment (appearance, virtual versus physical objects). Applications include the design of interactive virtual and physical systems for educational purposes.
 
The topics to be pursued respectively in the PhDs are:
 
1.       Compliant Small Group Behaviour (ref: ESR5)
Develop socially compliant behaviours allowing agents to join and leave free-standing formations based on their varying roles as teachers, teaching assistants and learners in pedagogical scenarios. Investigate the impact of variations in the artificial behaviour of agents on the efficacy of pedagogical approaches and potential for application to mobile robots through virtual replicas.
 
2.       Impact of Appearance Customisation on Interaction (ref: ESR15)
Investigate technological approaches for customising the appearances and behaviours of avatars (user controlled virtual characters and robot replicas) in relation to their users and assess the impact on interactions during learning scenarios.
 
Both of the PhDs involve crossovers between virtual and augmented reality, virtual characters and mobile social robots and take place within the Horizon 2020 Marie Sklodowska Curie European Training Network ANIMATAS.
ANIMATAS will establish a leading European Training Network (ETN) devoted to the development of a new generation of creative and critical research leaders and innovators who have a skill-set tailored for the creation of social capabilities necessary for realising step changes in the development of intuitive human-machine interaction (HMI) in educational settings. 15 early-stage researcher (ESR) positions are available within ANIMATAS.
The successful candidates will participate in the network's training activities offered by the European academic and industrial participating teams. PhD students will have the opportunity to work with the partners of the ANIMATAS project, such as Uppsala University, Jacobs University Bremen, Institut Mines-Télécom, University of Wisconsin-Madison, Pierre et Marie Curie University and Softbank Robotics, with possible opportunities for secondments at these institutions according to the ESR.
 
 
QUALIFICATIONS
 
The candidates must have an MSc degree in computer science or related areas relevant to the PhD topics. Good programming skills are required. A background in computer graphics and animation techniques or similar areas is appreciated. The PhD positions are highly interdisciplinary and require an understanding and/or interest in psychology and social sciences. The applicant should have excellent communication skills and be motivated to work in an interdisciplinary environment involving multiple stakeholders across academia, industry and education. An excellent level of written and spoken English is essential.
 
Read more about eligibility requirements at this link: http://animatas.isir.upmc.fr
The positions are for four years.
 
 
HOW TO APPLY
 
To apply, candidates must submit their CV, a letter of application, two letters of reference and academic credentials to the ANIMATAS recruitment committee: Mohamed Chetouani (network coordinator), Ana Paiva and Arvid Kappas at contact-animatas@listes.upmc.fr, and to the main supervisor of the research project of interest (Christopher Peters, chpeters@kth.se). All applications should be made in English.
Please include the keyword 'ANIMATAS' somewhere in the subject line and specify which project you are applying for (ESR5 or ESR15).
 
The application deadline is 15th January 2018
 
Information about the positions can be provided by Dr. Christopher Peters, chpeters@kth.se


