ISCA - International Speech
Communication Association



ISCApad #288

Friday, June 10, 2022 by Chris Wellekens

6-3 (2022-02-01) Two post-docs at ADAPT, Dublin, Ireland
  
We are looking to recruit two Post-Doctoral Researchers to join the UCD team in the ADAPT Research Centre (www.adaptcentre.ie).  These projects are part of the ADAPT Digital Content Transformation Strand.
 
1) Harnessing Speech for Social Inclusion
Rather than focusing on accuracy and error rates, evaluation of speech recognition systems should be contextualized with respect to how well they perform in situations where the interlocutor is not the ‘typical’ native speaker (e.g. a senior citizen, a citizen with disabilities or a non-native speaker). Through publicly engaged research, data will be collected on how well existing speech technology is serving our citizens. A systematic evaluation of the output data produced by existing ASR systems, and of the interactions that arise when an ASR ‘converses’ with a range of users, will enable a categorization of the interaction issues and error patterns that need to be accounted for when developing applications which provide interfaces to essential services. This project will involve the development of a system which facilitates diagnostic evaluation of ASRs in a variety of interaction scenarios, providing linguistic cues for the augmentation of the ASR and building on the low-resource MT technologies being developed in other ADAPT projects.
 
2) Embedding of Multi-level Speech Representations
While deep learning has led to huge performance gains in speech recognition and synthesis, only recently has more focus been placed on what deep learning may be able to uncover about the patterns which humans use intuitively when interacting via speech and which distinguish native from non-native speakers. Such patterns are typically the focus of speech perception and experimental phonetic studies. This project aims to build on the notion of multi-linear or multi-tiered representations of speech, creating embeddings of multiple (sub-word) levels of representation – phonetic features, phonemes, syllable pieces and syllables – enabling a closer investigation of the systematicity and variability of speech patterns. This research will find application in non-native speech recognition, in speech adaptation/accommodation for native and non-native interactions, and in pronunciation training scenarios.
 
ADAPT is the world-leading SFI research centre for AI-Driven Digital Content Technology, coordinated by Trinity College Dublin and based within Dublin City University, University College Dublin, Technological University Dublin, Maynooth University, Munster Technological University, Athlone Institute of Technology, and the National University of Ireland Galway. ADAPT’s research vision is to pioneer new forms of proactive, scalable, and integrated AI-driven Digital Content Technology that empower individuals and society to engage in digital experiences with control, inclusion, and accountability, with the long-term goal of a balanced digital society by 2030. ADAPT is pioneering new Human Centric AI techniques and technologies including personalisation, natural language processing, data analytics, intelligent machine translation, and human-computer interaction, as well as setting the standards for data governance, privacy and ethics for digital content.

ADAPT Digital Content Transformation Strand
From the algorithmic perspective, new machine learning techniques will enable more users to engage meaningfully with the increasing volumes of global content in a measurably more effective manner, while ensuring the widest linguistic and cultural inclusion. The strand will enhance the effective, robust, integrated machine learning algorithms needed to provide multimodal content experiences with new levels of accuracy, multilingualism and explainability.
 
Full details of the positions, requirements and link to submit applications can be found at:
 
 
The closing date is 17:00hrs (local Irish time) on Monday 14th February 2022, and candidates must apply via https://www.ucd.ie/workatucd/jobs/
 
The references are:
 
1) 014028 for Harnessing Speech for Social Inclusion
2) 014079 for Embedding of Multi-level Speech Representations
 
