ISCA - International Speech Communication Association


ISCApad #181

Wednesday, July 10, 2013 by Chris Wellekens

6-23 (2013-05-01) Two positions at CSTR, University of Edinburgh, Scotland, UK
  
1.

Marie Curie Research Fellow in Speech Synthesis and Speech Perception

'Using statistical parametric speech synthesis to investigate speech perception'

The Centre for Speech Technology Research (CSTR) 
University of Edinburgh

This is a rare opportunity to hold a prestigious individual fellowship in a world-leading research group at a top-ranked University, mentored by leading researchers in the field of speech technology. Marie Curie Experienced Research Fellowships are aimed at the most talented newly-qualified postdoctoral researchers, who have the potential to become leaders in their fields. This competitively salaried fellowship offers an outstanding young scientist the opportunity to kick-start his or her independent research career in speech technology, speech science or laboratory phonetics. 

This fellowship is part of the INSPIRE Network (http://www.inspire-itn.eu) and the project that the CSTR Fellow will spearhead involves developing statistical parametric speech synthesis into a toolbox that can be used to investigate issues in speech perception and understanding. There are excellent opportunities for collaborative working and joint publication with other members of the network, and generous funding for travel to visit partner sites, and to attend conferences and workshops.

The successful candidate should have a PhD (or be near completion) in computer science, engineering, linguistics, mathematics, or a related discipline. He or she should have strong programming skills and experience with statistical parametric speech synthesis, as well as an appropriate level of ability and experience in machine learning. The fellowship is fixed term for 12 months (to start as soon as possible). CSTR is a successful and well-funded group, so there are excellent prospects for further employment after the completion of the fellowship.

The Marie Curie programme places no restrictions on nationality: applicants can be of any nationality and currently resident in any country worldwide, provided they meet the eligibility requirements set out in the full job description (available online - URL below). 

Salary:  GBP 42,054 to GBP 46,731 plus mobility allowance 

Informal enquiries about this position should be made to Prof Simon King (Simon.King@ed.ac.uk) or Dr Cassie Mayo (catherin@inf.ed.ac.uk). 


Apply online:

https://www.vacancies.ed.ac.uk/pls/corehrrecruit/erq_jobspec_version_4.jobspec?p_id=013062

Closing date: 10 Jun 2013



2.


Open Position: Postdoctoral Research Associate in Speech Synthesis

The Centre for Speech Technology Research (CSTR) 
University of Edinburgh

The post holder will contribute to our ongoing research in statistical parametric ('HMM-based') speech synthesis, working closely with Principal Investigators Dr. Junichi Yamagishi and Prof. Simon King, in addition to other CSTR researchers. The focus of this position will be to conduct research into methods for generating highly intelligible synthetic speech, for a variety of applications, in the context of three ongoing and intersecting projects at CSTR:

The 'SALB' project concerns the generation of extremely fast, but highly intelligible, synthetic speech for blind children. This is a joint project with the Telecommunications Research Centre Vienna (FTW) in Austria, and is funded by the Austrian Federal Ministry of Science and Research. 

The 'Voice Bank' project concerns the building of synthetic speech using a very large set of recordings of amateur speakers ('voice donors') in order to produce personalised voices for people whose speech is disordered due to Motor Neurone Disease. This is a joint project with the Euan MacDonald Centre for MND research, and is funded by the Medical Research Council. The main tasks will be to conduct research into automatic intelligibility assessment of disordered speech and to devise automatic methods for data selection from the large voice bank.

The 'Simple4All' project is a large multi-site EU FP7 project led by CSTR which is developing methods for unsupervised and semi-supervised learning for speech synthesis, in order to create complete text-to-speech systems for any language or domain without relying on expensive linguistic resources, such as labelled data. The main tasks here will be to further the overall goals of the project, including contributing original research ideas. There is considerable flexibility in the research directions available within the Simple4All project and the potential for the post holder to form a network of international collaborators. 

The successful candidate should have a PhD (or be near completion) in computer science, engineering, linguistics, mathematics, or a related discipline. He or she should have strong programming skills and experience with statistical parametric speech synthesis. 

Whilst the advertised position is for 24 months (due to the particular projects that the post-holder will contribute to), CSTR is a stable, well-funded and successful group with a tradition of maintaining long-term support for ongoing lines of research and of building the careers of its research fellows. We expect to obtain further grant-funded research projects in the future. 

Informal enquiries about this position should be made to either Dr. Junichi Yamagishi (jyamagis@inf.ed.ac.uk) or Prof. Simon King (Simon.King@ed.ac.uk).

Apply Online: 
https://www.vacancies.ed.ac.uk/pls/corehrrecruit/erq_jobspec_version_4.jobspec?p_id=013063

Closing date: 10 Jun 2013
