5th CHiME Speech Separation and Recognition Challenge
Deadline: August 3, 2018
Workshop: Microsoft, Hyderabad, Sep 7, 2018
http://spandh.dcs.shef.ac.uk/chime_challenge/
----------------------------------------------
Dear colleague,
It gives us great pleasure to announce the official launch of the CHiME-5 Challenge.
CHiME-5 considers the problem of distant-microphone conversational speech recognition in everyday home environments. Speech material was elicited using a dinner party scenario with efforts taken to capture data that is representative of natural conversational speech. Participants may use a single microphone array or multiple distributed arrays.
FORUM
If you are considering participating, please join the CHiME-5 Google group for discussions and further announcements:
https://groups.google.com/forum/#!forum/chime5/join
MATERIALS
The Challenge website is now live and contains all the information and data that you will need for participation:
- a detailed description of the challenge scenario and recording conditions,
- real training and development data,
- full instructions for participation and submission.
Baseline software for array synchronization, speech enhancement, and state-of-the-art speech recognition will be provided on March 12.
If you have a question that isn't answered by the website and you expect other participants to have the answer or to be interested in the answer, please post it on the forum. Otherwise, please email us: chimechallenge@gmail.com.
We look forward to your participation.
IMPORTANT DATES
5th March, 2018 - Training and development set data released
12th March, 2018 - Baseline recognition system released
10th June, 2018 - Workshop registration opens
June/July, 2018 - Test data released
3rd Aug, 2018 - Extended abstract and challenge submission deadline
20th Aug, 2018 - Author notification
31st Aug, 2018 - Workshop registration deadline
7th Sept, 2018 - CHiME-5 Workshop (satellite of Interspeech 2018) and release of results
8th Oct, 2018 - Final paper (2 to 6 pages)
ORGANISERS
Jon Barker, University of Sheffield
Shinji Watanabe, Johns Hopkins University
Emmanuel Vincent, Inria
SPONSORS
Google
Microsoft Research
SUPPORTED BY
International Speech Communication Association (ISCA)
ISCA Robust Speech Processing SIG