ISCApad #236
Saturday, February 10, 2018, by Chris Wellekens
4-1 | New Masters in Machine Learning, Speech and Language Processing at Cambridge University, UK
This is a new twelve-month, full-time MPhil programme offered by the Computational and Biological Learning Group (CBL) and the Speech Group in the Cambridge University Department of Engineering, with a unique joint emphasis on both machine learning and speech and language technology. The course aims to teach the state of the art in machine learning and in speech and language processing; to give students the skills and expertise necessary to take leading roles in industry; and to equip students with the research skills necessary for doctoral study.
Applications from UK and EU students should be completed by 9 January 2015 for admission in October 2015. A limited number of studentships may be available for exceptional UK and eligible EU applicants.
Self-funding students who do not wish to be considered for support from the Cambridge Trusts have until 30 June 2015 to submit their complete applications.
More information about the course can be found here: http://www.mlsalt.eng.cam.ac.uk/
4-2 | Master's in Computer Science, Machine Learning and Natural Language Processing track (ATAL), Universités du Maine et de Nantes, France
The Universités du Maine and de Nantes offer a joint Master's in Computer Science track in Machine Learning and Natural Language Processing (Apprentissage et Traitement Automatique de la Langue, ATAL).
The ATAL track trains students from computer science backgrounds in the machine learning and natural language processing techniques that are at the heart of language engineering applications such as machine translation, opinion mining, information retrieval, and speech and speaker recognition. The aim is to train highly specialised students capable of building applications that handle large volumes of complex, heterogeneous data. Graduates qualify for positions such as data scientist, linguistic resources project manager, or information technologies and services executive. The programme draws on researchers from the LS2N (Laboratoire des Sciences du Numérique de Nantes) and LIUM (Laboratoire d'Informatique de l'Université du Maine) laboratories, and on industrial partners whose applications require expertise in processing language data. The programme is also firmly anchored in its regional ecosystem: students are invited to take part in meetups and are introduced to the world of entrepreneurship. Admission is possible at the M1 or the M2 level, depending on the candidate's background.
- The M1 can be completed either in Le Mans or in Nantes, at the student's preference.
- All M2 courses are shared between the Universités du Maine and de Nantes, and students may enrol either in Le Mans or in Nantes. The M2 can be completed on campus or as a work-study programme.
Information
- Nantes: http://www.master-info.univ-nantes.fr/00542841/0/fiche___pagelibre/&RH=1403710895111
- Le Mans: http://www-info.univ-lemans.fr/?page_id=10
How to apply
- Nantes: http://www.sciences-techniques.univ-nantes.fr/72621571/0/fiche___pagelibre/
- Le Mans: http://www-info.univ-lemans.fr/?page_id=211
Contacts
- Nantes: Emmanuel.Morin@univ-nantes.fr
- Le Mans: Yannick.Esteve@univ-lemans.fr
4-3 | AFCP survey: 2018 winter school on the statistical analysis of phonetic data
The AFCP (Association Francophone de la Communication Parlée) is planning a winter school in January 2018 on the statistical analysis of phonetic data. We are running a survey to gauge interest in this proposal. If you are interested, please take two minutes to fill in the following questionnaire:
4-4 | Creation of Yajie Miao Memorial Student Travel Grants
As many readers might already know, Yajie Miao, a PhD student at Carnegie Mellon's Language Technology Institute, successfully defended his thesis on 'Incorporating Context Information into Deep Neural Network Acoustic Models' in August 2016.
He had accepted a position at Microsoft in Redmond, and was set to start work there in October 2016. It is with a heavy heart that we announce that he died tragically, while visiting his family in China, before he was able to do so.
In fond memory of Yajie and his work, his colleagues and friends at Carnegie Mellon and Microsoft, in consultation with his family, have decided to set up a Memorial Student Travel Grant, which will support additional student travel to Interspeech and other speech conferences in the coming years.
More information on Yajie and the opportunity to support these travel grants can be found at https://www.youcaring.com/iscainternationalspeechcommunicationassociation-815026.
4-5 | Nominations for the Antonio Zampolli Prize
In 2004, the ELRA Board created a prize to honour the memory of its first President, Professor Antonio Zampolli, a pioneer and visionary scientist internationally recognized in the field of Computational Linguistics and Human Language Technologies (HLT), who also contributed greatly to the establishment of ELRA and the LREC conference. To reflect Professor Zampolli's specific interest in our field, the ELRA Antonio Zampolli Prize is awarded to individuals or small groups whose work lies within the areas of Language Resources and Language Technology Evaluation and who have made acknowledged contributions to their advancement. Nominations should be sent to the ELRA President, Henk van den Heuvel, at AntonioZampolli-Prize@elra.info no later than February 1st, 2018.
4-6 | Bids for ICMI 2019 and ICMI 2020 ACM International Conference on Multimodal Interaction (ICMI) is the premier international forum for multidisciplinary research on multimodal human-human and human-computer interaction, interfaces, and system development. The conference focuses on theoretical and empirical foundations, component technologies, and combined multimodal processing techniques that define the field of multimodal interaction analysis, interface design, and system development.
The Steering Board of the ACM International Conference on Multimodal Interaction (ICMI) invites proposals to host either ICMI 2019 or ICMI 2020.
Strong proposals from other regions are also welcome. ICMI 2016 was in Tokyo, Japan, ICMI 2017 was in Glasgow, UK, and ICMI 2018 will be in Boulder, Colorado, USA. Please see the attached document for more details about the ICMI bid process.
Important Dates
For ICMI 2019 bids:
For ICMI 2020 bids:
ICMI 2016, 2017 and 2018 websites
All communications, including request for information and bid submission, should be sent to the ICMI Steering Board Chair (Louis-Philippe Morency, morency@cs.cmu.edu).
4-7 | Air Traffic Control Speech Recognition Challenge
AIRBUS, in collaboration with IRIT and SAFETY DATA-CFH, is launching a challenge on automatic speech recognition of English Air Traffic Control (ATC) communications to:
Why is it challenging? ATC audio is very noisy and consists largely of non-native speech, with code-switching, high speech rates, and a great deal of domain-specific vocabulary -- see https://www.youtube.com/watch?v=sxEfwSNBgNU&t=80s for an example.
What are the tasks?
What is the dataset? 40 hours of transcribed real-life non-native English ATC data for training, plus 5 hours for validation/leaderboard and 5 hours for final evaluation (you may use your own additional data). A sketch of how such systems are commonly scored appears at the end of this item.
What is there to win? The winner will be invited to speak at the challenge workshop and to visit Airbus facilities and simulators. The five best challengers will be shortlisted for a second, internal selection aiming to choose an Airbus R&T partner for developing future on-board speech-to-text solutions.
Who can participate? Anyone: start-ups, medium-sized and large companies, research labs. You can participate in the challenge even if you are not considering a partnership with AIRBUS or plan to complete only one of the tasks.
Important dates
- Declaration of intent: now; write to atc-challenge@airbus.com
- Registration opens: Mon 5 February, via the AIRBUS challenge platform https://airbusaigym.nova.airbusdefenceandspace.com/
- Training data available + leaderboard: Mon 5 March - Sun 29 April (8 weeks)
- Release of evaluation data: Wed 2 May
- Results submission: Wed 9 May
- Final workshop: between Mon 18 and Fri 22 June (date to be confirmed)
- Special session at an international conference for publication: pending approval (submitted)
More questions? Post to the challenge forum or write to atc-challenge@airbus.com.
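Note on scoring: the announcement does not name an official metric, but ASR challenges of this kind are conventionally ranked by word error rate (WER). The short Python sketch below is therefore only an illustrative assumption of how leaderboard scoring might work, not the challenge's official scorer.

    # Minimal WER sketch (illustrative assumption; not the official challenge scorer).
    # WER = (substitutions + deletions + insertions) / number of reference words,
    # computed here with a word-level Levenshtein distance.
    def wer(reference: str, hypothesis: str) -> float:
        ref, hyp = reference.split(), hypothesis.split()
        # d[i][j] = edit distance between ref[:i] and hyp[:j]
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(d[i - 1][j - 1] + cost,  # substitution or match
                              d[i - 1][j] + 1,         # deletion
                              d[i][j - 1] + 1)         # insertion
        return d[len(ref)][len(hyp)] / max(len(ref), 1)

    # Hypothetical ATC-style example: 2 edits over a 6-word reference -> WER = 2/6 ~ 0.33
    print(wer('cleared for takeoff runway two seven',
              'cleared takeoff runway to seven'))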
4-8 | Bids for ICASSP 2023
PREPARE YOUR PROPOSAL FOR ICASSP 2023
Deadline extended to 27 April 2018
The IEEE Signal Processing Society is accepting proposals in all regions for the 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP).
If you are interested in submitting a proposal, please review the proposal guidelines. The next step is to send a notice of intent, containing the proposed dates and location along with your contact information, to the VP-Conferences and SPS Staff at sps-conf-proposals@ieee.org.
4-9 | Call for Proposals - ASRU 2019
Jason Williams, Raul Fernandez, Tim Fingscheidt, Kai Yu
Following the success of the biennial ASRU workshop over the past few decades, the IEEE Speech and Language Technical Committee invites proposals to host the 2019 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU 2019). Past ASRU workshops have fostered a collegial atmosphere through a thoughtful selection of venues, offering a unique opportunity for researchers to interact and learn. The proposal should include the information outlined below.
The deadline for proposals is Friday, June 1, 2018. Send proposals and questions to the workshop sub-committee: Jason Williams (jason.williams@microsoft.com), Raul Fernandez (fernanra@us.ibm.com), Tim Fingscheidt (t.fingscheidt@tu-bs.de), and Kai Yu (kai.yu@sjtu.edu.cn). In June, the IEEE SLTC will review proposals, and selection results are expected by July 1, 2018.
If you are interested in submitting a proposal, we encourage you to contact the workshop sub-committee in advance; they can provide an example of a past successful proposal and an example budget. Further, proposers who make contact before Friday 6 April 2018 may be invited to briefly present in person at the annual IEEE SLTC meeting at ICASSP 2018, 15-20 April 2018, in Calgary, Alberta, Canada, to obtain feedback from the SLTC (https://2018.ieeeicassp.org/). Presentations should be reasonably specific but need not be complete. Note that the SLTC does not have funding available for travel to ICASSP.
The organizers of the ASRU workshop do not have to be SLTC members, and we encourage submissions from all potential organizers. IEEE SLTC members are welcome to participate in proposals, and the organizing committees of past ASRU events have included many SLTC members. To maintain fairness of selection, SLTC members who are affiliated with an ASRU 2019 proposal will not participate in the ASRU 2019 selection vote, and the members of the workshop sub-committee may not be affiliated with any ASRU 2019 proposal.
Please feel free to distribute this call for proposals widely, and invite members of the speech and language community at large to submit a proposal to organize the next ASRU workshop. For more information on the most recent workshops, please see:
- https://asru2017.org (ASRU 2017, Okinawa, Japan)
- http://asru2015.org (ASRU 2015, Scottsdale, Arizona)
- http://asru2013.org (ASRU 2013, Olomouc, Czech Republic)
Questions may be directed to the workshop sub-committee: Jason Williams (jason.williams@microsoft.com), Raul Fernandez (fernanra@us.ibm.com), Tim Fingscheidt (t.fingscheidt@tu-bs.de), Kai Yu (kai.yu@sjtu.edu.cn).
4-10 | ELRA and LDC partner on a joint distribution of Language Resources
Press Release - Immediate