
ISCApad #223

Saturday, January 14, 2017 by Chris Wellekens

4 Academic and Industry Notes
4-1 Carnegie Speech

 

Carnegie Speech produces systems that teach people how to speak another language understandably. Some of its products include NativeAccent, SpeakIraqi, SpeakRussian, and ClimbLevel4. You can find out more at www.carnegiespeech.com. You can also read about Forbes.com naming it one of the Best Breakout Ideas of 2009 at:

http://www.forbes.com/2009/12/21/best-breakout-ideas-2009-entrepreneurs-technology-breakout_slide_11.html


4-2 Research in Interactive Virtual Experiences at USC, CA, USA

REU Site: Research in Interactive Virtual Experiences

--------------------------------------------------------------------

 

The Institute for Creative Technologies (ICT) offers a 10-week summer research program for undergraduates in interactive virtual experiences. A multidisciplinary research institute affiliated with the University of Southern California, the ICT was established in 1999 to combine leading academic researchers in computing with the creative talents of Hollywood and the video game industry. Having grown to encompass a total of 170 faculty, staff, and students in a diverse array of fields, the ICT represents a unique interdisciplinary community brought together with a core unifying mission: advancing the state-of-the-art for creating virtual reality experiences so compelling that people will react as if they were real.

 

Reflecting the interdisciplinary nature of ICT research, we welcome applications from students in computer science as well as many other fields, such as psychology, art/animation, interactive media, linguistics, and communications. Undergraduates will join a team of students, research staff, and faculty in one of several labs focusing on different aspects of interactive virtual experiences. In addition to participating in seminars and social events, students will also prepare a final written report and present their projects to the rest of the institute at the end-of-summer research fair.

 

Students will receive $5000 over ten weeks, plus an additional $2800 stipend for housing and living expenses.  Non-local students can also be reimbursed for travel up to $600.  The ICT is located in West Los Angeles, just north of LAX and only 10 minutes from the beach.

 

This Research Experiences for Undergraduates (REU) site is supported by a grant from the National Science Foundation. The site is expected to begin summer 2013, pending final award issuance.

 

Students can apply online at: http://ict.usc.edu/reu/

Application deadline: March 31, 2013

 

For more information, please contact Evan Suma at reu@ict.usc.edu.


4-3 Announcing the Master of Science in Intelligent Information Systems

Carnegie Mellon University

 

The Master of Science in Intelligent Information Systems (MIIS) is a degree designed for students who want to rapidly master advanced content analysis, mining, and intelligent information technologies before beginning or resuming leadership careers in industry and government. Just over half of the curriculum consists of graduate courses. The remainder provides direct, hands-on, project-oriented experience working closely with CMU faculty to build systems and solve problems using state-of-the-art algorithms, techniques, tools, and datasets. A typical MIIS student completes the program in one year (12 months) of full-time study at the Pittsburgh campus. Part-time and distance education options are available to students employed at affiliated companies. The application deadline for the Fall 2013 term is December 14, 2012. For more information about the program, please visit http://www.lti.cs.cmu.edu/education/msiis/overview.shtml


4-4 Master's in Linguistics (Aix-Marseille), France

Master's in Linguistics (Aix-Marseille Université): Linguistic Theories, Field Linguistics and Experimentation (TheLiTEx) offers advanced training in linguistics. The specialty presents, in an original way, the links between corpus linguistics and scientific experimentation on the one hand, and between laboratory and field methodologies on the other. Building on a common set of courses offered in the first year, TheLiTEx offers two paths: Experimental Linguistics (LEx) and Language Contact & Typology (LCT).

The goal of LEx is the study of language, speech and discourse on the basis of scientific experimentation and quantitative modeling of linguistic phenomena and behavior. It takes a multidisciplinary approach that borrows its methodologies from the physical, biological and human sciences and its tools from computer science, clinical approaches, engineering, etc. Courses include semantics, phonetics/phonology, morphology, syntax and pragmatics, prosody and intonation, and the interfaces between these linguistic levels, in their interactions with the real world and the individual, from a biological, cognitive and social perspective. In the second year, more specialized courses are offered, such as Language and the Brain and Laboratory Phonology.

LCT aims at understanding the world's linguistic diversity, focusing on language contact, language change and variation (European, Asian and African languages, Creoles, sign language, etc.). This path addresses, from a linguistic and sociolinguistic perspective, issues of field linguistics, taking into account both the human and socio-cultural dimensions of language (speakers, communities). It also focuses on documenting rare and endangered languages and on reflecting on linguistic minorities, and it provides expertise and intervention models (language policy and planning) to train students in managing contact phenomena and their impact on speakers, languages and societies.

More info at: http://thelitex.hypotheses.org/678


4-5 New Master in Brain and Cognition at Universitat Pompeu Fabra, Barcelona


A new, one-year Master in Brain and Cognition will begin its activities in the Academic Year 2014-15 in Barcelona, Spain, organized by the Universitat Pompeu Fabra (http://www.upf.edu/mbc/).

The core of the master's programme is composed of the research groups at UPF's Center for Brain and Cognition  (http://cbc.upf.edu). These groups are directed by renowned scientists in areas such as computational neuroscience, cognitive neuroscience, psycholinguistics, vision, multisensory perception, human development and comparative cognition. Students will  be exposed to the ongoing research projects at the Center for Brain and Cognition and will be integrated in one of its main research lines, where they will conduct original research for their final project.

Application period is now open. Please visit the Master web page or contact luca.bonatti@upf.edu for further information.


4-6 Master's programs at the Sorbonne (Paris)

The language engineering master's programs at Paris-Sorbonne, ILGII (R) and IILGI (P), are now grouped into a single specialty within the Literature, Philosophy, Linguistics track.
The two years of the Langue et Informatique (Language and Computing) master's provide fundamental knowledge of language and its automatic processing, of language interaction and the modeling of paralinguistic phenomena, and of knowledge engineering. The specialty courses also develop knowledge and practical skills: text analysis and understanding; speech recognition and synthesis; affective sciences and dialogue systems; computer-assisted summarization and translation; knowledge extraction and construction; and business intelligence. The methodological courses of the track's common core connect these specialized courses with the epistemology of literature, philology and linguistics. The master's comprises two paths: a professional path, 'Ingénierie de la Langue pour la Société Numérique (ILSN)', and a research path, 'Informatique, Langue et Interactions (ILI)'. The two paths diverge in semester 4.

Contact: Claude.Montacie@paris-sorbonne.fr


4-7 New Master's in Machine Learning, Speech and Language Processing at Cambridge University, UK
New Master's in Machine Learning, Speech and Language Processing
 
This is a new twelve-month full-time MPhil programme offered by the Computational and Biological Learning Group (CBL) and the Speech Group in the Cambridge University Department of Engineering, with a unique, joint emphasis on both machine learning and on speech and language technology. The course aims: to teach the state of the art in machine learning, speech and language processing; to give students the skills and expertise necessary to take leading roles in industry; to equip students with the research skills necessary for doctoral study.
 
Applications from UK and EU students should be completed by 9 January 2015 for admission in October 2015. A limited number of studentships may be available for exceptional UK and eligible EU applicants.

Self-funding students who do not wish to be considered for support from the Cambridge Trusts have until 30 June 2015 to submit their complete applications.

More information about the course can be found here: http://www.mlsalt.eng.cam.ac.uk/



4-8 The International Standard Language Resource Number (ISLRN)

The Resource Management Agency (RMA), an important language resource player in South Africa, adopts the International Standard Language Resource Number (ISLRN) initiative

 

The RMA is now a certified provider to the ISLRN system. This means that the RMA can apply for ISLRNs on behalf of the developers of the data that is managed and distributed via the RMA website. The RMA has already submitted 117 language resources to the ISLRN, including language resources for the 11 official languages of South Africa. These include text and speech resources such as text corpora (annotated, genre classification, parallel), translation memories, custom dictionaries for the government domain, compound semantic and splitting datasets, frequency word lists, speech corpora, and pronunciation dictionaries. The meta-information for these language resources is also available on the ISLRN website, where it reaches a broad international audience.

Background

As part of an international effort to document and archive the various language resource development efforts around the world, a system of assigning ISLRNs was established in November 2013. The ISLRN is a unique 'persistent identifier' to be assigned to each language resource. The establishment of ISLRNs was a major step in the networked and shared world of human language technologies. Unique resources must be identified as they are, and meta-catalogues require a common identification format to manage data correctly. Therefore, language resources should carry identical identification schemes independent of their representations, whatever their types and wherever their physical locations (on hard drives, internet or intranet) (http://islrn.org/).

 

About RMA: The Department of Arts and Culture of South Africa established the RMA to manage and distribute reusable text and speech resources developed by the National Centre for Human Language Technology from a centralised location. As many of the South African languages are deemed resource-scarce, the RMA aspires to make data resources for these languages more readily available.

To find out more about RMA, please visit the RMA website: http://rma.nwu.ac.za.

 

About ELRA: The European Language Resources Association (ELRA) is a non-profit-making organisation founded by the European Commission in 1995, with the mission of providing a clearing house for language resources and promoting human language technologies.

To find out more about ELRA, please visit the website: http://www.elra.info


Contact: info@elda.org


4-9 SProSIG Bids for Speech Prosody 2018

SProSIG

 

The purpose of the Speech Prosody Special Interest Group (SProSIG) is to promote interest in Speech Prosody; to provide a means of exchanging news of recent research developments and other matters of interest in Speech Prosody; to sponsor meetings and workshops in Speech Prosody that appear to be timely and worthwhile; and to provide and make available resources relevant to Speech Prosody.  SProSIG is a special interest group of ISCA, and of IPA. Our web page is http://sprosig.org.

 

Membership in SProSIG is obtained by signing up for the mailing list.  The mailing list is currently housed at https://lists.illinois.edu/lists/info/sprosig.

 

All members of SProSIG are allowed to vote on the location of the Speech Prosody conference.  Bids for Speech Prosody 2018 will be presented orally at Speech Prosody 2016, and in written form during June 2016.

 

SProSIG is administered by officers under the direction of a Permanent Advisory Committee (PAC).  Officers are nominated biennially in August, and elected in September.  Current officers are Keikichi Hirose, Mark Hasegawa-Johnson, Hansjörg Mixdorff and Yi Xu.

 

The founding officers of SProSIG specified services to members including dedicated web pages, an e-mail newsletter, a bibliographic database, workshops and special sessions, and the organization of the international conference Speech Prosody.  The web page has been little updated since 2012, and the newsletter has been dormant far longer; it is our intention to revise both.  Suggestions about content and frequency are welcome, especially if delivered in a friendly tone of voice to any current officer or PAC member at Speech Prosody 2016.

Call for Bids for the hosting of SP9: Speech Prosody 2018

 

Members of SProSIG with a history of attendance at Speech Prosody conferences are encouraged to submit bids to host SP9: Speech Prosody 2018.  Written bids must be submitted by July 15, 2016 to the SProSIG Secretary, Mark Hasegawa-Johnson, at jhasegaw@illinois.edu.  All written bids received by that date will be posted at http://sprosig.org.  The full membership of SProSIG will then be invited to read the written bids, and an on-line vote will be held to determine the location of SP9.  A written bid may contain any information that you believe is likely to sway the members of SProSIG, but must contain at least the following information:

 

City and Country in which the conference will be held: 

 

General Chair (Name, Affiliation, and a list of Speech Prosody conferences that he or she has attended):

 

Organizing Committee Members (Same information as above):

 

Proposed conference period: DD/MM/YYYY – DD/MM/YYYY

 

Expected early registration fee for ISCA members:

 

Contractor (University, Company, and/or Contractor organizing the conference; this can be changed later if necessary):

 

Venue (name of hotel, conference center, university, etc.; can be changed later if necessary):

 

Access to the venue from the closest major airport  (Is it easy for participants to reach the venue?):

 

Accommodation (a rough idea of the number of nearby hotels and their prices; if organizers plan to offer university dormitories for participants, please mention this with some information):

 

Scientific Theme of Speech Prosody 2018 (if any):

 

Other points to be emphasized (if any):

 

 


4-10 To ISCA Members interested in Code-Switching


 

We are asking for input on researchers’ interest and engagement in computational approaches to linguistic Code-Switching in any language pair, modality, or genre. A brief survey can be found at the link below. Thanks much for your help!

 

https://docs.google.com/forms/u/0/d/1ARm04N_si_7VaMPjtbWOFUUjxJZm7TQmjgNSHcUNPcw

 

Mona Diab

Julia Hirschberg

Thamar Solorio


4-11 Participating in the Singing Synthesis Challenge

Your participation in the Singing Synthesis Challenge listening test would be much appreciated.
You are kindly asked to rate the quality of songs produced by singing synthesis systems.
Please take the test at the following address:

https://enquete.limsi.fr/index.php/778377

The test will take no more than 10-15 minutes.

Feel free to disseminate!
Many thanks for your participation,


Christophe d'Alessandro


4-12 New funding opportunity at IARPA
Dear Speech Scientist:
IARPA would like to announce a new funding opportunity involving speech recognition, information retrieval, summarization, domain adaptation and machine translation of low resource languages -- the forthcoming MATERIAL Program.

A Proposers' Day for MATERIAL will occur in the DC area on Sept. 27, 2016. A formal solicitation for proposals is expected to follow the Proposers' Day. Please note that registration for this event closes on Sept. 20.

To register for this event, please visit:

https://www.fbo.gov/index?s=opportunity&mode=form&id=b9fe325434c8c668b66b7499cf435b85&tab=core&_cview=0

BRIEF PROGRAM DESCRIPTION AND GOALS


The MATERIAL performers will develop an 'English-in, English-out' information retrieval system that, given a domain-sensitive English query, will retrieve relevant speech and text data from a large multilingual repository and display the retrieved information in English in a summary format. MATERIAL queries will consist of two parts: a domain specification and an English word (or string of words) that capture the information need of an English-speaking user, e.g., 'zika virus' in the domain of GOVERNMENT vs. 'zika virus' in the domain of HEALTH, or 'asperger's syndrome' in the domain of EDUCATION vs. 'asperger's syndrome' in the domain of SCIENCE. The English summaries produced by the system should convey the relevance of the retrieved information to the domain-limited query to enable an English-speaking user to determine whether the document meets the information needs of the query.
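As a rough illustration of this 'English-in' retrieval step, here is a minimal sketch in Python, assuming scikit-learn is available and assuming the foreign-language documents have already been transcribed and machine translated into English; the example texts, domain tags, and function names are illustrative assumptions and are not part of the MATERIAL program or its baseline.

# Minimal sketch of the 'English-in' retrieval step, assuming the foreign-language
# documents have already been transcribed (ASR) and machine translated into English.
# Example texts, domain tags and names are illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Each document: English MT output plus a hypothesised domain label.
documents = [
    {"text": "the ministry announced new funding to fight the zika virus", "domain": "GOVERNMENT"},
    {"text": "zika virus transmission and symptoms in infected patients", "domain": "HEALTH"},
    {"text": "schools adapt teaching for pupils with asperger's syndrome", "domain": "EDUCATION"},
]

def retrieve(query_text, query_domain, docs, top_k=2):
    """Rank in-domain documents against the English query by TF-IDF cosine similarity."""
    in_domain = [d for d in docs if d["domain"] == query_domain]
    if not in_domain:
        return []
    vectorizer = TfidfVectorizer()
    doc_matrix = vectorizer.fit_transform(d["text"] for d in in_domain)
    query_vec = vectorizer.transform([query_text])
    scores = cosine_similarity(query_vec, doc_matrix)[0]
    ranked = sorted(zip(scores, in_domain), key=lambda pair: pair[0], reverse=True)
    return [(round(float(score), 3), doc["text"]) for score, doc in ranked[:top_k] if score > 0]

# The same query string retrieves different material depending on the domain.
print(retrieve("zika virus", "HEALTH", documents))
print(retrieve("zika virus", "GOVERNMENT", documents))

In the actual program, of course, the domain restriction, cross-language retrieval and English summarization of the retrieved documents would go far beyond this keyword-level filter and cosine ranking.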

Current methods to produce similar technologies require a substantial investment in training data and/or language specific development and expertise, entailing many months or years of development. A goal of this program is to drastically decrease the time and data needed to field systems capable of fulfilling an English-in, English-out task. Limited machine translation and automatic speech recognition training data will be provided from multiple low resource languages to enable performers to learn how to quickly adapt their methods to a wide variety of materials in various genres and domains. As the program progresses, performers will apply and adapt these methods in increasingly shortened time frames to new languages. Program data will include formal and informal genres of text and speech which will not be fully captured by the training data. Image and video are out of scope for this program.

Performers will be evaluated, relative to a baseline system, on their ability to accurately retrieve text and speech materials relevant to an English domain-specific query from a database of multi-domain, multi-genre documents in a low resource language, and their ability to convey the relevance of those documents through summaries presented to English speaking domain experts.

To develop such an end-to-end system, large multi-disciplinary teams will be required with expertise in a number of relevant technical areas including, but not limited to, natural language processing, low resource languages, machine translation, corpora analysis, domain adaptation, computational linguistics, speech recognition, language identification, semantics, summarization, information retrieval, and machine learning. Since language-independent approaches with quick ramp up time are sought, foreign language expertise in the languages of the program is not expected. IARPA anticipates that universities and companies from around the world will participate in this research program. Researchers will be encouraged to publish their findings in publicly-available, academic journals.

 

For updated information on the program, please visit:

https://www.iarpa.gov/index.php/research-programs/material


4-13 Message from ELRA: CEF Automated Translation call for proposals (CEF-TC-2016-3)

CEF Automated Translation call for proposals (CEF-TC-2016-3) will be launched on 20 September 2016 with a closing date on 15 December 2016.

The call is based on CEF work programme 2016 available here: https://ec.europa.eu/inea/sites/inea/files/wp2016_adopted_20160303.pdf


In the upcoming CEF Automated Translation call, 6.5 MEUR is available for collaborative projects on:

1) stimulating language resource provision to CEF.AT, and

2) integration of the CEF Automated Translation into (multilingual, cross-border) digital services.

 
A Virtual Info Day on the call will take place on Thursday 22 September 2016.

You can find more information about the Virtual Info Day here: https://ec.europa.eu/inea/en/news-events/events/2016-3-cef-telecom-calls-virtual-info-day
No prior registration is needed. The link to the webstreaming will be provided 48 hours prior to the event.

Questions on the call can be sent ahead and during the event to INEA-CEF-Telecoms-Infoday@ec.europa.eu. They will be answered during the Info Day.

The event will be tweeted live from @inea_eu with the hashtag #CEFTelecomDay.

Finally, a LinkedIn group has been created to help potential stakeholders find partners for their CEF Telecom consortia in any of the calls.

Best regards,

Aleksandra Wesolowska


4-14 Theory of Musical Equilibration

We would like to inform you about the 'Theory of Musical Equilibration', which describes the relationship between chords and their emotional impact. If you are interested, we could introduce the subject at the International Speech Communication Association.

 

Last year we presented the topic at the 'Croatian Days of Music Theory' in Zagreb:

http://hdgt.hr/?page_id=9 ,

 

at the 'Israel Musicological Society' in Tel Aviv: http://media.wix.com/ugd/510480_4e3aa5de255a4868b0a86de92b7fd15e.pdf

 

and at the Centre for Systematic Musicology, University of Graz: https://www.homepage.uni-graz.at/de/richard.parncutt/research/weekly-seminar/ 

 

You can download the English translation of our book 'Music and Emotions - Research on the Theory of Musical Equilibration (die Strebetendenz-Theorie)' for free:  http://www.willimekmusic.de/music-and-emotions.pdf

 

or our article 'Why do minor chords sound sad?' in the Journal of Psychology & Psychotherapy.

 

The 'Theory of Musical Equilibration' is also discussed on the discussion pages of the Society for Music Theory (SMT Discuss).

 

Thank you for your interest. We're looking forward to hearing from you.

  

Best regards

 

Daniela Willimek, University of Music Karlsruhe, and Bernd Willimek, music theorist.


4-15 Bids for ACM International Conference on Multimodal Interaction (ICMI) 2018

The ACM International Conference on Multimodal Interaction (ICMI) is the premier international forum for multidisciplinary research on multimodal human-human and human-computer interaction, interfaces, and system development. The conference focuses on theoretical and empirical foundations, component technologies, and combined multimodal processing techniques that define the field of multimodal interaction analysis, interface design, and system development.

 

The ICMI Steering Board invites proposals to host the 20th Annual Conference on Multimodal Interaction (ICMI 2018), to be held between the end of September and mid-November 2018. We seek preliminary draft proposals from bidders in the North or South American continents (ICMI 2016 is in Tokyo, Japan, and ICMI 2017 will be in Glasgow, UK), although strong proposals from other regions are also welcome. Promising bidders will be asked to provide additional information for the final selection.

 

Please see the attached document for more details about the ICMI 2018 bid process.

 

Important Dates

 

  • December 21, 2016 - Notify intention to submit proposal
  • January 27, 2017 - Draft proposals due
  • February 10, 2017 - Feedback to bidders
  • February 24, 2017 - Final bids due
  • March 17, 2017 - Bid selected

 

ICMI 2016 and 2017 websites

              https://icmi.acm.org/2016/

              https://icmi.acm.org/2017/

 

              All communications, including request for information and bid submission, should be sent to the ICMI Steering Board Chair (Louis-Philippe Morency, morency@cs.cmu.edu).

 

Best wishes,

Louis-Philippe Morency

Assistant Professor, Carnegie Mellon University

Director, Multimodal Communication and Machine Learning Laboratory

Chair, ACM ICMI Steering Board

https://www.cs.cmu.edu/~morency/


4-16 ACM ICMI Call for Bids 2018


The Steering Board of the ACM International Conference on Multimodal Interaction (ICMI) invites proposals to host the 20th Annual Conference, to be held between the end of September and mid-November 2018. We seek preliminary draft proposals from bidders in the North or South American continents (ICMI 2016 is in Tokyo, Japan, and ICMI 2017 will be in Glasgow, UK), although strong proposals from other regions are also welcome. Promising bidders will be asked to provide additional information for the final selection.

Evaluation

Proposals will be evaluated according to the following criteria (unordered):

Experience and reputation of General Chairs and Program Chairs

Local multimodal interaction community support

(Local) government and industry support

Support and opportunities for students

Accessibility and attractiveness of proposed site

Suitability of proposed dates (with list of specific conflicts to avoid)

Adequacy of conference facilities for the anticipated number of attendees

Adequacy of accommodations and food services in a range of price categories and close to the conference facilities

Overall balance of budget projections

Geographical balance with regard to previous ICMI meetings

 

All communications, including request for information and bid submission, should be sent to the ICMI Steering Board Chair (Louis-Philippe Morency, morency@cs.cmu.edu).

Important Dates

December 21, 2016 - Notify intention to submit proposal

January 27, 2017 - Draft proposals due

February 10, 2017 - Feedback to bidders

February 24, 2017 - Final bids due

March 17, 2017 - Bid selected

 

Bid Content

The following questions have to be answered for the official bid (both draft and final proposals). Text in square brackets [] contains considerations to be taken into account.

1. Describe briefly the conference, including side events

2. Describe briefly the conference site.

3. What date do you consider?

4. What is the nearest (international) airport?

5. Please give price quotes for the cheapest roundtrip to the conference location from Frankfurt, London, New York, San Francisco, Beijing and Tokyo (assume one week of travel with a Saturday overnight stay)

6. What transportation should participants use from the airport to the conference site?

7. Does the conference site have both a large room for a maximum of 300 people and about 5 smaller rooms for 30-70 people each? Is a wireless connection available for attendees? What about audio-visual facilities?

8. What is the approximate room rate (single and double occupancy)? Is breakfast included? [Often all the attendees of ICMI stay at the same hotel. If this is your case, the conference chair should find a hotel that can accommodate the expected number of people. Booking rooms and meals in the same hotel as the conference rooms often helps reduce the overall costs.]

9. Catering, including breaks, receptions, banquet and entertainment. We encourage organizers to provide coffee breaks and lunches in order to promote community building and discussion

10. What support can your department give for the organization of the conference (e.g., free secretarial assistance, facilities for on-line payment)?

11. What support can your department give during the conference (e.g., free secretarial assistance, PCs / Macs at the conference site)?

12. What are your plans for sponsorship? To which associations / companies / institutions do you plan to apply for financial assistance? What do you realistically expect to receive from them? What are your plans concerning a student travel stipend program? [A minimum of $5,000 should be reserved from each year's conference budget to support student travel from each of the three major geographic regions (Americas, Europe-Africa, Asia-Pacific), or $15,000 in total. For example, if a grant for $15,000 is obtained from NSF to support U.S. student travel but there is no external funding for students from other continents, then an additional $10,000 of your budget should be set aside for students from the other two continents.]

13. What actions will you take to make the conference cheaper for students (e.g., seeking financial support from other organizations, providing cheaper rooms)? What reduction do you realistically expect?

14. Provide the names of people who are foreseen or confirmed for the major Conference Committees:

General Chairs, Program Chairs, Sponsorship Chair; volunteer labor, registration handling. Describe any experience the team has had in organizing previous conferences and the number of participants at those conferences

15. Local Multimodal Interaction community

16. How do you propose to run the paper reviewing process? Do you see any possible improvements?

17. How will you organize the content of the conference to ensure a high-quality and energetic exchange of information that includes timely topics and stimulating external speakers? Please be specific in your suggestions for how you would organize the main program and workshops/tutorials

18. Any other aspects that you may find relevant for the evaluation of your proposal

Preparing a budget proposal

Based on attendance at previous editions, one might expect 200 participants at ICMI. Please provide two budgets, one for 150 participants and the other for 200. Costs that will have to be covered include:

Rental of conference space and meeting rooms

AV equipment

Coffee breaks and possibly lunch

Registration desk/technical helpers (e.g., student volunteers)

Tutorials

Producing and printing the proceedings

ACM 18% contribution and contingency fund

Conference poster and advertising

Social dinner

Welcome reception

Dinner/lunch for ICMI board meeting

 


4-17 Prix de thèse AFCP 2016-2017 (AFCP Thesis Prize)


CALL FOR APPLICATIONS

Prix de thèse AFCP 2016-2017

Since 2004, the Association Francophone de la Communication Parlée (AFCP) has awarded, every two years, a scientific prize for an outstanding French-language PhD thesis in the field (or an English-language thesis from a francophone laboratory), in order to promote fundamental or applied research in spoken communication, in information and communication science and technology (STIC), the humanities and social sciences (SHS), or the life sciences (SDV). The prize helps support and disseminate the work of young researchers in the field.

The prize is awarded by a jury composed of the elected researchers of the AFCP board. It will be officially presented at the next Journées d'Etudes sur la Parole (JEP), in 2018.

The laureate will receive 500 euros and will be invited to present their work to the spoken communication community at JEP 2018 (conference registration offered). They will also be offered the opportunity to publish their thesis as a book in the 'Parole' collection (CIPA editions).

*** NOTE: change in the periodicity of the AFCP Thesis Prize ***
Since the previous edition, the thesis prize has been awarded every two years. The 2016-2017 thesis prize will be awarded to a thesis defended between 1 January 2016 and 31 December 2017, i.e. over a two-year period. However, in order to compare applications fairly, the AFCP keeps the principle of one call for applications per year. The present call applies to theses defended between 1 January 2016 and 31 December 2016.

SCHEDULE
Application deadline: 31 January 2017
AFCP jury decision: April-May 2018
Official award ceremony: JEP 2018

ELIGIBILITY FOR THIS CALL
Anyone holding a doctorate whose thesis was prepared in a francophone laboratory, written in French or English, and defended between 1 January and 31 December 2016 may apply to the present call. A thesis may be submitted to only one edition of the prize. Only complete applications will be examined.

HOW TO APPLY
To apply:
(1)    Send an e-mail declaring your intention to apply, with your last name,
first name, thesis title, thesis supervisor, and defence date, to:
david.langlois@loria.fr. You will receive an acknowledgement of receipt.
(2)    Upload your thesis manuscript as a PDF to the AFCP website server:
http://www.afcp-parole.org/spip.php?page=depot_these
(3)    Send your complete application by e-mail to david.langlois@loria.fr.
The attachment must be a single file (named YOURNAME.pdf) containing, in order:
(i) the thesis abstract (2 pages); (ii) the list of publications; (iii) the
scanned reports of the defence committee and of the thesis reviewers; (iv) a
scanned letter of recommendation from the thesis supervisor; (v) a CV (with
full contact details, including e-mail).

_____________Application deadline: 31 January 2017____________


4-18 ASVspoof 2017 Challenge: Audio replay detection for automatic speaker verification anti-spoofing
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*

  ASVspoof 2017 CHALLENGE:
  Audio replay detection for automatic speaker verification anti-spoofing

  http://www.spoofingchallenge.org/

=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*

  Are you good at machine learning for audio signals? Are you good at
  discriminating 'fake' signals from authentic ones? Are you looking for new
  audio processing challenges? Do you work in the domain of speaker recognition?
  The ASVspoof 2017 challenge might be for you!

CHALLENGE TASK:

  Given a short clip of speech audio, determine whether it contains
  a GENUINE human voice (live recording), or a REPLAY recording (fake).

  You will be provided with a development set containing genuine/replay labeled
  audio examples, along with further metadata such as speech content and the
  devices used in the replay recordings. Your task is to develop a system that
  assigns a single 'liveness' or 'genuineness' score to new audio samples, and to
  run that system on a set of test files for which the ground truth is not
  provided. We provide a Matlab-based reference baseline method so that you can
  quickly start developing your own ideas!

  For more details, refer to the evaluation plan in the website:
  http://www.spoofingchallenge.org/
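To make the task concrete, here is a minimal sketch of one conventional recipe for such a scorer, written in Python rather than Matlab (so it is not the official reference baseline): MFCC features and two Gaussian mixture models, one trained on genuine and one on replay recordings from the labeled development data, with the average log-likelihood ratio used as the 'genuineness' score. The library choices (librosa, scikit-learn), file names and parameter values are illustrative assumptions only.

# Minimal sketch of a replay-detection scorer: MFCC frames and two GMMs
# (genuine vs. replay), scoring each trial with an average log-likelihood ratio.
# This is one conventional recipe, not the challenge's official Matlab baseline;
# file lists and parameter values below are illustrative assumptions.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_frames(wav_path, sr=16000, n_mfcc=20):
    """Load an audio file and return an (n_frames, n_mfcc) matrix of MFCCs."""
    y, _ = librosa.load(wav_path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def train_gmm(wav_paths, n_components=64):
    """Pool MFCC frames from a list of files and fit a diagonal-covariance GMM."""
    frames = np.vstack([mfcc_frames(p) for p in wav_paths])
    gmm = GaussianMixture(n_components=n_components, covariance_type="diag", max_iter=100)
    gmm.fit(frames)
    return gmm

def genuineness_score(wav_path, gmm_genuine, gmm_replay):
    """Average per-frame log-likelihood ratio; higher means 'more genuine'."""
    frames = mfcc_frames(wav_path)
    return gmm_genuine.score(frames) - gmm_replay.score(frames)

# Hypothetical usage with file lists taken from the labeled development data:
# gmm_gen = train_gmm(genuine_dev_files)
# gmm_rep = train_gmm(replay_dev_files)
# print(genuineness_score("eval_trial_0001.wav", gmm_gen, gmm_rep))

Any other classifier that maps an audio file to a single real-valued score could be substituted for the GMM step.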

BACKGROUND:

  The goal of the challenge series is to protect automatic speaker verification
  (ASV) systems from being intentionally circumvented with fake recordings, also
  known as 'spoofing attacks' or 'presentation attacks' in the context of
  biometrics. ASVspoof 2017 is the second edition of a challenge first run in
  2015, and the new focus in ASVspoof 2017 is replay attacks, especially 'unseen'
  attacks - for instance, replay environments, devices and speakers that might be
  very different from those in the development data.

  Despite 'ASV' being in the challenge title, you do NOT need expertise in
  automatic speaker verification: the task is a 'standalone' replay audio
  detection task that can be addressed as a generic acoustic pattern
  classification problem. We welcome as many new ideas to the problem as possible!

SCHEDULE:

  Development data published:   December 23rd, 2016
  Evaluation data published:    February 10, 2017
  Evaluation set scores due:    February 24, 2017
  Results available:            March 3, 2017
  Interspeech paper deadline:   March 14, 2017
  Metadata/keys published:      May 2017
  Interspeech special session:  August 2017

REGISTRATION:

  Send a free-worded e-mail to asvspoof2017@cs.uef.fi
  to register and obtain the dev data.

ORGANIZERS:

  Tomi Kinnunen, University of Eastern Finland, FINLAND
  Nicholas Evans, Eurecom, FRANCE
  Junichi Yamagishi, University of Edinburgh, UK
  Kong Aik Lee, Institute for Infocomm Research, SINGAPORE
  Md Sahidullah, University of Eastern Finland, FINLAND
  Massimiliano Todisco, Eurecom, FRANCE
  Hector Delgado, Eurecom, FRANCE


