The workshop on NLP Solutions for Under Resourced Languages NSURL 2019 will be held with ICNLSP 2019 .
ISCApad #252
Tuesday, June 11, 2019 by Chris Wellekens
3-1-1 | (2019-09-11) CfSS SIGDIAL 2019, Stockholm, Sweden
SIGDIAL 2019, 11-13 September 2019, Stockholm, Sweden
Second Call for Special Sessions
http://workshops.sigdial.org/conference20
Special Session Submission Deadline: January 28, 2019
Special Session Notification: February 18, 2019

The Special Interest Group on Discourse and Dialogue (SIGDIAL) organizers welcome the submission of special session proposals. A SIGDIAL special session is the length of a regular session at the conference, and may be organized as a poster session, a panel session, a poster session with panel discussion, or an oral presentation session. Special sessions may, at the discretion of the SIGDIAL organizers, be held as parallel sessions.

The papers submitted to special sessions are handled by the special session organizers, but for the submitted papers to be in the SIGDIAL proceedings, they have to undergo the same review process as regular papers. The reviewers for the special session papers will be taken from the SIGDIAL program committee itself, taking into account the suggestions of the session organizers, and the program chairs will make acceptance decisions. In other words, special session organizers decide what appears in the session, while the program chairs decide what appears in the proceedings and the rest of the conference program.

We welcome special session proposals on any topic of interest to the discourse and dialogue communities. Topics of interest include, but are not limited to, Explainable AI, Evaluation, Annotation, and End-to-end systems.

Submissions: Those wishing to organize a special session should prepare a two-page proposal containing: a summary of the topic of the special session; a list of organizers and sponsors; a list of people who may submit and participate in the session; and a requested format (poster/panel/oral session). These proposals should be sent to conference[at]sigdial.org by the special session proposal deadline. Special session proposals will be reviewed jointly by the general chair and program co-chairs.

Links: Those wishing to propose a special session may want to look at some of the sessions organized at recent SIGDIAL meetings.

SIGDIAL 2019 Organizing Committee
General Chair: Satoshi Nakamura, Nara Institute of Science and Technology, Japan
Program Chairs: Milica Gašić, Cambridge University, UK; Ingrid Zukerman, Monash University, Australia
Local Chair: Gabriel Skantze, KTH, Sweden
Sponsorship Chair: Mikio Nakano, Honda Research Institute Japan, Japan
Mentoring Chair: Alex Papangelis, Uber AI, USA
Publication Chair: Stefan Ultes, Daimler AG, Germany
Publicity Chair: Koichiro Yoshino, Nara Institute of Science and Technology, Japan
SIGdial President: Jason Williams, Apple, USA
SIGdial Vice President: Kallirroi Georgila, University of Southern California, USA
SIGdial Secretary: Vikram Ramanarayanan, Educational Testing Service (ETS) Research, USA
SIGdial Treasurer: Ethan Selfridge, Interactions, USA
SIGdial President Emeritus: Amanda Stent, Bloomberg, USA
3-1-2 | (2019-09-15) Welcome to INTERSPEECH 2019 (updated)
Welcome to INTERSPEECH 2019 - Willkommen in Graz, Austria, Sept 15-19, 2019
https://www.interspeech2019.org
INTERSPEECH is the world's largest and most comprehensive conference on the science and technology of spoken language processing. INTERSPEECH conferences emphasize interdisciplinary approaches addressing all aspects of speech science and technology, ranging from basic theories to advanced applications. In addition to regular oral and poster sessions, INTERSPEECH 2019 will feature plenary talks by internationally renowned experts, tutorials, survey presentations, special sessions and challenges, show & tell sessions, and exhibits. A number of satellite events will also take place around INTERSPEECH 2019.
Crossroads of Speech and Language - Our conference theme comprises three key elements important in speech research today: language diversity, diversity of applications, and diversity of representation. A complete list of the scientific areas and topics including special sessions is available at www.interspeech2019.org.
Important Dates Coming Up Soon
Proposals due for Survey Presentations: extended to May 12, 2019
See Call for Survey Presentations at https://www.interspeech2019.org/calls/surveys/
Paper acceptance/rejection notification: June 17, 2019
Graz Convention Bureau hotel booking deadline: August 1, 2019
Rooms will go fast - don't delay your booking! See https://www.interspeech2019.org/venue_and_travel/accomodation/
INTERSPEECH 2019 is the 20th Annual Conference of the International Speech Communication Association (ISCA), and this anniversary edition will introduce several innovative features. These innovations will certainly contribute to raising the attractiveness of the conference beyond the high levels already reached over the past two decades:
With the highly appreciated support of our generous sponsors, we are able to lower the fees for both full and student registrations. As ISCA will continue its tradition of supporting a sizeable number of student travel grants, we are confident that the number of participants will reach a new all-time record.
Childcare - the INTERSPEECH Kids: Bring your family along to Graz, Austria! Childcare will be provided free of charge for conference participants. For terms and conditions, see the conference webpage https://www.interspeech2019.org/venue_and_travel/childcare/. Furthermore, an INTERSPEECH parenting community will be set up on the collaboration platform Slack. To secure a place for your kids, we recommend applying for either service at your earliest convenience by sending email to childcare@interspeech2019.org.
For our Show & Tell Demonstrations we have evolved the submission format in order to improve the reviewing process. Therefore, next to a two-page description, we have asked for the upload of a simple video of your demonstration that will help decide which demonstrations will raise the highest interest at the conference. For more details see https://www.interspeech2019.org/calls/show_and_tell/.
Survey presentations are a new addition to the technical program. They will be scheduled at the start of suitable oral presentation sessions, and will be allocated a 40-minute time slot for presentation and discussion. Presentations should aim to give an overview of the state of the art for a specific topic covered by one or more of the main technical areas of the conference. The presenters will also be invited to submit survey papers to the ISCA-supported journals Computer Speech and Language and Speech Communication. For further details, see our Call for Survey Presentations https://www.interspeech2019.org/calls/surveys/.
Hackathons will set a stimulating atmosphere for the most creative developers in our community and beyond: through jams and challenges, we will bring together some of our brightest minds, students who are speech science and technology aficionados and who come to Graz as conference participants, as well as students representing the highly interdisciplinary community of several partner universities in Graz and beyond. Watch out for details to appear soon on our website and, if you are interested in participating, get in touch with our hackathon chairs at hackathon@interspeech2019.org.
We look forward to receiving your submissions and to your participation in INTERSPEECH 2019! We strive to make this a memorable event for all of you - one highlight to watch out for will be the Dancing INTERSPEECH Soiree, An Austrian Ballroom Extravaganza at Congress Graz. If you have time to arrive a few days earlier, you can enjoy the 'Aufsteirern' festival of Styrian folk culture https://www.aufsteirern.at/aufsteirern/english-information/
and right after the conference the 'Steirische Herbst' takes over - a contemporary arts festival of all genres https://www.steirischerherbst.at/en.
General Chairs:
Gernot Kubin, TU Graz, Austria
Zdravko Kacic, University of Maribor, Slovenia
Technical Chairs:
Thomas Hain, U Sheffield, UK
Björn Schuller, U Augsburg/Imperial College, Germany/UK
Organising Committee Members:
Michiel Bacchiani, Google NY, USA
Gerhard Backfried, Sail Labs Vienna, Austria
Jamilla Balint, TU Graz, Austria
Eugen Brenner, TU Graz, Austria
Mariapaola D'Imperio, Aix Marseille U, France
Dina ElZarka, U Graz, Austria
Tim Fingscheidt, TU Braunschweig, Germany
Anouschka Foltz, U Graz, Austria
Anna Fuchs, AVL Graz, Austria
Panayiotis Georgiou, USC Los Angeles, USA
Franz Graf, Joanneum Research Graz, Austria
Markus Gugatschka, MU Graz, Austria
Martin Hagmüller, TU Graz, Austria
Petra Hödl, U Graz, Austria
Robert Höldrich, KU Graz, Austria
Mario Huemer, JKU Linz, Austria
Dorothea Kolossa, RU Bochum, Germany
Christina Leitner, Joanneum Research Graz, Austria
Stefanie Lindstaedt, KNOW Centre Graz, Austria
Helen Meng, CU Hong Kong, China
Florian Metze, CMU Pittsburgh, USA
Pejman Mowlaee, Widex/TU Graz, Denmark/Austria
Elmar Noeth, FAU Erlangen-Nürnberg, Germany
Franz Pernkopf, TU Graz, Austria
Ingrid Pfandl-Buchegger, U Graz, Austria
Lukas Pfeifenberger, Ognios Salzburg, Austria
Johanna Pirker, TU Graz, Austria
Christoph Prinz, Sail Labs Vienna, Austria
Michael Pucher, ÖAW Vienna, Austria
Philipp Salletmayr, Nuance Vienna, Austria
Barbara Schuppler, TU Graz, Austria
Dagmar Schuller, audEERING, Germany
Jessica Siddins, U Graz, Austria
Wolfgang Wokurek, U Stuttgart, Germany
Kai Yu, Shanghai Jiao Tong University, China
INTERSPEECH 2019: Call for Survey Presentations (NEW)
Important Dates
Proposal submission deadline: extended to Sunday, May 12, 2019

Survey Presentations
Interspeech is the annual flagship conference of the International Speech Communication Association (ISCA), which brings together a truly interdisciplinary group of experts from academia and industry to present and discuss the latest research, technology advances and scientific discoveries in a five-day event. As such, Interspeech constantly innovates and adapts. Beyond plenary talks and oral and poster presentations, recent years have seen new ideas on how to engage with experts and industry. The 20th edition of the Interspeech conferences, to take place in Graz, Austria, will introduce a range of new presentation formats. Given the complexity of speech communication science and technology, the need for detailed technical review of sub-areas of research has become more critical than ever.

We invite proposals for innovative and engaging Research Survey Presentations. The talks are to be scheduled at the start of suitable oral presentation sessions, and will be allocated a 40-minute time slot for presentation and discussion. Presentations should aim to give an overview of the state of the art for a specific topic covered by one or more of the main technical areas of Interspeech 2019 (see the complete list of scientific areas at www.interspeech2019.org).
Proposals for Survey Presentations are required to include
Proposals will be evaluated by the technical programme and organising committees for relevance and significance, taking into account balance across areas and the available presentation slots (maximum 10 presentations). The presenters of the Interspeech 2019 survey talks will be invited to submit survey papers to the ISCA-supported journals Computer Speech and Language and Speech Communication, with the aim of inclusion in a Special Issue on the State of the Art in Speech Science and Technology. Survey presentation proposers are invited to submit a proposal via email to the Technical Program Chairs, tpc-chairs@interspeech2019.org, no later than Sunday, May 12, 2019. Please do not hesitate to contact the technical chairs with any questions that may arise prior to proposal submission. Notification of selection is scheduled for June 17, 2019.
INTERSPEECH 2019 Technical Program Chairs
Several excellent initiatives organize satellite workshops around INTERSPEECH 2019 and you can find a list of those approved by ISCA at https://www.interspeech2019.org/program/satellite_events/. Please consider contributing both to the main conference and to these important satellite events.
We are pleased to announce that ISCA-SAC will host three student events at
Interspeech 2019 in Graz, Austria. In addition to two traditional events:
Students Meet Experts and Doctoral Consortium, this year we are hosting
a new Mentoring event. This event aims to bring together SAC alumni, students
and experts. You can find the details of all three events here:
https://www.interspeech2019.org/students/student_events/
Interspeech 2020
INTERSPEECH 2021
Brno, Czech Republic, August 30 - September 3, 2021
Chairs: Hynek Hermansky and Honza Cernocky
22nd INTERSPEECH event
3-2-1 | (2019-07-06) 5th ISCA Supported Summer School on Speech Signal Processing (S4P-2019), Gandhinagar, India Dear Colleagues,
3-2-2 | (2019-09-11) CfSS SIGDIAL 2019, Stockholm, Sweden
SIGDIAL 2019, 11-13 September 2019, Stockholm, Sweden
Call for Special Sessions http://workshops.sigdial.org/conference20
Special Session Submission Deadline: January 28, 2019 Special Session Notification: February 18, 2019
The Special Interest Group on Discourse and Dialogue (SIGDIAL) organizers welcome the submission of special session proposals. A SIGDIAL special session is the length of a regular session at the conference, and may be organized as a poster session, a panel session, a poster session with panel discussion, or an oral presentation session. Special sessions may, at the discretion of the SIGDIAL organizers, be held as parallel sessions.
The papers submitted to special sessions are handled by the special session organizers, but for the submitted papers to be in the SIGDIAL proceedings, they have to undergo the same review process as regular papers. The reviewers for the special session papers will be taken from the SIGDIAL program committee itself, taking into account the suggestions of the session organizers, and the program chairs will make acceptance decisions. In other words, special session organizers decide what appears in the session, while the program chairs decide what appears in the proceedings and the rest of the conference program.
We welcome special session proposals on any topic of interest to the discourse and dialogue communities. Topics of interest include, but are not limited to, Explainable AI, Evaluation, Annotation, and End-to-end systems.
Submissions: Those wishing to organize a special session should prepare a two-page proposal containing: a summary of the topic of the special session; a list of organizers and sponsors; a list of people who may submit and participate in the session; and a requested format (poster/panel/oral session).
These proposals should be sent to conference[at]sigdial.org by the special session proposal deadline. Special session proposals will be reviewed jointly by the general chair and program co-chairs.
Links: Those wishing to propose a special session may want to look at some of the sessions organized at recent SIGDIAL meetings. http://www.sigdial.org/workshops/conference19/sessions.htm http://articulab.hcii.cs.cmu.edu/sigdial2016/
SIGDIAL 2019 Organizing Committee
General Chair: Satoshi Nakamura, Nara Institute of Science and Technology, Japan
Program Chairs: Milica Gašić, Cambridge University, UK; Ingrid Zukerman, Monash University, Australia
Local Chair: Gabriel Skantze, KTH, Sweden
Sponsorship Chair: Mikio Nakano, Honda Research Institute Japan, Japan
Mentoring Chair: Alex Papangelis, Uber AI, USA
Publication Chair: Stefan Ultes, Daimler AG, Germany
Publicity Chair: Koichiro Yoshino, Nara Institute of Science and Technology, Japan
SIGdial President: Jason Williams, Apple, USA
SIGdial Vice President: Kallirroi Georgila, University of Southern California, USA
SIGdial Secretary: Vikram Ramanarayanan, Educational Testing Service (ETS) Research, USA
SIGdial Treasurer: Ethan Selfridge, Interactions, USA
SIGdial President Emeritus: Amanda Stent, Bloomberg, USA
3-2-3 | (2019-09-11) The 20th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL 2019), KTH Royal Institute of Technology, Stockholm, Sweden
CALL FOR PAPERS
SIGDIAL 2019 CONFERENCE, September 11-13, 2019
http://www.sigdial.org/workshops/conference20/
3-2-4 | (2019-09-13) HSCR19 - The 3rd International Workshop on the HISTORY OF SPEECH COMMUNICATION RESEARCH, Vienna, Austria
HSCR19 - The 3rd International Workshop on the HISTORY OF SPEECH COMMUNICATION RESEARCH
13-14 September 2019, Vienna, Austria
<https://hscr19.kfs.oeaw.ac.at>
CALL FOR PAPERS
The aim of this workshop is to bring together researchers interested in historical aspects of all areas of speech communication research (SCR), with a focus on the interdisciplinary nature of the different fields of research. A special interest of the 2019 workshop is the relation between science and technology as exemplified by the history of SCR, including methods from the 20th century that are no longer the state of the art but are of historical relevance. Interesting questions in this respect are: How can knowledge transfers between science and technology be exemplified by the history of SCR? What is the relation between SCR and artistic practices? How was speech communication research influenced by the medical sciences?
Like the past HSCR workshops held in 2015 in Dresden and 2017 in Helsinki, this workshop is a satellite event of the INTERSPEECH conference, which will be held in Graz, Austria <https://www.interspeech2019.org/>, a meeting whose publicity features the Austro-Hungarian speech communication pioneer Wolfgang von Kempelen. The invited speaker is Peter Donhauser from the Institute for Media Archeology in Vienna.
Important Dates: Full paper submission: May 24, 2019 Notification of acceptance: July 1, 2019 Camera-ready paper submission: July 19, 2019 Workshop: September 13-14, 2019
The proceedings will be published in the book series 'Studientexte zur Sprachkommunikation' at TUDpress (Technical University Dresden).
Workshop organisation: Michael Pucher, Acoustics Research Institute, Vienna, Austria; Juergen Trouvain, Saarland University, Saarbruecken, Germany; Carina Lozo, Acoustics Research Institute, Vienna, Austria
3-2-5 | (2019-09-20) SSW10 - The 10th ISCA Speech Synthesis Workshop, Vienna, Austria
Call for Papers
SSW10 - The 10th ISCA Speech Synthesis Workshop
20-22 September 2019
Vienna, Austria
The 10th ISCA Speech Synthesis Workshop will be held in Vienna, Austria, 20-22 September 2019. The workshop is a satellite event of the INTERSPEECH 2019 conference in Graz, Austria.
Confirmed invited speakers
Aäron van den Oord (Google DeepMind, UK)
Claire Gardent (CNRS, France)
Workshop topics
Papers in all areas of speech synthesis technology are encouraged, including but not limited to:
Grapheme-to-phoneme conversion for synthesis
Text processing for speech synthesis (text normalization, syntactic and semantic analysis)
Segmental-level and/or concatenative synthesis
Signal processing/statistical model for synthesis
Speech synthesis paradigms and methods; articulatory synthesis, parametric synthesis etc.
Prosody modeling and generation
Expression, emotion and personality generation
Voice conversion and modification, morphing
Concept-to-speech conversion; speech synthesis in dialog systems
Avatars and talking faces
Cross-lingual and multilingual aspects for synthesis
Applications of synthesis technologies to communication disorders
TTS for embedded devices and computational issues
Tools and data for speech synthesis
Quality assessment/evaluation metrics in synthesis
Singing synthesis
Synthesis of non-human vocalisations
End-to-end text-to-speech synthesis
Direct speech waveform modelling and generation
Speech synthesis using non-ideal data ('found', user-contributed, etc.)
Natural language generation for speech synthesis
Special topic: Synthesis of non-standard language varieties (sociolects, dialects, second language varieties)
Call for Demos
We are planning to have a demo session to showcase new developments in speech synthesis. If you have demonstrations of your work that do not really fit in a regular oral or poster presentation, please let us know.
The workshop program will consist of a single track with invited talks, oral and poster presentations. Prospective authors are invited to submit full-length, 4-6 page papers, including figures and references. Papers can be submitted via the website http://ssw10.oeaw.ac.at.
Important dates:
Deadline for paper submission: May 10th, 2019
Final deadline for paper submission: May 17th, 2019
Notification of acceptance: July 1st, 2019
Camera-ready final versions: July 19th, 2019
Workshop: 20-22 September 2019
Blizzard Challenge Workshop 2019: September 23, 2019
We are looking forward to seeing you in Vienna.
Sincerely,
The SSW organising committee (Michael Pucher, Junichi Yamagishi, Sebastian Le Maguer, Christian Kaseß, Friedrich Neubarth)
3-2-6 | (2019-09-20) The 8th ISCA Workshop on Speech and Language Technology in Education (SLaTE 2019), Graz, Austria
Event: The 8th ISCA Workshop on Speech and Language Technology in Education (SLaTE 2019)
Location: Graz, Austria
Dates: September 20-21, 2019
Website: https://sites.google.com/view/slate2019
3-3-1 | (2019-06-14) CfS The How2 Challenge - New Tasks for Vision and Language
Call for Submissions
The How2 Challenge - New Tasks for Vision and Language
Research at the intersection of vision and language has attracted an increasing amount of attention over the last ten years. Current topics include the study of multi-modal representations, translation between modalities, bootstrapping of labels from one modality into another, visually-grounded question answering, embodied question-answering, segmentation and storytelling, and grounding the meaning of language in visual data. Still, these tasks may not be sufficient to fully exploit the potential of vision and language data.
To support research in this area, we recently released the How2 dataset, containing 2000 hours of how-to instructional videos, with audio, subtitles, Brazilian Portuguese translations, and textual summaries, making it an ideal resource to bring together researchers working on different aspects of multimodal learning. We hope that a common dataset will facilitate comparisons of tools and algorithms, and foster collaboration. We are organizing a workshop, “The How2 Challenge - New Tasks for Vision and Language” at ICML 2019, to bring together researchers and foster the exchange of ideas in this area. We seek submissions in the following two categories:
The organizers encourage both the publication of novel work that is relevant to the topics of discussion, and late-breaking results on the How2 tasks in a single format. The workshop will also feature a number of invited talks, and a moderated discussion around the challenges and opportunities that current tasks in vision and language present. We aim to stimulate discussion around new tasks that go beyond image captioning and visual question answering, and which could form the basis for future research in this area. We seek to create a venue to encourage collaboration between different sub-fields and help establish new research directions that we believe will sustain multimodal machine learning research for years to come. The How2 Challenge uses the How2 Corpus (https://srvk.github.io/how2-dataset/) Invited speakers:
Important dates:
Challenge starts: March 15, 2019
Paper submission: May 15, 2019
Notification: May 22, 2019
For more information, visit https://srvk.github.io/how2-challenge/
Contact us at how2challenge@gmail.com.
3-3-2 | (2019-06-20) Journée d'études: Matérialités vocales: voix, genre, medias, Université Jean Jaurès, Toulouse, France
'Through this one-day workshop, we wish to open a space ...'
Contact: voixetgenre@gmail.com
3-3-3 | (2019-06-20) Workshop COFLIS, Bari, Italy Dear SProSIG member,
Here are details of a workshop on 'Prominence between Cognitive Functions and Linguistic Structures' (COFLIS, http://ifl.phil-fak.uni-koeln.de/coflis.html) that might be of interest to you. The workshop is a satellite of the Phonetics and Phonology in Europe (PaPE) 2019 Conference and will be held immediately after PaPE on 20th June 2019 in Bari, a two-hour train ride from the venue of the main conference (Lecce). Abstract submission deadline for posters: 15th March 2019.
Description:
In the workshop, we capitalise on several decades of research on prosodic prominence to unravel the key components of the notion of prominence. By exploring the contribution of the signal, of meaning and of linguistic structure to the definition of prominence, and by relating prominence to basic cognitive concepts such as chunking and attention, we aim to provide a renewed understanding of prominence. The workshop will feature four invited talks, covering the measurable, structural and functional components of prominence. Rather than share new experimental evidence, invited speakers will be asked to focus on the theoretical implications of their use of prominence in their research. Invited talks will be complemented by regularly submitted and peer-reviewed submissions for poster presentations on prominence in phonetics and phonology. Submissions emphasising the challenges in defining and using the notion of prominence will be particularly welcome.
Looking forward to seeing you there!
Francesco Cangemi, Stefan Baumann, Michelina Savino, Martine Grice
(Sent to the sprosig list. To subscribe/unsubscribe, please mail list@sprosig.org. Alternative contact: Nigel Ward, Speech Prosody SIG Chair, Professor of Computer Science, University of Texas at El Paso, +1-915-747-6827, nigel@utep.edu http://www.cs.utep.edu/nigel/ )
3-3-4 | (2019-07-01) 2019 42nd International Conference on Telecommunications and Signal Processing (TSP), Budapest, Hungary
2019 42nd International Conference on Telecommunications and Signal Processing (TSP)
3-3-5 | (2019-07-01) HackaTAL 2019 (natural language processing hackathon), Toulouse, France - Grand Débat // Legal Tech
HackaTAL 2019 (natural language processing hackathon), held with the TALN/PFIA 2019 conference
Summary
Tasks: analyses of the 'grand débat' / legal chatbots
Website: http://hackatal.github.io/2019
Dates: July 1-2, 2019
Location: Université Toulouse 1 Capitole
Registration (free for students and PhD students): https://forms.gle/eNo8rogN2fWE3xedA
Twitter feed: https://twitter.com/hashtag/HackaTAL2019
The HackaTAL
As part of the TALN 2019 conference within the PFIA platform, we are organizing the 4th edition of the natural language processing hackathon, HackaTAL 2019. The goal is to bring together the NLP and AI research communities, and well beyond them, around challenges to be tackled in teams: questioning, modelling, prototyping, coding, experimenting, developing, testing, evaluating, exchanging, etc., in a dynamic and friendly atmosphere :) This year's tasks cover two themes (details below):
- analyses of the contributions to the 'grand débat national',
- design of chatbots for the legal domain.
The event will take place this year with PFIA (https://www.irit.fr/pfia2019), at Université Toulouse 1 Capitole, on July 1-2, 2019. It is very widely open to everyone: juniors and seniors, computer scientists, linguists, political scientists, lawyers, sociologists, etc., and requires no particular preparation or specific skills; anyone interested is welcome to contribute to the collaborative team work we will carry out over these two days!
Proposed challenges
1. Analyses of the 'grand débat national'
The 'grand débat national' launched by the French government in early 2019 took the form of contribution channels (websites, meetings, grievance books) through which citizens could give their opinions in response to questions and/or by theme. The release of the resulting data has produced a large corpus on which analyses can be carried out, in particular using natural language processing or discourse analysis tools. The tasks are very open; we propose in particular a focus on generation and on the argumentative nature of the contributions.
Tasks
- quantify, analyze and visualize the contributions to the 'grand débat national'
- propose semantic or discourse analyses of the contributions
- spot and extract structured arguments in the contributions to the grand débat
- generate contributions or arguments (to be specified)
- generate a proposal (sentence) from cues (words, theme, opinion)
- generate an argumentative contribution from a set of proposals
- generate a summary from a set of contributions
- generate a contribution under a style constraint ('in the manner of')
Resources
- Datasets
- https://www.data.gouv.fr/fr/datasets/donnees-ouvertes-du-grand-debat-national
- https://granddebat.fr/pages/donnees-ouvertes
- Other contribution platforms (to be completed)
- Vrai Débat: https://le-vrai-debat.fr
- Entendre la France: https://www.entendrelafrance.fr
- GraphQL API: https://granddebat.fr/developer
- Existing analyses
- Observatoire des débats (GIS Démocratie et Participation, ICPC, CEVIPOF) https://observdebats.hypotheses.org
- Maps: Cartolabe (INRIA, Paris-Saclay, CNRS) https://cartolabe-dev.lri.fr/map/debatt and Politoscope (CNRS) https://politoscope.org/le-politoscope
- Collaborative annotations of the grand débat: https://grandeannotation.fr and https://github.com/fm89/granddebat
- Grande Lecture project (filter bubbles): maps of 100 contributions per constituency http://www.grande-lecture.fr
- Witted http://gdn.witted.tech
- Democratie.app https://www.democratie.app
- Grand Débat and NLP (Vincent Claveau) http://people.irisa.fr/Vincent.Claveau/GrandDebat and (Damien Nouvel) http://damien.nouvels.net/fr/debats2019
- Gilets Jaunes (LERASS) https://www.lerass.com/wp-content/uploads/2019/02/GJ-V3.pdf
2. Legal chatbots
For several years, the deployment of conversational agents (chatbots) by many companies has been a strong trend (and was already a HackaTAL topic in 2016, https://hackatal.github.io/2016). In parallel, digital tools and technologies are increasingly used in the legal domain (LegalTech). Together, these two developments now make it possible to envisage agents that answer questions on legal issues. The proposed tasks aim at prototyping, or even deploying (as demos), such dialogue infrastructures from available resources, either for everyday legal issues faced by citizens (legal information retrieval) or in a contract-analysis setting for companies.
Tasks
- an agent that helps with legal information retrieval
- identifying the domain of the dispute
- suggesting references (links) to relevant legal texts
- finding the appropriate procedure to resolve the problem
- an agent that answers questions about legal contracts
- contract start / end dates
- legal entities mentioned (companies, administrative bodies)
- spotting parts of the contract that present risks, inconsistencies or anomalies
Resources
- Digital law: http://www.adij.fr/code-activites-du-numerique-contributions
- Droits quotidiens: plain-language legal fact sheets from https://www.droitsquotidiens.fr/fr and https://www.droitsquotidiens.be/fr
- Legal assistant creation module (Seraphin.legal) https://www.legaltech.store/categoriesproduits/legal-bots
- Technologies of the Legal Tech Lawyer network available during the hackathon https://www.legaltech.store
- Legal chatbot project: https://leeally.com/fr
- Legal data and content https://www.data.gouv.fr
Prizes
Prizes will be awarded to the best teams (vote by participants and organizers)
Provisional schedule
Monday, July 1
- 13:00-14:00: welcome and coffee
- 14:00-15:00: introduction, presentation of the hackathon
- 15:00-18:00: team development
- 18:00-19:00: invited presentations
- 19:00-: cocktail, buffet, team development
Tuesday, July 2
- 09:00-12:00: welcome, coffee, team development
- 12:00-14:00: lunch and coffee
- 14:00-16:00: team development
- 16:00-18:00: presentation of results by team
- 18:00-19:00: vote, awards ceremony, closing
Practical organization
BYOD (bring your own laptop)
No criteria to participate, the hackathon is open to everyone!
No preparation required from participants
Software and data online: https://github.com/HackaTAL/2019
Organizers
Julien Aligon (IRIT), Sébastien Beghelli (SAP), Manon Cassier (AGORA), Chloé Clavel (Télécom ParisTech), Kevin Deturck (Viseo / ERTIM), Nicolas Dugué (LIUM), Maud Gilet (Seraphin.legal), Gibran Freitas (Seraphin.legal), Loïc Grobol (Lattice), Didier Ketels (Droits Quotidiens), Charles Leconte (Seraphin.legal), Hugues de Mazancourt (YSEOP), Damien Nouvel (ERTIM), Camille Pradel (Synapse), Paul Renvoise (SAP), Thomas Saint-Aubin (Seraphin.legal), Raphaël Troncy (EURECOM), Guillaume Wisniewski (LIMSI)
3-3-6 | (2019-07-01?) Atelier Enseignement des langues et TAL - ELTAL 2019 (TALN-RECITAL 2019), Toulouse, France
CALL FOR CONTRIBUTIONS: Workshop on Language Teaching and NLP (Atelier Enseignement des langues et TAL)
3-3-7 | (2019-07-02) 2nd Call for Papers - ACM Intelligent Virtual Agents Conference - IVA 2019, Paris, France
2nd Call for Papers - ACM Intelligent Virtual Agents Conference - IVA 2019
2-5 July 2019, Paris, France
https://iva2019.sciencesconf.org
The 19th ACM International Conference on Intelligent Virtual Agents (IVA) will be held on July 2-5 2019 in Paris, France. The conference is organized by CNRS, Sorbonne University and Paris-Saclay University (France), and sponsored by ACM-SIGAI. The IVA conference started in 1998 as a workshop on Intelligent Virtual Environments at the European Conference on Artificial Intelligence in Brighton, UK, which was followed by a similar one in 1999 in Salford, Manchester, UK. Then dedicated stand-alone IVA conferences took place in Madrid, Spain, in 2001, Irsee, Germany, in 2003, and Kos, Greece, in 2005. Since 2006 IVA has become a full-fledged annual international event, which was first held in Marina del Rey, California, then Paris, France, in 2007, Tokyo, Japan, in 2008, Amsterdam, The Netherlands, in 2009, Philadelphia, Pennsylvania, USA, in 2010, Reykjavik, Iceland, in 2011, Santa Cruz, USA, in 2012, Edinburgh, UK, in 2013, Boston, USA, in 2014, Delft, The Netherlands, 2015, Los Angeles, USA, 2016, Stockholm, Sweden, 2017. IVA 2018 was held in Sydney, Australia.
PAPER SUBMISSION We invite submissions of research full papers on a broad range of topics, including but not limited to: theoretical foundations of virtual agents, agent modeling and evaluation, agents in games and simulations, and applications of virtual agents. Extended abstracts presenting late breaking work are also welcome. IVA 2019 is the 19th meeting of an interdisciplinary annual conference and the main leading scientific forum for presenting research on modeling, developing and evaluating Intelligent Virtual Agents (IVAs) with a focus on communicative abilities and social behavior. IVAs are interactive digital characters that exhibit human-like qualities and can communicate with humans and each other using natural human modalities like facial expressions, speech and gesture. They are capable of real-time perception, cognition, emotion and action that allow them to participate in dynamic social environments. In addition to presentations on theoretical issues, the conference encourages the showcasing of working applications.
IVA 2019's special topic is 'Social Learning', that is, learning while interacting socially; agents can learn from humans and humans can learn from the agents. Agents can take different roles such as tutors, peers, motivators, and coaches in training and in serious games. They can act as job recruiter, virtual patient, and nurse, to name a few applications. With this topic in mind we are seeking closer engagement with industry and also with social psychologists.
For more information, please visit the IVA 2019 website: https://iva2019.sciencesconf.org
The papers and extended abstracts will be published in the ACM digital library. All submissions will be reviewed via a double-blind review process.
IMPORTANT DATES (23h59 UTC/GMT)
Full papers - Submission Deadline: March 1, 2019; Notification of Acceptance: April 8, 2019; Camera Ready: April 22, 2019
Extended abstracts - Submission Deadline: March 1, 2019; Notification of Acceptance: April 8, 2019; Camera Ready: April 22, 2019
INVITED SPEAKERS Beatrice de Gelder (Maastricht University) Rachael Jack (Glasgow University) Verena Rieser (Heriot-Watt University) Pierre-Yves Oudeyer (INRIA - Bordeaux)
COMMITTEE Conference Chairs Catherine Pelachaud, CNRS-ISIR, Sorbonne University, France Jean-Claude Martin, CNRS-LIMSI, University Paris Saclay, France
Gale Lucas, USC Institute for Creative Technologies, USA Hendrik Buschmeier, Bielefeld University, Germany Stefan Kopp, Bielefeld University, Germany
SCOPE AND LIST OF TOPICS IVA invites submissions on a broad range of topics, including but not limited to:
List of Topics
Socio-emotional agent models:
Multimodal interaction:
Social agent architectures:
Evaluation methods and studies:
Applications:
Social learning:
WARNING: There is a conference called ICIVA 2019 that claims to be the 21st International Conference on Intelligent Virtual Agents, in Bali in October 2019. This conference is not the official IVA and is run by an organization, the World Academy of Science, Engineering and Technology, that is unfortunately well known for its predatory publishing practices. (https://en.wikipedia.org/wiki/World_Academy_of_Science,_Engineering_and_Technology) Please note that no paper submitted to ICIVA 2019 in Bali will be published in the IVA 2019 proceedings.
3-3-8 | (2019-07-08) eNTERFACE 2019, Ankara, Turkey Call for Participation | eNTERFACE 2019
Bilkent University, Ankara, Turkey, July 8th - August 2nd, 2019
- - - - - - - - - - - - - - - - - - - -
The eNTERFACE 2019 Workshop is being organized this summer in Ankara, Turkey, from July 8th to August 2nd, 2019. The Workshop will be held at Bilkent University (http://www.bilkent.edu.tr).
The eNTERFACE Workshops present an opportunity of collaborative research and software development by gathering, in a single place, a team of senior project leaders in multimodal interfaces, PhD students, and (undergraduate) students, to work on a pre-specified list of challenges, for the duration of four weeks. Participants are organized in teams, assigned to specific projects. The ultimate goal is to make this event a unique opportunity for students and experts all over the world to meet and effectively work together, so as to foster the development of tomorrow's multimodal research community.
Senior researchers, PhD, or undergraduate students interested in participating to the Workshop should send their application by emailing the Organizing Committee at enterface19@cs.bilkent.edu.tr on or before April 8th, 2019. The application should contain:
- A short CV
- A list of three preferred projects to work on
- A list of skills to offer for these projects.
Participants must cover their own travel and accommodation expenses. Information about the venue location and stay is provided on the eNTERFACE'19 website (http://www.enterface19.bilkent.edu.tr). Note that although no scholarships are available for PhD students, there are no application fees.
eNTERFACE'19 will welcome students, researchers, and seniors, working in teams on the following projects:
#1 A Multimodal Behaviour Analysis Tool for Board Game Interventions with Children
#2 Cozmo4Resto: A Practical AI Application for Human-Robot Interaction
#3 Developing a Scenario-Based Video Game Generation Framework for Virtual Reality and Mixed Reality Environments
#4 Exploring Interfaces and Interactions for Graph-based Architectural Modelling in VR
#5 Spatio-temporal and Multimodal Analysis of Personality Traits
#6 Stress and Performance Related Multi-modal Data Collection, Feature Extraction and Classification in an Interview Setting
#7 Volleyball Action Modelling for Behaviour Analysis and Interactive Multi-modal Feedback
The full detailed description of the projects is available at http://www.enterface19.bilkent.edu.tr/call-for-participation/
Best Regards,
Hamdi & Elif
-- Dr. Hamdi Dibeklioglu
Assistant Professor
Department of Computer Engineering Bilkent University 06800 Ankara, Turkey
3-3-9 | (2019-07-08) The 12th Annual International Conference on Languages & Linguistics, Athens, Greece
The 12th Annual International Conference on Languages & Linguistics is organized on 8-11 July 2019 in Athens, Greece (Academic Responsible: Dr. Valia Spiliotopoulos, Associate Professor of Professional Practice & Academic Director, Centre for English Language Learning, Teaching, and Research (CELLTR), Faculty of Education, Simon Fraser University, Canada). You are more than welcome to submit a proposal for presentation. The abstract submission deadline is 11 March 2019. You may also send us a stream-panel proposal to be organized as part of the conference. https://euagenda.eu/events/2019/07/08/12th-annual-international-conference-on-languages-linguistics-811-july-2019-athens-greece
3-3-10 | (2019-07-21) The 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2019), Paris, France
The 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval
July 21-25, 2019, Paris, France
CALL FOR PAPERS
Call for Full Papers
The annual SIGIR conference is the major international forum for the presentation of new research results, and the demonstration of new systems and techniques, in the broad field of information retrieval (IR). The 42nd ACM SIGIR conference, to be held in Paris, France, welcomes contributions related to any aspect of information retrieval and access, including theories, foundations, algorithms, applications, evaluation, and analysis. The conference and program chairs invite those working in areas related to IR to submit original papers for review.
Important Dates (timezone: anywhere on earth)
Committees
Program chairs
General chairs
Contact
All questions about full paper submissions should be emailed to sigir2019-pcchairs AT easychair DOT org.
Follow us on Twitter: @sigir2019
Follow us on our web site: http://sigir.org/sigir2019/
3-3-11 | (2019-07-21) The Apollo-11 speech challenge
HISTORY: On July 20, 1969 at 20:17 UTC, Earth witnessed one of the most challenging technology accomplishments by mankind to date, NASA's Apollo-11, with over 600M people witnessing both the landing and the first steps on the moon by Neil Armstrong and Buzz Aldrin. July 2019 marks the 50th Anniversary of the historical Apollo-11 lunar landing and first steps. https://en.wikipedia.org/wiki/Apollo_11
NSF CRSS-UTDallas Project: With support from the US National Science Foundation (NSF-CISE), CRSS-UTDallas has spent the last six years developing a hardware/software solution to digitize and recover all 30-track analog tapes from Apollo-11 (plus Apollo-13 and other missions), as well as developing speech diarization technologies to advance speech technology for such data. A total of 19,000 hours of data consisting of all NASA air-to-ground, mission control, and backroom support team discussions was released this year (news releases from this NSF sponsored project this year include: NSF, NASA, BBC, AIP (Acoustical Society of America), NPR, many on-line news sites, and involvement in a planned CNN documentary where this data is contributing, etc.). To date, this is the largest publicly available audio corpus of time-synchronized, team-based (~600 people) naturalistic communications to accomplish a real-world task.
ANNOUNCEMENT: This email is to announce the release of the FEARLESS STEPS CHALLENGE corpus, which is being shared for a proposed Special Session at ISCA INTERSPEECH-2019. The attached flyer details the 5 challenge tasks involved:
1. SAD: Speech Activity Detection
2. Speaker Diarization
3. SID: Speaker Identification
4. ASR: Automatic Speech Recognition
5. Sentiment Detection
This challenge corpus consists of 100 hours from 5 of the 30-track channels, spanning three phases of the Apollo-11 mission: (i) lift off, (ii) landing, (iii) lunar walk. All data for this challenge will be available soon via a download option for all to participate (this site has sample audio from the NSF funded project: https://app.exploreapollo.org/ ). In addition, any lab/group wishing to have access to the entire 19,000 hours can do so without charge (this is public data, so it will be available via download, or for a small fee for a hard disk and shipping to your lab). While diarization efforts in the past have concentrated on single-channel broadcast news, interviews, etc., these typically represent a single speaker or a small group discussing topics of interest. The FEARLESS STEPS CORPUS is fully time-synchronized (with an IRIG time channel) across 30 channels, with loops containing anywhere from 3 to 33 speakers working collaboratively to solve challenging problems. CRSS-UTDallas has produced full diarization output for the entire 19,000 hours of data (SAD, SID, DIAR/ASR), which is available with the corpus. REQUEST: We are proposing a Special Session at ISCA INTERSPEECH-2019. If you have interest in getting access to the FEARLESS STEPS CORPUS and potentially participating in the CHALLENGE, please reply to this email (John Hansen <john.hansen@utdallas.edu>); an expression of interest does not obligate you to submit, we are simply trying to collect a list of interested researchers for the data. Many thanks for your interest! CRSS-UTDallas Fearless Steps Team
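For readers less familiar with the first of these tasks, the minimal sketch below (plain Python with NumPy) illustrates what a speech activity detector does in its simplest form: frame the signal, compute per-frame log-energy, and threshold it. This is an illustration only; the frame length, hop size, and threshold are assumptions chosen for the example, and it is not an official Fearless Steps baseline nor part of the challenge materials.

```python
# Illustrative energy-based speech activity detection (SAD) sketch.
# NOT an official Fearless Steps baseline; frame sizes and the threshold
# below are arbitrary assumptions chosen for demonstration only.
import numpy as np

def simple_sad(signal, sample_rate, frame_ms=25, hop_ms=10, threshold_db=-35.0):
    """Label each frame as speech (True) or non-speech (False) by its log-energy."""
    frame_len = int(sample_rate * frame_ms / 1000)
    hop_len = int(sample_rate * hop_ms / 1000)
    labels = []
    for start in range(0, len(signal) - frame_len + 1, hop_len):
        frame = signal[start:start + frame_len].astype(float)
        energy_db = 10.0 * np.log10(np.mean(frame ** 2) + 1e-10)
        labels.append(energy_db > threshold_db)
    return np.array(labels)

# Tiny synthetic check: one second of faint noise followed by a louder tone.
sr = 16000
quiet = 0.001 * np.random.randn(sr)
loud = 0.1 * np.sin(2 * np.pi * 220 * np.arange(sr) / sr)
labels = simple_sad(np.concatenate([quiet, loud]), sr)
print(f"{labels.mean():.2f} of frames flagged as speech")  # roughly 0.5 expected
```

Real systems submitted to the challenge would of course replace this energy threshold with statistical or neural models robust to the channel noise and crosstalk found in the Apollo tapes; the sketch only conveys the input/output behaviour of the task.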
3-3-12 | (2019-07-22) 3rd INTERNATIONAL SUMMER SCHOOL ON DEEP LEARNING, Warsaw, Poland
3rd INTERNATIONAL SUMMER SCHOOL ON DEEP LEARNING
3-3-13 | (2019-07-25) CfP The 1st Workshop on Conversational Interaction Systems (WCIS), Paris, France
The 1st Workshop on Conversational Interaction Systems (WCIS)
=== Call for Papers ===
You are invited to participate in the 1st Workshop on Conversational Interaction Systems (WCIS), to be held as part of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2019) in Paris, France on 25th July 2019.
=== Important Dates ===
- Submissions due: May 3, 2019
- Paper Notification: May 31, 2019
- Camera Ready Papers due: June 30, 2019
- Workshop Day: July 25, 2019
=== Aim of the Workshop === Conversational interaction systems such as Alexa, Google Assistant, Siri, and Cortana have become very popular over the recent years. Such systems provide a conversational interface to a wide variety of content and on the web and in turn for IR systems. Some interactive systems like Facebook Portal and Echo Show also involve challenges with language understanding in combination with vision. Research challenges such as Dialogue System Technology Challenges and Amazon Alexa Prize have continued to inspire research in conversational AI bringing together researchers from different communities such as speech recognition, spoken language understanding, reinforcement learning, information retrieval, language generation, and multi-modal question answering.
This workshop aims to bring together researchers from academia and industry to discuss the challenges and future of conversational agents and interactive systems. We will highlight applications like recommendation systems, search, knowledge graph induction, multi-modal interaction and web question answering. The workshop will include talks from senior technical leaders and researchers to share insights associated with building conversational systems at scale. We will issue an open call for papers and will prioritize innovative and impactful contributions. Accepted papers will be presented through contributed talks or poster presentations. We will end the workshop with an open panel discussion consisting of leading researchers.
=== Workshop Topics === We invite contributions in the following areas:
=== Submission Information === The submissions of research papers must be in PDF format and must be at most 6 pages (including figures, excluding references). The submissions should follow the current ACM two-column conference format. The templates are available on the ACM website (https://www.acm.org/publications/proceedings-template). Please note that our workshop is non-archival, but accepted submissions will be hosted on the workshop website. Submissions can be optionally anonymous (posting on arXiv is allowed) and should be submitted electronically via the conference submission system by the due date.
Workshop Website: https://sites.google.com/view/wcis Submission Link : https://easychair.org/conferences/?conf=wcis2019 Templates : https://www.acm.org/publications/proceedings-template
=== Organizing Committee === - Chandra Khatri, Uber AI - Rahul Goel, Amazon Alexa AI - Abhinav Rastogi, Google AI - Alexandros Papangelis, Uber AI
Advisory Committee - Dilek Hakkani-Tur (Amazon Alexa AI), Rushin Shah (Facebook Conversational AI), Gokhan Tur (Uber AI), Zhou Yu (UC Davis), Arindam Mandal (Amazon Alexa AI), Jan Sedivy (Czech Technical University), Panagiotis Papadakos (FORTH -ICS), Raefer Gabriel (Amazon Alexa AI)
Program Committee - Semih Yavuz (University of California, Santa Barbara), Alessandra Cervone (University of Trento), Angeliki Metanillou (Amazon Alexa AI), Bhenam Hedayatnia (Amazon Alexa AI), Sanghyun Yi (Caltech), Huaixiu Zheng (Uber AI), Marco Damonte (University of Edinburgh), Tagyoung Chung (Amazon Alexa AI), Raghav Gupta (Google AI), Tanmay Rajpurohit (Genpact AI), Dian Yu (UC Davis), Pararth Shah (Facebook AI) Thanks! Abhinav, Chandra, Rahul, Alex
3-3-14 | (2019-08-04) International Conference on Phonetic Sciences, Melbourne, Australia
Don't miss your opportunity to be a part of ICPhS 2019!
Presentation
http://lig-getalp.imag.fr/icphs-2019-special-session/
The special session Computational Approaches for Documenting and Analyzing Oral Languages welcomes submissions presenting innovative speech data collection methods and/or assistance for linguists and communities of speakers: methods and tools that facilitate collection, transcription and translation of primary language data. 'Oral languages' is understood here as referring to spoken vernacular languages which depend on oral transmission, including endangered languages and (typically low-prestige) regional varieties of major languages.
The special session intends to provide up-to-date information to an audience of phoneticians about developments in machine learning that make it increasingly feasible to automate segmentation, alignment or labelling of audio recordings, even in less-documented languages. A methodological goal is to help establish the field of Computational Language Documentation and contribute to its close association with the phonetic sciences. Computational Language Documentation needs to build on the insights gained through phonetic research; conversely, research in phonetics stands to gain much from the availability of abundant and reliable data on a wider range of languages.
Laurent Besacier - LIG UGA (France)
Alexis Michaud - LACITO CNRS (France)
Martine Adda-Decker - LPP CNRS (France)
Gilles Adda - LIMSI CNRS (France)
Steven Bird - CDU (Australia)
Graham Neubig - CMU (USA)
François Pellegrino - DDL CNRS (France)
Sakriani Sakti - NAIST (Japan)
Mark Van de Velde - LLACAN CNRS (France)
This special session is endorsed by SIGUL (Joint ELRA and ISCA Special Interest Group on Under-resourced Languages)
CfP 21st International Conference on Speech and Computer (SPECOM-2019), Istanbul, Turkey
*********************************************************
SPECOM-2019 – SECOND CALL FOR PAPERS
*********************************************************
21st International Conference on Speech and Computer (SPECOM-2019)
Venue: Istanbul, Turkey, August 20-25, 2019
ORGANIZERS
The conference is organized by Bogazici University (BU, Istanbul, Turkey) in cooperation with St. Petersburg Institute for Informatics and Automation of the Russian Academy of Science (SPIIRAS, St. Petersburg, Russia) and Moscow State Linguistic University (MSLU, Moscow, Russia).
SPECOM-2019 CO-CHAIRS
Albert Ali Salah - Bogazici University, Turkey / Utrecht University, the Netherlands
Alexey Karpov - SPIIRAS, Russia
Rodmonga Potapova - MSLU, Russia
INVITED SPEAKERS
Hynek Hermansky - Johns Hopkins University, USA - 'If You Can’t Beat Them, Join Them'
Odette Scharenborg - Delft University of Technology, the Netherlands - 'The representation of speech in the human and artificial brain'
Vanessa Evers - University of Twente, the Netherlands - 'Socially intelligent robotics'
CONFERENCE TOPICS
SPECOM conference is dedicated to issues of speech technology, human-machine interaction, machine learning and signal processing, particularly:
Affective computing
Applications for human-computer interaction
Audio-visual speech processing
Automatic language identification
Computational paralinguistics
Corpus linguistics and linguistic processing
Deep learning for sound and speech processing
Forensic speech investigations and security systems
Multichannel signal processing
Multimedia processing
Multimodal analysis and synthesis
Signal processing and feature extraction
Speaker identification and diarization
Speaker verification systems
Speech and language resources
Speech analytics and audio mining
Speech dereverberation
Speech driving systems in robotics
Speech enhancement
Speech perception and speech disorders
Speech recognition and understanding
Speech translation automatic systems
Spoken dialogue systems
Spoken language processing
Text-to-speech and Speech-to-text systems
Virtual and augmented reality
SATELLITE EVENT
4th International Conference on Interactive Collaborative Robotics ICR-2019: http://www.specom.nw.ru/icr2019
OFFICIAL LANGUAGE
The official language of the event is English. However, papers on processing of languages other than English are strongly encouraged.
FORMAT OF THE CONFERENCE
The conference program will include presentation of invited talks, oral presentations, and poster/demonstration sessions.
SUBMISSION OF PAPERS
Authors are invited to submit a full paper not exceeding 10 pages formatted in the LNCS style. Those accepted will be presented either orally or as posters. The decision on the presentation format will be based upon the recommendation of several independent reviewers. The authors are asked to submit their papers using the on-line submission system: https://easychair.org/conferences/?conf=specom2019
Papers submitted to SPECOM-2019 must not be under review by any other conference or publication during the SPECOM review cycle, and must not be previously published or accepted for publication elsewhere.
PROCEEDINGS
SPECOM Proceedings will be published by Springer as a book in the Lecture Notes in Artificial Intelligence (LNAI/LNCS) series listed in all major citation databases such as Web of Science, Scopus, DBLP, etc. SPECOM Proceedings are included in the list of forthcoming proceedings for August 2019.
IMPORTANT DATES
April 22, 2019 ............ Submission of full papers (extended deadline)
May 22, 2019 ............ Notification of acceptance (extended)
June 01, 2019 ............ Camera-ready papers and early registration
Aug. 20-25, 2019 ......... Conference dates
VENUE
The conference will be organized at the Bogazici University, South campus, Albert Long Hall.
CONTACTS
All correspondence regarding the conference should be addressed to:
SPECOM-2019 Secretariat:
E-mails: specom@iias.spb.su; salah@boun.edu.tr
SPECOM-2019 web-site: http://www.specom.nw.ru
2019 Jelinek Summer Workshop on Speech and Language Technology
We are pleased to invite one page research proposals for a workshop on Machine Learning for Speech and Language Technology at ÉTS (École de Technologie Supérieure) in Montreal, CA June 24 to August 2, 2019 (Tentative)
CALL FOR PROPOSALS Deadline: Monday, November 5th, 2018.
One-page proposals are invited for the annual Frederick Jelinek Memorial Workshop in Speech and Language Technology. Proposals should aim to advance the state of the art in any of the various fields of Human Language Technology (HLT) or related areas of Machine Intelligence, including Computer Vision and Healthcare. Proposals may address emerging topics or long-standing problems. Areas of interest in 2019 include but are not limited to:
* SPEECH TECHNOLOGY: Any aspect of information extraction from speech signals; techniques that generalize in spite of very limited amounts of training data and/or which are robust to input signal variations; techniques for processing of speech in harsh environments, etc.
* NATURAL LANGUAGE PROCESSING: Knowledge discovery from text; new approaches to traditional problems such as syntactic/semantic/pragmatic analysis, machine translation, cross-language information retrieval, summarization, etc.; domain adaptation; integrated language and social analysis; etc.
* MULTIMODAL HLT: Joint models of text or speech with sensory data; grounded language learning; applications such as visual question-answering, video summarization, sign language technology, multimedia retrieval, analysis of printed or handwritten text.
* DIALOG AND LANGUAGE UNDERSTANDING: Understanding human-to-human or human-to-computer conversation; dialog management; naturalness of dialog (e.g. sentiment analysis).
* LANGUAGE AND HEALTHCARE: information extraction from electronic health records; speech and language technology in health monitoring; healthcare delivery in hospitals or the home, public health, etc.
These workshops are a continuation of the Johns Hopkins University CLSP summer workshop series, and will be hosted by various partner universities on a rotating basis. The research topics selected for investigation by teams in past workshops should serve as good examples for prospective proposers: http://www.clsp.jhu.edu/workshops/. An independent panel of experts will screen all received proposals for suitability. Results of this screening will be communicated by November 9th, 2018. Authors passing this initial screening will be invited to an interactive peer-review meeting in Baltimore on December 7-9th, 2018. Proposals will be revised at this meeting to address any outstanding concerns or new ideas. Two or three research topics and the teams to tackle them will be selected at this meeting for the 2019 workshop. We attempt to bring the best researchers to the workshop to collaboratively pursue research on the selected topics. Each topic brings together a diverse team of researchers and students. Authors of successful proposals typically lead these teams. Other senior participants come from academia, industry and government. Graduate student participants familiar with the field are selected in accordance with their demonstrated performance. Undergraduate participants, selected through a national search, are rising star seniors: new to the field and showing outstanding academic promise. If you are interested in participating in the 2019 Summer Workshop we ask that you submit a one-page research proposal for consideration, detailing the problem to be addressed. If a topic in your area of interest
is chosen as one of the topics to be pursued next summer, we expect you to be available to participate in the six-week workshop. We are not asking for an ironclad commitment at this juncture, just a good faith commitment that if a project in your area of interest is chosen, you will actively pursue it. We in turn will make a good faith effort to accommodate any personal/logistical needs to make your six-week participation possible.
Proposals must be submitted to jsalt2019-planning@jhu.edu by 23:59 EDT on Monday, 11/05/2018.
Back | Top |
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
EUSIPCO 2019
27th European Signal Processing Conference
A Coruña, Spain
September 2-6, 2019
www.eusipco2019.org
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
IMPORTANT DATES
- Satellite Workshop Proposals - February 4, 2019
- Full Paper Submission - February 18, 2019
- Notification of Acceptance - May 17, 2019
- Final Manuscript Submission - May 31, 2019
The 2019 European Signal Processing Conference (EUSIPCO) will be held in the charming
city of A Coruña, Spain, from September 2 to September 6, 2019. This flagship conference
of the European Association for Signal Processing (EURASIP) will feature a comprehensive
technical program addressing all the latest developments in research and technology for
signal processing. EUSIPCO 2019 will feature world-class speakers, oral and poster
sessions, plenaries, exhibitions, demonstrations, tutorials, and satellite workshops, and
is expected to attract many leading researchers and industry figures from all over the
world.
TECHNICAL SCOPE
We invite the submission of original, unpublished technical papers on topics including
but not limited to:
- Audio and acoustic signal processing
- Speech and language processing
- Image and video processing
- Multimedia signal processing
- Signal processing theory and methods
- Sensor array and multichannel signal processing
- Signal processing for communications
- Radar and sonar signal processing
- Signal processing over graphs and networks
- Nonlinear signal processing
- Statistical signal processing
- Compressed sensing and sparse modelling
- Optimization methods
- Machine learning
- Bio-medical image and signal processing
- Signal processing for computer vision and robotics
- Computational imaging /spectral imaging
- Information forensics and security
- Signal processing for power systems
- Signal processing for education
- Bioinformatics and genomics
- Signal processing for big data
- Signal processing for the internet of things
- Design/implementation of signal processing systems
ORGANIZING COMMITTEE
- General Co-Chairs: Mónica F. Bugallo, Stony Brook University, USA; Luis Castedo,
University of A Coruña, Spain
- Technical Program Chairs: Maria Sabrina Greco, University of Pisa, Italy; Marius
Pesavento, University of Darmstadt, Germany
- Publications Co-Chairs: Andrea Ferrari, University of Nice Sophia Antipolis, France;
Luca Martino, University Carlos III of Madrid, Spain
- Financial Chair: Ignacio Santamaría, University of Cantabria, Spain
- Special Sessions Co-Chairs: Markus Rupp, Vienna University of Technology, Austria;
Danilo Mandic, Imperial College London, UK
- Tutorials Co-Chairs: Aleksandar Dogandžić, Iowa State University, USA; Mario A.T.
Figueiredo, University of Lisboa, Portugal
- Satellite Workshops Chair: Wolfgang Utschick, Technical University of Munich, Germany
- Students Activities Co-Chairs: Pau Closas, Northeastern University, USA; Jordi
Vilà-Valls, University of Toulouse / ISAE-SUPAERO, France
- Industrial Program Chair: Víctor Elvira, IMT Lille Douai, France
- Publicity Chair: Javier Vía, University of Cantabria, Spain
- International Liaisons: Ke Guan, Beijing Jiaotong University, China; Henry Argüello,
Industrial University of Santander, Colombia
- Local Chair: Roberto López-Valcarce, atlanTTic, University of Vigo, Spain
Back | Top |
*********** CALL FOR PAPERS ***********
Text Mining and Applications (TeMA'19) Track of EPIA'19
TeMA 2019 will be held at the 19th Portuguese Conference on Artificial
Intelligence (EPIA 2019), taking place at Vila Real, Portugal, from 3rd
to 6th September 2019. This track is organized under the auspices of
the Portuguese Association for Artificial Intelligence (APPIA).
EPIA 2019 URL http://www.epia2019.utad.pt/index.php/call-for-papers
This announcement contains the following information:
[1] Track description; [2] Topics of interest; [3] Important dates;
[4] Paper submission; [5] Track fees; [6] Organizing Committee; [7]
Program Committee and [8] Contacts.
[1] Track Description
The 8th edition of the Text Mining and Applications (TeMA 2019) track
will be a forum for researchers working in Human Language
Technologies, i.e. Natural Language Processing (NLP), Computational
Linguistics (CL), Natural Language Engineering (NLE), Text Mining
(TM), Information Retrieval (IR), and related areas.
The most natural form of sharing knowledge is indeed through textual
documents. Especially on the Web, a huge amount of textual information
is openly published every day, on many different topics and written in
natural language, thus offering new insights and many opportunities
for innovative applications of Human Language Technologies.
Following recent advances in general AI sub-fields such as Natural
Language Processing (NLP) and Machine Learning (ML), text mining is
now even more valuable as a tool for bridging the gap between language
theories and effective use of natural language content, for
harnessing the power of semi-structured and unstructured data, and for
enabling important applications in real-world heterogeneous
environments. Both hidden and new knowledge can be discovered by using
text mining methods, at multiple levels and in multiple dimensions,
and often with high commercial value.
Authors are invited to submit their papers on any of the issues
identified in section [2]. Papers will be blindly reviewed by at least
three members of the Program Committee. All accepted papers will be
published by Springer in a volume of Springer's Lecture Notes in
Artificial Intelligence (LNAI) corresponding to the proceedings of the
19th EPIA Conference on Artificial Intelligence, EPIA 2019.
[2] Topics of Interest
Topics include but are not limited to:
Text Mining, Natural Language Processing, and Social Media Content Analysis
- Entity Recognition and Disambiguation
- Relation Extraction
- Analysis of Opinions, Emotions and Sentiments
- Text Clustering and Classification
- Machine Translation
- Summarization
- Word Sense Disambiguation
- Co-Reference Resolution
- Language Modeling
- Syntax and Parsing
- Distributional Models and Semantics
- Multi-Word Units
- Lexical Knowledge Acquisition
- Spatio-Temporal Text Mining
- Entailment and Paraphrases
- Natural Language Generation
- Language Resources: Acquisition and Usage
- Cross-Lingual Approaches
- Algorithms and Data Structures for Text Mining
Applications:
- Information Retrieval
- Information Extraction
- Question-Answering and Dialogue Systems
- Text-Based Prediction and Forecasting
- Web Content Annotation
- Computational Social Science
- Computational Journalism
- Health and Well-being
- Big Data Analysis
[3] Important dates
April 15, 2019: Paper submission deadline
May 31, 2019: Notification of paper acceptance
June 15, 2019: Deadline for camera-ready versions
September 3-6, 2019: Conference dates
[4] Paper submission
Submissions must be full technical papers on substantial, original,
and previously unpublished research. Papers can have a maximum length
of 12 pages. All papers should be prepared according to the formatting
instructions of Springer LNAI series. Authors should omit their names
from the submitted papers, and should take reasonable care to avoid
indirectly disclosing their identity. References to own work may be
included in the paper, as long as referred to in the third person. All
papers should be submitted in PDF format through the conference
management website at:
https://www.easychair.org/conferences/?conf=epia2019
[5] Track Fees:
Track participants must register at the main EPIA 2019 conference. No
extra fee shall be paid for attending this track.
[6] Organizing Committee:
Joaquim F. Ferreira da Silva, Universidade Nova de Lisboa, Portugal
Altigran Soares da Silva, Universidade Federal do Amazonas, Brazil
[7] Program Committee:
Adeline Nazarenko - University of Paris 13, France
Alberto Diaz - Universidad Complutense de Madrid, Spain
Alberto Simões - Algoritmi Center - University of Minho, Portugal
Alexandre Rademaker - IBM / FGV, Brazil
Altigran Silva - Universidade Federal do Amazonas, Brazil
Aline Villavicencio - Universidade Federal do Rio Grande do Sul, Brazil
Antoine Doucet - University of Caen, France
António Branco - Universidade de Lisboa, Portugal
Béatrice Daille - University of Nantes, France
Belinda Maia - Universidade do Porto, Portugal
Bruno Martins - Instituto Superior Técnico - Universidade de Lisboa, Portugal
Eric de La Clergerie - INRIA, France
Fernando Batista - Instituto Universitário de Lisboa, Portugal
Francisco Couto - Faculdade de Ciências - Universidade de Lisboa, Portugal
Gaël Dias - University of Caen Basse-Normandie, France
Hugo Oliveira - Universidade de Coimbra, Portugal
Irene Rodrigues - Universidade de Évora, Portugal
Jesús Vilares - University of A Coruña, Spain
Joaquim Ferreira da Silva - Faculdade de Ciências e Tecnologia - Universidade Nova de Lisboa, Portugal
Katarzyna Wegrzyn-Wolska - ESIGETEL, France
Luciano Barbosa - Universidade Federal de Pernambuco, Brazil
Luisa Coheur - Universidade Técnica de Lisboa, Portugal
Manuel Vilares Ferro - University of Vigo, Spain
Mário Silva - Instituto Superior Técnico - Universidade de Lisboa, Portugal
Mohand Boughanem - University of Toulouse III, France
Nuno Marques - Universidade Nova de Lisboa, Portugal
Pablo Gamallo - Faculdade de Filologia, Santiago de Compostela, Spain
Paulo Quaresma - Universidade de Évora, Portugal
Pavel Brazdil - University of Porto, Portugal
Pável Calado - Instituto Superior Técnico - Universidade de Lisboa, Portugal
Sebastião Pais - Universidade da Beira Interior, Portugal
Sérgio Nunes - Faculdade de Engenharia - Universidade do Porto, Portugal
Vitor Jorge Rocio - Universidade Aberta, Portugal
[8] Contacts
Joaquim Francisco Ferreira da Silva, DI/FCT/UNL, Quinta da Torre,
2829-516 Caparica, Portugal. Tel: +351 21 294 8536 (ext. 10732) -
Fax: +351 21 294 8541 - E-mail: jfs [at] fct [dot] unl [dot] pt
Back | Top |
Call for Papers
CBMI 2019 - Dublin, Ireland, 4-6 Sept 2019
International Conference on Content-Based Multimedia Indexing
http://cbmi2019.org/
CBMI is the annual conference that brings together the various communities
involved in all aspects of content-based multimedia indexing for retrieval,
browsing, management, visualization and analytics. After 15 successful
editions of the CBMI workshop, CBMI became a conference in 2018 and the next
edition will take place in Dublin, Ireland from 4-6 September 2019. The
scientific program will include invited keynote talks, regular papers,
demonstration papers and three special sessions on 'Medical Image Mining and
Health' (MIME), 'Signals And Multimedia' (SAM), and 'Multimedia Indexing for
Comics' (MIC).
Authors are encouraged to submit previously unpublished research papers in the
broad field of content-based multimedia indexing and applications using the
CBMI 2019 submission system: https://www.conftool.org/cbmi2019/. We wish to
highlight significant contributions addressing the main problems of search and
retrieval but also the related and equally important issues of multimedia
content management, user interaction, large-scale search, machine learning in
retrieval, social media indexing and retrieval.
Authors can submit full length (6 pages - to be presented as oral
presentation) or short papers (4 pages - to be presented as posters) to the
regular or special sessions. All paper limits are assumed to include
references. Additionally demonstration papers (up to 4 pages) may also be
submitted that highlight interesting and novel demos of CBMI-related
technologies. The submissions are peer reviewed in a single blind process. The
language of the conference is English. The CBMI 2019 conference adheres to the
IEEE paper formatting guidelines. When preparing your submission, please
follow the IEEE guidelines given by IEEE at the Manuscript Templates for
Conference Proceedings.
The CBMI proceedings are traditionally indexed and distributed by IEEE Xplore
and ACM DL. In addition, authors of the best papers of the conference will be
invited to submit extended versions of their contributions to a special issue
of a leading journal in the field.
Topics of interest include, but are not limited to, the following:
- Audio, visual and multimedia indexing;
- Multimodal and cross-modal indexing;
- Deep learning for multimedia indexing;
- Visual content extraction;
- Audio (speech, music, etc.) content extraction;
- Identification and tracking of semantic regions and events;
- Social media analysis;
- Metadata generation, coding and transformation;
- Multimedia information retrieval (image, audio, video, text);
- Mobile media retrieval;
- Event-based media processing and retrieval;
- Affective/emotional interaction or interfaces for multimedia retrieval;
- Multimedia data mining and analytics;
- Multimedia recommendation;
- Large scale multimedia database management;
- Summarization, browsing and organization of multimedia content;
- Personalization and content adaptation;
- User interaction and relevance feedback;
- Multimedia interfaces, presentation and visualization tools;
- Evaluation and benchmarking of multimedia retrieval systems;
- Applications of multimedia retrieval, e.g., medicine, lifelogs, satellite imagery, video surveillance;
- Cultural heritage applications.
Important dates:
Full/short paper submission: May 04, 2019
Demo paper submission: May 04, 2019
Special sessions paper submission: May 04, 2019
Notification of acceptance: June 18, 2019
Camera-ready papers due: June 29, 2019
Back | Top |
TSD 2019 - LAST CALL FOR PAPERS
**************************************************************************
The twenty-second International Conference on
TEXT, SPEECH and DIALOGUE (TSD 2019)
Ljubljana, Slovenia
September 10-13, 2019
http://www.tsdconference.org
IMPORTANT
The submission deadline of March 31 is approaching. We will not extend the deadline.
However, if you need a few extra days, please let us know and do
the following: register yourself and submit your paper with a valid abstract.
Put 'UNFINISHED' as the first word in the abstract (both in the system
and in the paper). We need at least the abstract to organize the reviews.
When you finish your work, please update the paper.
TSD HIGHLIGHTS
* Keynote speakers:
Denis Jouvet (Loria, Nancy, France),
Aline Villavicencio (University of Essex, UK),
Bhiksha Raj (Carnegie Mellon University, USA),
Ryan Cotterell (University of Cambridge, UK).
* TSD is traditionally published by Springer-Verlag and regularly listed in
all major citation databases: Thomson Reuters Conference Proceedings
Citation Index, DBLP, SCOPUS, EI, INSPEC, COMPENDEX, etc.
* The TSD2019 conference is officially recognized as an INTERSPEECH 2019
satellite event.
* The TSD2019 conference is supported by the International Speech
Communication Association (ISCA). It holds the status of an ISCA
Supported Event.
* TSD offers a high-standard transparent review process - double blind,
final reviewers' discussion.
* TSD is going to take place in the beautiful centre of Ljubljana, the
capital of Slovenia.
* The conference is organized in cooperation with the Faculty of Electrical
Engineering, University of Ljubljana, Slovenia.
* TSD provides an all-service package (conference access and material, all
meals, one social event, etc.) for an easily affordable fee.
IMPORTANT DATES
March 31, 2019 ............... Deadline for submission of contributions
May 10, 2019 ................. Notification of acceptance or rejection
May 31, 2019 ................. Deadline for submission of camera-ready papers
September 10-13, 2019 ........ TSD2019 conference date
The proceedings will be provided on flash drives in the form of navigable
content. Printed books will be available for an extra fee.
TSD SERIES
The TSD series has evolved as a prime forum for interaction between
researchers in both spoken and written language processing from all over
the world. Proceedings of the TSD conference form a book published by
Springer-Verlag in their Lecture Notes in Artificial Intelligence (LNAI)
series. The TSD proceedings are regularly indexed by Thomson Reuters
Conference Proceedings Citation Index. LNAI series are listed in all major
citation databases such as DBLP, SCOPUS, EI, INSPEC, or COMPENDEX.
TOPICS
Topics of the 22nd conference will include (but are not limited to):
Speech Recognition (multilingual, continuous, emotional speech,
handicapped speaker, out-of-vocabulary words, alternative way of
feature extraction, new models for acoustic and language modeling).
Corpora and Language Resources (monolingual, multilingual, text, and
spoken corpora, large web corpora, disambiguation, specialized
lexicons, dictionaries).
Speech and Spoken Language Generation (multilingual, high fidelity
speech synthesis, computer singing).
Tagging, Classification and Parsing of Text and Speech (multilingual
processing, sentiment analysis, credibility analysis, automatic text
labeling, summarization, authorship attribution).
Semantic Processing of Text and Speech (information extraction,
information retrieval, data mining, semantic web, knowledge
representation, inference, ontologies, sense disambiguation, plagiarism
detection).
Integrating Applications of Text and Speech Processing (machine
translation, natural language understanding, question-answering
strategies, assistive technologies).
Automatic Dialogue Systems (self-learning, multilingual,
question-answering systems, dialogue strategies, prosody in dialogues).
Multimodal Techniques and Modeling (video processing, facial animation,
visual speech synthesis, user modeling, emotion and personality
modeling).
PROGRAMME COMMITTEE
All programme committee members are listed on the conference web pages
https://www.kiv.zcu.cz/tsd2019/index.php?page=committees
OFFICIAL LANGUAGE
The official language of the event is English, however, papers on issues
related to text and speech processing in languages other than English are
strongly encouraged.
LOCATION
Ljubljana, the Slovenian capital - a city, whose name means `The beloved',
is a great place to visit, although you will not find world-renowned
attractions here. Nevertheless, it has history, tradition, style, arts
& culture, an atmosphere that is both Central European and Mediterranean;
many also add the adjectives multilingual and hospitable. Being close to
many of the major sights and attractions of Slovenia, Ljubljana can also be
your starting point to discover the country's diversity.
Ljubljana is situated about halfway between Vienna and Venice. Its
character and appearance have been shaped by diverse cultural influences
and historical events. While in winter it is remarkable for its dreamy
Central European character, it is the relaxed Mediterranean feel that
stands out during summer.
Ljubljana is a picturesque city full of romantic views, with a medieval
castle towering over its historical city centre and a calm river spanned by
a series of beautiful bridges running right through it. It's a city with
a medieval heart, a city of the Baroque and Art Nouveau, with an old castle
resting above it like a sleeping beauty.
In Ljubljana, eastern and western cultures met, and the Italian concept of
art combined with the sculptural aesthetics of Central European cathedrals.
The city owes its present appearance partly to Italian baroque and partly
to Art Nouveau, which is the style of the numerous buildings erected
immediately after the earthquake of 1895.
The central point of interest in Ljubljana is the Ljubljana Castle,
watching over the city from the centrally located castle hill. The
beginnings of the medieval castle go back to the 9th century, although the
castle building is first mentioned only in 1144. It gained its present
image after the earthquake of 1511 and following further renovations at the
beginning of the 17th century. At present, a funicular connects the Old
Town to the castle hill, adding an even more convenient access alternative
to the tourist train.
Ljubljana lies at the centre of Slovenia. In the morning you can visit the
stunningly beautiful Lake Bled, Lake Bohinj or Soca Valley in the high
mountainous region of the Alps, and in the evening enjoy the sunset in one
of the charming little towns on the Adriatic coast.
It only takes minutes to reach the peaceful and unspoiled countryside of
the city's green surrounding areas, which offer endless opportunities for
hiking, cycling, fishing and horse riding.
We are very excited that the TSD conference is leaving the Czech Republic
for the first time in its 22-year history and that TSD2019
is going to take place in such a wonderful location as Ljubljana.
ABOUT CONFERENCE
The conference is organized by the Faculty of Applied Sciences, University
of West Bohemia, Pilsen, the Faculty of Informatics, Masaryk University,
Brno, and the Faculty of Electrical Engineering, University of Ljubljana.
VENUE
Faculty of Electrical Engineering - University of Ljubljana
Trzaska cesta 25
SI-1000 Ljubljana
CONTACT
The preferred way of contacting the conference organizing committee is
writing an e-mail to:
Ms Lucie Tauchenova, TSD2019 Conference Secretary
E-mail: tsd2019@tsdconference.org
Phone: +420 702 994 699
All paper correspondence regarding the conference should be addressed to:
TSD2019 - NTIS P2
Fakulta aplikovanych ved
Zapadoceska univerzita v Plzni
Univerzitni 8
CZ-306 14 Plzen
Czech Republic
Fax: +420 377 632 402 - Please, mark the faxed material with large
capitals 'TSD' on top.
TSD2019 conference website: http://www.tsdconference.org/
Back | Top |
ICNLSP 2019, the third edition of the International Conference on Natural Language and Speech Processing, will be held at the University of Trento on September 12-13, 2019.
ICNLSP aims to attract contributions related to natural language and speech processing, covering both basic theory and applications. Regular and poster sessions will be organized, in addition to keynotes presented by senior international researchers.
This year, a workshop on NLP solutions for under-resourced languages will be held with ICNLSP.
Authors are invited to present their work relevant to the topics of the conference.
Topics of ICNLSP 2019 include, but are not limited to, the following:
Signal processing, acoustic modeling
Architecture of speech recognition system
Deep learning for speech recognition
Analysis of speech
Paralinguistics in Speech and Language
Pathological speech and language
Speech coding
Speech comprehension
Summarization
Speech Translation
Speech synthesis
Speaker and language identification
Phonetics, phonology and prosody
Cognition and natural language processing
Text categorization
Sentiment analysis and opinion mining
Computational Social Web
Arabic dialects processing
Under-resourced languages: tools and corpora
New language models
Arabic OCR
Lexical semantics and knowledge representation
Requirements engineering and NLP
NLP tools for software requirements and engineering
Knowledge fundamentals
Knowledge management systems
Information extraction
Data mining and information retrieval
Machine translation
Submission
Papers must be submitted via the online paper submission system Easychair.
https://easychair.org/conferences/?conf=icnlsp2019
Keynote speakers
Important dates
Submission deadline: 30 April 2019
Notification of acceptance: 15 June 2019
Camera-ready paper due: 10 July 2019
Conference dates: 12, 13 September 2019
Chairs:
Dr. Mourad Abbas
Dr. Abed Alhakim Freihat
Back | Top |
Call for Abstracts: Young Female Researchers in Speech Workshop 2019
What it is about:
The Young Female Researchers in Speech Workshop (YFRSW) is a workshop for female undergraduate and Master's students who are currently working in speech science and technology. It is designed to foster interest in research in our field among women at the undergraduate or Master's level who have not yet committed to a PhD in speech science or technology, but who have had some research experience at their colleges and universities via individual or group projects.
The workshop is to be held prior to Interspeech 2019 on Saturday, September 14th, 2019, in Graz, Austria. It will feature panel discussions with PhD students and senior researchers in the field, student poster presentations and a mentoring session. Student poster presentations should give an overview of a current or planned research project in which the student is involved, with an emphasis on promoting discussion.
The workshop is the fourth of its kind, after the successful inaugural YFRSW 2016 at Interspeech 2016 in San Francisco, USA, YFRSW 2017 at Interspeech 2017 in Stockholm, Sweden, and YFRSW 2018 at Interspeech 2018 in Hyderabad, India.
Travel funds are available for students accepted to attend the workshop.
How to submit:
To attend the workshop please send an abstract describing your (planned) research (maximum of 300 words). This abstract should be submitted by email to yfrsw2019@gmail.com by June 1, 2019.
Abstracts will be reviewed by the committee and applicants will be notified as soon as possible. We will emphasize inclusivity, although all submissions should fall within the core scientific domains covered by Interspeech.
Please direct any questions to: yfrsw2019@gmail.com
Back | Top |
Zero Resource Speech Challenge 2019: TTS without T
Back | Top |
*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*
ASVspoof 2019 CHALLENGE:
Future horizons in spoofed/fake audio detection
http://www.asvspoof.org/
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=
Can you distinguish computer-generated or replayed speech from authentic/bona fide speech? Are you able to design algorithms to detect spoofs/fakes automatically?
Are you concerned with the security of voice-driven interfaces?
Are you searching for new challenges in machine learning and signal processing?
Join ASVspoof 2019 - the effort to develop next-generation countermeasures for the automatic detection of spoofed/fake audio. Combining the forces of leading research institutes and industry, ASVspoof 2019 encompasses two separate sub-challenges in logical and physical access control, and provides a common database of the most advanced spoofing attacks to date. The aim is to study both the limits and opportunities of spoofing countermeasures in the context of automatic speaker verification and fake audio detection.
CHALLENGE TASK
Given a short audio clip, determine whether it represents authentic/bona fide human speech, or a spoof/fake (replay, synthesized speech or converted voice). You will be provided with a large database of labelled training and development data and will develop machine learning and signal processing countermeasures to distinguish automatically between the two. Countermeasure performance will be evaluated jointly with an automatic speaker verification (ASV) system provided by the organisers.
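As an illustration only (and not the official challenge baseline), the following Python sketch shows one classic countermeasure recipe: fit one Gaussian mixture model (GMM) to frame-level features of bona fide training speech and another to spoofed speech, then score a trial by the average log-likelihood ratio. The feature choice (MFCCs via librosa), model sizes and file lists are illustrative assumptions.

# Minimal GMM log-likelihood-ratio countermeasure sketch (illustrative only;
# feature type, model size and file lists are assumptions, not the challenge baseline).
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def frame_features(path, sr=16000, n_mfcc=20):
    """Load one utterance and return frame-level MFCCs, shape (frames, n_mfcc)."""
    y, sr = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def train_gmm(paths, n_components=64):
    """Pool frames from all training utterances and fit a diagonal-covariance GMM."""
    frames = np.vstack([frame_features(p) for p in paths])
    return GaussianMixture(n_components=n_components, covariance_type="diag").fit(frames)

def cm_score(path, gmm_bonafide, gmm_spoof):
    """Average log-likelihood ratio; higher means 'more likely bona fide'."""
    x = frame_features(path)
    return gmm_bonafide.score(x) - gmm_spoof.score(x)

# Hypothetical usage with file lists drawn from the labelled training partition:
# gmm_bf = train_gmm(bonafide_train_paths)
# gmm_sp = train_gmm(spoof_train_paths)
# print(cm_score("eval_utterance.flac", gmm_bf, gmm_sp))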
BACKGROUND:
The ASVspoof 2019 challenge follows on from two previous ASVspoof challenges, held in 2015 and 2017. The 2015 edition focused on spoofed speech generated with text-to-speech (TTS) and voice conversion (VC) technologies. The 2017 edition focused on replay spoofing. The 2019 edition is the first to address all three forms of attack and the latest, cutting-edge spoofing attack technology.
ADVANCES:
Today's state-of-the-art TTS and VC technologies produce speech signals that are all but perceptually indistinguishable from bona fide speech. The LOGICAL ACCESS sub-challenge aims to determine whether these advances in TTS and VC pose a greater threat to the reliability of automatic speaker verification and spoofing countermeasure technologies. The PHYSICAL ACCESS sub-challenge builds upon the 2017 edition with a far more controlled evaluation setup which extends the focus of ASVspoof to fake audio detection in, e.g., the manipulation of voice-driven interfaces (smart speakers).
METRICS:
The 2019 edition also adopts a new metric, the tandem detection cost function (t-DCF). Adoption of the t-DCF metric aligns ASVspoof more closely to the field of ASV. The challenge nonetheless focuses on the development of standalone spoofing countermeasures; participation in ASVspoof 2019 does NOT require any expertise in ASV. The equal error rate (EER) used in previous editions remains as a secondary metric, supporting the wider implications of ASVspoof involving fake audio detection.
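For the secondary EER metric, a minimal computation sketch is given below (assuming the convention that higher countermeasure scores mean 'more likely bona fide'): the EER is the error rate at the decision threshold where the false acceptance rate on spoof trials equals the false rejection rate on bona fide trials.

import numpy as np

def equal_error_rate(bonafide_scores, spoof_scores):
    """EER for detection scores where higher means 'more likely bona fide'."""
    thresholds = np.sort(np.concatenate([bonafide_scores, spoof_scores]))
    far = np.array([(spoof_scores >= t).mean() for t in thresholds])    # spoofs accepted
    frr = np.array([(bonafide_scores < t).mean() for t in thresholds])  # bona fide rejected
    i = np.argmin(np.abs(far - frr))                                    # closest crossing point
    return (far[i] + frr[i]) / 2.0

# Toy example with hypothetical scores; perfectly separated scores give EER = 0.0:
# equal_error_rate(np.array([2.1, 1.7, 0.9]), np.array([-1.2, 0.3, -0.5]))

The primary t-DCF metric additionally weights such error rates by the costs and priors of the tandem ASV system; its exact definition is given in the challenge evaluation plan.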
SCHEDULE:
Training and development data release: 19th December 2018
Evaluation data release: 15th February 2019
Deadline to submit evaluation scores: 22nd February 2019
Organisers return results to participants: 15th March 2019
INTERSPEECH paper submission deadline: 29th March 2019
REGISTRATION:
Registration should be performed once only for each participating entity and by sending an email to registration@asvspoof.org with 'ASVspoof 2019 registration' as the subject line. The mail body should include: (i) the name of the team; (ii) the name of the contact person; (iii) their country; (iv) their status (academic/non-academic), and (v) the challenge scenario(s) for which they wish to participate (indicative only). Data download links will be communicated to registered contact persons only.
MAILING LIST:
Subscribe to the general mailing list by sending an e-mail with subject line 'subscribe asvspoof2019' to sympa@asvspoof.org. To post messages to the mailing list itself, send e-mails to asvspoof2019@asvspoof.org
ORGANIZERS*:
Junichi Yamagishi, NII, Japan & Univ. of Edinburgh, UK
Massimiliano Todisco, EURECOM, France
Md Sahidullah, Inria, France
Héctor Delgado, EURECOM, France
Xin Wang, National Institute of Informatics, Japan
Nicholas Evans, EURECOM, France
Tomi Kinnunen, University of Eastern Finland, Finland
Kong Aik Lee, NEC, Japan
Ville Vestman, University of Eastern Finland, Finland
* Equal contribution
CONTRIBUTORS:
University of Edinburgh, UK; Nara Institute of Science and Technology, Japan, University of Science and Technology of China, China; iFlytek Research, China; Saarland University / DFKI GmbH, Germany; Trinity College Dublin, Ireland; NTT Communication Science Laboratories, Japan; HOYA, Japan; Google LLC (Text-to-Speech team, Google Brain team, Deepmind); University of Avignon, France; Aalto University, Finland; University of Eastern Finland, Finland; EURECOM, France.
FURTHER INFORMATION:
info@asvspoof.org
Back | Top |
The NASA Apollo program relied on a massive team of dedicated scientists, engineers, and specialists working together seamlessly to accomplish what is probably one of mankind’s greatest technological achievements. The Fearless Steps Initiative by UTD-CRSS has led to the digitization of 19,000 hours of analog audio data and the development of algorithms to extract meaningful information from this multichannel naturalistic data. Further exploring the intricate communication characteristics of problem solving on a scale as complex as going to the moon can lead to the development of novel algorithms beneficial for speech processing and conversational understanding in challenging environments. As an initial step to motivate a streamlined and collaborative effort from the speech and language community, we propose the FEARLESS STEPS (FS-1) Challenge.
Most of the data for the Apollo Missions is unlabeled and has thus far motivated the development of some unsupervised and semi-supervised speech algorithms. The Challenge Tasks for this session encourage the development of such solutions for core speech and language tasks on data with limited ground-truth/low resource availability, and serve as the first step towards extracting high-level information from such massive unlabeled corpora.
This edition of the Fearless Steps Challenge will include all or most of the following tasks:
The necessary ground truth labels and transcripts will be provided for the training/development set data.
For more information, please visit the release website: https://exploreapollo.org/
The Corpus Data can be found at http://fearlesssteps.exploreapollo.org/
Organizers:
Back | Top |
The organizing committee has the pleasure to invite you to the 1st Automatic Assessment of Parkinsonian Speech Workshop which will be held in Cambridge, MA, USA, at the premises of the Massachusetts Institute of Technology.
Parkinson’s disease affects the cells producing dopamine in the brain. Its symptoms include muscle rigidity, tremors, and changes in speech. After diagnosis, treatments can help relieve symptoms, but there is no cure. An early diagnosis is therefore essential, and speech is one of the biomarkers requiring more research to evaluate its potential for this purpose.
Despite the amount of research in the field, there is still room for developing new knowledge, not only about the characteristics of the speech of people affected by Parkinson’s disease, but also about its correlation with the extent of the disease. Automatic systems to evaluate and assess the disease will take advantage of the new knowledge generated in the field to become more accurate and robust.
AAPS'2019 aims at fostering interdisciplinary collaboration and interactions among researchers in the field of the automatic assessment of parkinsonian speech, thus reaching the whole scientific community.
Topics of interest include, but are not limited to:
Prospective authors are asked to submit an extended abstract of their contribution electronically, as a .pdf document, using the on-line management tool of the workshop. Submissions should follow the linked template, with a maximum length of 4 pages (recommended length: 1 page), including figures and tables, in English. The submitted document should include the title, the authors' names, affiliations and addresses, as well as the e-mail address and phone number of the corresponding author. The final version of the paper, also fitted to the linked template and with a maximum length of 4 pages, will be submitted by the authors after acceptance by the program committee.
Workshop proceedings will be edited in electronic form with an ISBN. Author registration to the conference is required for accepted papers to be included in the proceedings. Extended versions of the best papers presented at the workshop will be eligible for publication in a refereed journal.
If you are thinking about submitting your work to the workshop, please keep in mind the deadlines set by the local organizing committee:
Back | Top |
**********************************************************
61st International Symposium ELMAR-2019
**********************************************************
September 23-25, 2019
Zadar, Croatia
Extended paper submission deadline: April 25, 2019
http://www.elmar-zadar.org/
CALL FOR PAPERS
TECHNICAL CO-SPONSORS
IEEE Region 8
IEEE Croatia Section
IEEE Croatia Section SP, AP and MTT Chapters
TOPICS
--> Image and Video Processing
--> Multimedia Communications
--> Speech and Audio Processing
--> Wireless Communications
--> Telecommunications
--> Mobile communications
--> Antennas and Propagation
--> Robotics
--> e-Learning and m-Learning
--> Satellite technologies
--> Radar systems
--> Navigation Systems
--> Ship Electronic Systems
--> Transport systems
--> Power Electronics and Automation
--> Naval Architecture
--> Sea Ecology
--> Special Sessions:
http://www.elmar-zadar.org/2019/index.html#sessions
KEYNOTE SPEAKERS
Prof. Snjezana Rimac Drlje, PhD
Vice-Dean for International Cooperation
Chair of Multimedia Systems and Digital Television
J.J. Strossmayer University of Osijek
Faculty of Electrical Engineering, Computer Science and Information Technology Osijek
Osijek, Croatia
SCHEDULE OF IMPORTANT DATES
Extended deadline for submission of full papers: April 25, 2019
Notification of acceptance mailed out by: May 22, 2019
Submission of (final) camera-ready papers: June 5, 2019
Preliminary program available online by: June 21, 2019
Registration deadline: June 26, 2019
E-mail: elmar2019@fer.hr
http://www.elmar-zadar.org/
============================================================
IEEE Region 8: http://www.ieeer8.org/
============================================================
Back | Top |
Regular Paper Submission: May 31, 2019 (Extended)
Author Notification: July 18, 2019
Camera Ready Submission: July 30, 2019
Demo Paper Submission: August 10, 2019
Workshop dates: September 27-29, 2019
Back | Top |
Call for participation - FinTOC shared task
- The Second Financial Narrative Processing Workshop (FNP 2019)
- The 22nd Nordic Conference on Computational Linguistics (NoDaLiDa'19)
Task: Predict a Table of Content (ToC) from financial documents.
Two sub-tasks are proposed:
Detection of titles
Prediction of a ToC
Shared task webpage: http://wp.lancs.ac.uk/cfie/shared-task/
Shared task contact: fin.toc.task@gmail.com
Important dates
Registration deadline: June 29, 2019
Submission deadline: July 13, 2019
Workshop day: September 30, 2019
Back | Top |
The Second Financial Narrative Processing Workshop (FNP 2019)
To be held at The 22nd Nordic Conference on Computational Linguistics (NoDaLiDa'19) in Turku, Finland.
Workshop URL: http://wp.lancs.ac.uk/cfie/fnp2019/
Shared Task URL: http://wp.lancs.ac.uk/cfie/shared-task/
WORKSHOP DESCRIPTION:
Following the success of the First FNP 2018 at LREC18, Japan, we have had a great deal of positive feedback and interest in continuing the development of the financial narrative processing field. This prompted us to hold a training workshop in textual analysis methods for financial narratives that was oversubscribed, showing that there is increasing interest in the subject. As a result, we are now motivated to organise the Second Financial Narrative Processing Workshop, FNP 2019. The workshop will continue focusing on the use of Natural Language Processing (NLP), Machine Learning (ML), and Corpus Linguistics (CL) methods related to all aspects of financial text mining and financial narrative processing (FNP). There is a growing interest in the application of automatic and computer-aided approaches for extracting, summarising, and analysing both qualitative and quantitative financial data. In recent years, previous manual small-scale research in the Accounting and Finance literature has been scaled up with the aid of NLP and ML methods, for example to examine approaches to retrieving structured content from financial reports, and to study the causes and consequences of corporate disclosure and financial reporting outcomes. One focal point of the proposed workshop is to develop a better understanding of the determinants of financial disclosure quality and the factors that influence the quality of information disclosed to investors beyond the quantitative data reported in the financial statements. The workshop will also encourage efforts to build resources and tools to help advance work on financial narrative processing (including content retrieval and classification), given the dearth of publicly available datasets and the high cost and limited access of content providers. The workshop aims to advance research on the lexical properties and narrative aspects of corporate disclosures, including glossy (PDF) annual reports, US 10-K and 10-Q financial documents, corporate press releases (including earnings announcements), conference calls, media articles, social media, etc.
For FNP 2019 we are collaborating with Fortia Financial Solutions, a French-based company specialised in financial investment and risk management, on organising a shared task on automatic detection of financial document structure as part of FNP 2019: http://wp.lancs.ac.uk/cfie/shared-task/ Systems participating in the shared task can be submitted as short papers to be part of the workshop's proceedings.
MOTIVATION AND TOPICS OF INTEREST:
Financial narrative disclosures represent a large part of firms' overall financial communications with investors. Textual commentaries help to clarify issues obscured by complex accounting methods and footnote disclosures. In addition, narratives summarise corporate strategy, contextualise results, explain governance arrangements, describe corporate social responsibility policy, and provide forward-looking information for investors. They also provide management with an opportunity to obfuscate accounting results and manipulate readers' perceptions of underlying economic performance.
ORGANISING COMMITTEE:
- General Chair: Dr Mahmoud El-Haj (SCC, Lancaster University, UK)
- Program Chairs: Dr Paul Rayson (SCC, Lancaster University, UK) and Prof Steven Young (LUMS, Lancaster University, UK)
- Publication Chair: Dr Houda Bouamor (Fortia Financial Solutions, France)
- Publicity Chairs: Dr Sira Ferradans (Fortia Financial Solutions, France) and Dr Cathrine Salzedo (LUMS, Lancaster University, UK)
IMPORTANT DATES:
March 25, 2019: First Call for Workshop Papers
June 5, 2019: Second Call for Workshop Papers
August 18, 2019 (Midnight PST): Workshop Paper Submissions Deadline
August 18, 2019: Notification of Acceptance
September 6, 2019 (Midnight GMT-12): Camera Ready Papers
September 18, 2019: Workshop Schedule
Monday September 30, 2019: Workshop Date (Half day)
CALL FOR PAPERS:
We invite submissions on topics that include, but are not limited to, the following:
- Applying core technologies to financial narratives: morphological analysis, disambiguation, tokenization, POS tagging, named entity recognition, chunking, parsing, semantic role labeling, sentiment analysis, document quality and advanced readability metrics, etc.
- Financial narratives resources: dictionaries, annotated data, tools and technologies, etc.
Given the international nature of the conference, we particularly welcome FNP papers reporting non-English and multilingual research, describing the different regulatory regimes within which companies operate internationally. Submissions may include work in progress as well as finished work. Submissions must have a clear focus on specific issues pertaining to financial narrative processing, whether English or multilingual. Descriptions of commercial systems are welcome, but authors should be willing to discuss the details of their work. Dual submissions should be disclosed at time of submission.
PAPER SUBMISSION INSTRUCTIONS:
Submissions must describe substantial, original, completed and unpublished work. Wherever appropriate, concrete evaluation and analysis should be included. Submissions may consist of no less than four (4) and up to eight (8) pages of content, plus unlimited references. Authors of accepted papers are required to submit a camera-ready version to be included in the final proceedings, and will be notified after the notification of acceptance with further details. Accepted papers will be published in the ACL Anthology (https://aclanthology.info). The proceedings will include both oral and poster papers, in the same format. Authors of papers accepted for oral or poster presentation at FNP 2019 must notify the program chairs by the camera-ready deadline as to whether the paper will be presented. We will not accept for publication or presentation papers that overlap significantly in content or results with papers that will be (or have been) published elsewhere.
PROGRAMME COMMITTEE:
Andrew Moore (SCC, Lancaster University, UK)
Antonio Moreno Sandoval (UAM, Spain)
Catherine Salzedo (LUMS, Lancaster University, UK)
Denys Proux (Naver Labs, Switzerland)
Djamé Seddah (INRIA-Paris, France)
Eshrag Refaee (Jazan University, Saudi Arabia)
George Giannakopoulos (SKEL Lab - NCSR Demokritos, Greece)
Haithem Afli (Cork Institute of Technology, Ireland)
Houda Bouamor (Fortia Financial Solutions, France)
Mahmoud El-Haj (SCC, Lancaster University, UK)
Marina Litvak (Sami Shamoon College of Engineering, Israel)
Martin Walker (University of Manchester, UK)
Paul Rayson (SCC, Lancaster University, UK)
Back | Top |
'SpeD 2019' Organizing Committee invites you to attend the 10th Conference on Speech Technology and Human-Computer Dialogue, at Timisoara, Romania. SpeD 2019 celebrates its 10th edition by extending the topics of interest from spoken language technology and human-computer dialogue towards broader, related domains: multimodal signal processing, biosecurity, human-robot interaction and embedded systems.
Furthermore, 'SpeD 2019' conference and international forum will reflect some of the latest tendencies in machine learning for audio, speech, image and multimodal information processing, biometrics and security for IoT, intelligent robots and embedded systems. 'SpeD 2019' will also focus on the most recent applications in these domains.
The series of 'SpeD' conferences is sponsored by IEEE and EURASIP. As all previous editions since 2009, 'SpeD 2019' Proceedings will be indexed by the IEEE Xplore and by Thomson Conference Proceedings Citation Index.
Topics:
Schedule:
Conference website: https://sped.pub.ro/
Back | Top |
7th INTERNATIONAL CONFERENCE ON STATISTICAL LANGUAGE AND SPEECH PROCESSING
SLSP 2019
Ljubljana, Slovenia
October 14-16, 2019
Co-organized by:
Jožef Stefan Institute
Institute for Research Development, Training and Advice (IRDTA), Brussels/London
http://slsp2019.irdta.eu/
**********************************************************************************
AIMS:
SLSP is a yearly conference series aimed at promoting and displaying excellent research on the wide spectrum of statistical methods that are currently in use in computational language or speech processing. It aims at attracting contributions from both fields. Though there exist large conferences and workshops hosting contributions to any of these areas, SLSP is a more focused meeting where synergies between the two domains will hopefully happen. In SLSP 2019, significant room will be reserved for young scholars at the beginning of their careers and particular focus will be put on methodology.
VENUE:
SLSP 2019 will take place in Ljubljana, a charming city full of art and one of the smallest capital cities in Europe. The venue will be:
Jožef Stefan Institute
Jamova cesta 39
1000 Ljubljana
Slovenia
SCOPE:
The conference invites submissions discussing the employment of statistical models (including machine learning) within language and speech processing. Topics of either theoretical or applied interest include, but are not limited to:
anaphora and coreference resolution
authorship identification, plagiarism and spam filtering
computer-aided translation
corpora and language resources
data mining and semantic web
information extraction
information retrieval
knowledge representation and ontologies
lexicons and dictionaries
machine translation
multimodal technologies
natural language understanding
neural representation of speech and language
opinion mining and sentiment analysis
parsing
part-of-speech tagging
question-answering systems
semantic role labelling
speaker identification and verification
speech and language generation
speech recognition
speech synthesis
speech transcription
spelling correction
spoken dialogue systems
term extraction
text categorisation
text summarisation
user modeling
STRUCTURE:
SLSP 2019 will consist of:
invited talks
peer-reviewed contributions
posters
INVITED SPEAKERS:
tba
PROGRAMME COMMITTEE: (to be completed)
Pushpak Bhattacharyya (Indian Institute of Technology, Bombay, IN)
Fethi Bougares (University of Le Mans, FR)
Philipp Cimiano (Bielefeld University, DE)
Nikos Fakotakis (University of Patras, GR)
Robert Gaizauskas (University of Sheffield, UK)
Julio Gonzalo (National Distance Education University, ES)
Reinhold Häb-Umbach (Paderborn University, DE)
Julia Hirschberg (Columbia University, US)
Jing Huang (JD AI Research, CN)
Mei-Yuh Hwang (Mobvoi AI Lab, US)
Nancy Ide (Vassar College, US)
Martin Karafiát (Brno University of Technology, CZ)
Vangelis Karkaletsis (National Center for Scientific Research 'Demokritos', GR)
Tomi Kinnunen (University of Eastern Finland, FI)
Carlos Martín-Vide (Rovira i Virgili University, ES, chair)
David Milne (University of Technology Sydney, AU)
Marie-Francine Moens (KU Leuven, BE)
Preslav Nakov (Qatar Computing Research Institute, QA)
Elmar Nöth (University of Erlangen-Nuremberg, DE)
Stephen Pulman (University of Oxford, UK)
Matthew Purver (Queen Mary University of London, UK)
Mats Rooth (Cornell University, US)
Tony Russell-Rose (UX Labs, UK)
Horacio Saggion (Pompeu Fabra University, ES)
Tanja Schultz (University of Bremen, DE)
Efstathios Stamatatos (University of the Aegean, GR)
Erik Tjong Kim Sang (Netherlands eScience Center, NL)
Isabel Trancoso (Instituto Superior Técnico, PT)
Josef van Genabith (German Research Center for Artificial Intelligence, DE)
K. Vijay-Shanker (University of Delaware, US)
Atro Voutilainen (University of Helsinki, FI)
Hsin-Min Wang (Academia Sinica, TW)
Hua Xu (University of Texas, Houston, US)
Edmund S. Yu (Syracuse University, US)
François Yvon (CNRS - Limsi, FR)
Wlodek Zadrozny (University of North Carolina, Charlotte, US)
ORGANIZING COMMITTEE:
Tina Anžič (Ljubljana)
Jan Kralj (Ljubljana)
Matej Martinc (Ljubljana)
Sara Morales (Brussels)
Manuel Parra-Royón (Granada)
Senja Pollak (Ljubljana, co-chair)
Matthew Purver (London)
David Silva (London, co-chair)
Anita Valmarska (Ljubljana)
SUBMISSIONS:
Authors are invited to submit non-anonymized papers in English presenting original and unpublished research. Papers should not exceed 12 single-spaced pages (all included) and should be prepared according to the standard format for Springer Verlag's LNCS series (see http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0).
Submissions have to be uploaded to:
https://easychair.org/conferences/?conf=slsp2019
PUBLICATIONS:
A volume of proceedings published by Springer in the LNCS/LNAI series will be available by the time of the conference.
A special issue of a major journal will be later published containing peer-reviewed substantially extended versions of some of the papers contributed to the conference. Submissions to it will be by invitation.
REGISTRATION:
The registration form can be found at:
http://slsp2019.irdta.eu/Registration.php
DEADLINES (all at 23:59 CET):
Paper submission: June 1, 2019
Notification of paper acceptance or rejection: July 8, 2019
Final version of the paper for the LNCS/LNAI proceedings: July 15, 2019
Early registration: July 15, 2019
Late registration: September 30, 2019
Submission to the journal special issue: January 16, 2020
QUESTIONS AND FURTHER INFORMATION:
david@irdta.eu
ACKNOWLEDGMENTS:
Institut 'Jožef Stefan'
Institute for Research Development, Training and Advice (IRDTA), Brussels/London
Back | Top |
The 21st ACM International Conference on Multimodal Interaction (ICMI 2019)
Suzhou, Jiangsu, China October 14-18, 2019
ICMI 2019 Doctoral Consortium - Call for Contributions
The goal of the ICMI Doctoral Consortium is to provide PhD students with an opportunity to present their work to a group of mentors and peers from a diverse set of academic and industrial institutions, to receive feedback on their doctoral research plan and progress, and to build a cohort of young researchers interested in designing and developing multimodal interfaces and interaction. We invite students from all PhD granting institutions who are in the process of forming or carrying out a plan for their PhD research in the area of designing and developing multimodal interfaces. We expect to provide some economic support to attendees that will cover part of their costs (travel, registration, etc.).
Who should apply?
While we encourage applications from students at any stage of doctoral training, the doctoral consortium will benefit most the students who are in the process of forming or developing their doctoral research. These students will have passed their qualifiers or have completed the majority of their coursework, will be planning or developing their dissertation research, and will not be very close to completing their dissertation research. Students from any PhD granting institution whose research falls within designing and developing multimodal interfaces and interaction are encouraged to apply.
Submission Guidelines
Graduate students pursuing a PhD degree in a field related to designing multimodal interfaces should submit the following materials:
1. Extended Abstract: A four-page description of your PhD research plan and progress in the ACM SigConf format. Your extended abstract should follow the same outline, details, and format as the ICMI short papers. The submissions will not be anonymous. In particular, it should cover:
- The key research questions and motivation of your research
- Background and related work that informs your research
- A statement of hypotheses or a description of the scope of the technical problem
- Your research plan, outlining stages of system development or series of studies
- The research approach and methodology
- Your results to date (if any) and a description of remaining work
- A statement of research contributions to date (if any) and expected contributions of your PhD work
2. Advisor Letter: A one-page letter of nomination from the student's PhD advisor. This letter is not a letter of support; instead, it should focus on the student's PhD plan and how the Doctoral Consortium event might contribute to the student's PhD training and research.
3. CV: A two-page curriculum vitae of the student.
All materials should be prepared in PDF format and submitted through the ICMI submission system.
Review Process
The Doctoral Consortium will follow a review process in which submissions will be evaluated by a number of factors, in order of importance: (1) the quality of the submission, (2) the expected benefits of the consortium for the student's PhD research, and (3) the student's contribution to the diversity of topics, backgrounds, and institutions. More particularly, the quality of the submission will be evaluated based on the potential contributions of the research to the field of multimodal interfaces and its impact on the field and beyond. Finally, we hope to achieve a diversity of research topics, disciplinary backgrounds, methodological approaches, and home institutions in this year's Doctoral Consortium cohort. To ensure a diverse sample, we do not expect more than two students to be invited from each institution. Women and other underrepresented groups are especially encouraged to apply.
Financial Support
The conference is pleased to offer partial financial support for doctoral students participating in the Doctoral Consortium and attending the conference. Only students who apply and are accepted for participation in the Doctoral Consortium can be considered for financial support. The number and size of the offers of financial support are contingent upon the number of invited student participants.
Attendance
All authors of accepted submissions are expected to attend the Doctoral Consortium and the main conference poster session. The attendees will present their PhD work as a short talk at the Consortium and as a poster at the conference poster session. A detailed program for the Consortium and the participation guidelines for the poster session will be available after the camera-ready deadline.
Process
Submission format: Four-page extended abstract using the ACM format (https://www.acm.org/publications/proceedings-template#aL2)
Submission system: https://new.precisionconference.com/user/login?society=sigchi/
Selection process: Peer-reviewed
Presentation format: Talk on consortium day and participation in the conference poster session
Proceedings: Included in conference proceedings and ACM Digital Library
Doctoral Consortium Co-chairs: Daniel McDuff (Microsoft Research) and Kristiina Jokinen (AIST)
Important Dates
Submission deadline: June 28, 2019 (23:59 PST)
Notifications: July 26, 2019
Camera-ready: August 9, 2019
Doctoral Consortium date: October 14, 2019
Questions?
For more information and updates on the ICMI 2019 Doctoral Consortium, visit the Doctoral Consortium page of the main conference website: https://icmi.acm.org/2019/index.php?id=cfdc
For further questions, contact the Doctoral Consortium co-chairs:
Daniel McDuff (Microsoft Research) damcduff@microsoft.com Kristiina Jokinen (AI Research Center, AIST Tokyo Waterfront) kristiina.jokinen@aist.go.jp
Back | Top |
Call for Workshops
The International Conference on Multimodal Interaction (ICMI 2019) will be held in Suzhou, Jiangsu, China, during October 14-18, 2019. ICMI is the premier international conference for multidisciplinary research on multimodal human-human and human-computer interaction analysis, interface design, and system development. The theme of the ICMI 2019 conference is Multimodal representation of human behavior in context. ICMI has developed a tradition of hosting workshops in conjunction with the main conference to foster discourse on new research, technologies, social science models and applications. Examples of recent workshops include:
Multi-sensorial Approaches to Human-Food Interaction
Group Interaction Frontiers in Technology
Modeling Cognitive Processes from Multimodal Data
Human-Habitat for Health
Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction
Investigating Social Interactions with Artificial Agents
Child Computer Interaction
Multimodal Interaction for Education
We are seeking workshop proposals on emerging research areas related to the main conference topics, and those that focus on multi-disciplinary research. We would also strongly encourage workshops that will include a diverse set of keynote speakers (factors to consider include: gender, ethnic background, institutions, years of experience, geography, etc.).
The format, style, and content of accepted workshops are under the control of the workshop organizers. Workshops may be of a half-day or one day in duration. Workshop organizers will be expected to manage the workshop content, be present to moderate the discussion and panels, invite experts in the domain, and maintain a website for the workshop. Workshop papers will be indexed by ACM.
Submission
Prospective workshop organizers are invited to submit proposals in PDF format (Max. 3 pages). Please email proposals to the workshop chairs: Hongwei Ding (hwding@sjtu.edu.cn), Carlos Busso (busso@utdallas.edu) and Tadas Baltrusaitis (tadyla@gmail.com). The proposal should include the following:
Workshop title
List of organizers including affiliation, email address, and short biographies
Workshop motivation, expected outcomes and impact
Tentative list of keynote speakers
Workshop format (by invitation only, call for papers, etc.), anticipated number of talks/posters, workshop duration (half-day or full-day) including tentative program
Planned advertisement means, website hosting, and estimated participation
Paper review procedure (single/double-blind, internal/external, solicited/invited-only, pool of reviewers, etc.)
Paper submission and acceptance deadlines
Special space and equipment requests, if any
Important Dates
Workshop proposal submission: Saturday, February 16, 2019
Notification of acceptance: Saturday, March 2, 2019
Workshop Date: Monday, October 14, 2019
Back | Top |
The 21st ACM International Conference on Multimodal Interaction (ICMI 2019) will be held in Suzhou, China, October 14 to 18, 2019 (http://icmi.acm.org/2019/index.php?id=home). ICMI is the premier international forum for multidisciplinary research on multimodal human-human and human-computer interaction, interfaces, and system development. The conference focuses on theoretical and empirical foundations, component technologies, and combined multimodal processing techniques that define the field of multimodal interaction analysis, interface design, and system development.
You are cordially invited to submit high-quality research papers in the areas of multimodal interaction from the behavioral and social sciences, which may help us better understand how technology can be used to advance scientific knowledge of intelligent interaction. Many thanks.
Abstract Submission (must include title, authors, abstract): May 1, 2019 (11:59pm PST)
Final Submission: May 13, 2019 (11:59pm PST), extended
Back | Top |
Call for Papers
-------------------
Second International Workshop on Multimedia Content Analysis in Sports @ ACM Multimedia, October 21-25, 2019, Nice, France
We invite you to submit papers to the 2nd International Workshop on Multimedia Content Analysis in Sports, to be held in Nice, France, together with ACM Multimedia 2019. The ambition of this workshop is to bring together researchers and practitioners from different disciplines to share ideas on current multimedia/multimodal content analysis research in sports. We welcome multimodal-based research contributions as well as best-practice contributions focusing on the following (non-exhaustive) list of topics:
– annotation and indexing
– athlete and object tracking
– activity recognition, classification and evaluation
– event detection and indexing
– performance assessment
– injury analysis and prevention
– data driven analysis in sports
– graphical augmentation and visualization in sports
– automated training assistance
– camera pose and motion tracking
– brave new ideas / extraordinary multimodal solutions
Submissions can be of varying length, from 4 to 8 pages, plus additional pages for references. There is no distinction between long and short papers; the authors may themselves decide on the appropriate length of the paper.
Please refer to the workshop website for further information:
http://multimedia-computing.de/mmsports2019/
IMPORTANT DATES
Submission Due: July 8, 2019
Acceptance Notification: August 5, 2019
Camera Ready Submission: August 19, 2019
Workshop Date: TBA; either Oct 21 or Oct 25, 2019
____________________________________________
Prof. Dr. Rainer Lienhart
Multimedia Computing & Computer Vision
Institut für Informatik, Universität Augsburg
Informatik Building N, Room # 1013
Universitätsstr. 6 a, 86159 Augsburg, Germany
email: Rainer.Lienhart@informatik.uni-augsburg.de
phone: +49 (821) 598-5703 cell: +49 (163) 960 5367
Skype: skype@videoanalysis.org, Threema or FaceTime
____________________________________________
Back | Top |
Second Call for Papers
Conference on
Rational Approaches in Language Science (RAILS)
24-26 October, 2019
Saarbruecken, Germany
The language sciences increasingly have in common their adoption of rational probabilistic approaches, such as Bayesian, Information Theoretic, and Game Theoretic frameworks. The goal of this conference is to bring together speech and language researchers whose scientific contributions reflect the full diversity of disciplines and methodologies (from speech to discourse, on-line processing to corpus-based investigation, through to language change and evolution) that have benefited from, and share, such rational explanations.
Keynote speakers:
Gerhard Jaeger, University of Tuebingen
'Bayesian typology'
Gina Kuperberg, Tufts University and Massachusetts General Hospital
'What a probabilistic computational approach can tell us about the neurobiology of language comprehension'
Hannah Rohde, University of Edinburgh
'Why are you telling me this: Comprehension as a process of reverse engineering'
Rory Turnbull, University of Hawai'i at Mānoa
'Phonetic reduction, natural selection, and bounded rationality'
We seek submissions from across the language sciences (including speech science, theoretical linguistics, empirical linguistics, psycholinguistics and neuroscience, computational linguistics, as well as language development, change and evolution) which apply rational probabilistic explanations to linguistic phenomena, or bring novel experimental findings to bear on such accounts.
Submissions in the form of a 400-word abstract (excluding references), in text format, are to be submitted electronically at: http://linguistlist.org/easyabs/rails2019
Submissions open: 1 May
Submissions due: 1 June
Notification of acceptance: 1 July
Conference: October 24-26
Submissions will be considered for either oral or poster presentation. Details on the
submission format and procedure are also available at the conference web-page:
http://rails.sfb1102.uni-saarland.de
Scientific and financial support for this conference comes from the Collaborative Research Center SFB1102 'Information Density and Linguistic Encoding': sfb1102.uni-saarland.de
Conference organizers:
Matthew Crocker (chair)
Bistra Andreeva
Stefania Degaetano-Ortlieb
Vera Demberg
Robin Lemke
Noortje Venhuizen
Back | Top |
‘R-atics 6
[paʁi], 7th-8th November 2019
The Laboratory of Phonetics and Phonology of the Université Sorbonne Nouvelle is organizing the ‘R-atics 6 colloquium in Paris on 7-8 November 2019. ‘R-atics gathers an international network of researchers working on different issues concerning r sounds and has previously been organized in Nijmegen (The Netherlands), Brussels (Belgium), Bozen (Italy), Grenoble (France) and Leeuwarden (The Netherlands). This international mid-size conference promotes mutual cooperation, in a friendly environment, on the phonetic, phonological and sociolinguistic aspects of rhotics. In addition to the usual themes, the 2019 edition will emphasize r's in indigenous languages and the automatic treatment of variation in rhotic consonants.
Invited speakers
Martine Adda-Dekker: LPP, CNRS-UMR 7018, Sorbonne nouvelle & Limsi CNRS
Koen Sebregts: University of Utrecht
Moges Yigezu: University of Addis Ababa
Organizing committee
Didier Demolin: LPP, CNRS-UMR 7018, Sorbonne nouvelle
Alexis Dehais Underdown: LPP, CNRS-UMR 7018, Sorbonne nouvelle
Cédric Gendrot: LPP, CNRS-UMR 7018, Sorbonne nouvelle
Naomi Yamaguchi: LPP, CNRS-UMR 7018, Sorbonne nouvelle
Key dates
Abstract deadline: 1st July 2019.
Authors notification: 15th July 2019
Registration: 30th September 2019
Colloquium: 7th-8th November 2019
CALL FOR PAPERS
We invite contributions on any theme concerning r sounds, and in particular on the two themes chosen for this edition: r's in indigenous languages and the automatic treatment of rhotic variation. Relevant topics include:
the diversity of r realizations
the socio-phonetic aspects of r realizations
the production and perception of rhotics in a language or in a language family
the description of rhotics in the world’s languages
the phonological status of r sounds
issues in the acquisition of rhotics in L1 and L2
aspects of corrective phonetics in the pronunciation of r sounds
r sounds in clinical phonetics
Abstracts of 500 words (plus bibliographic references) should be submitted in PDF format via https://easychair.org/conferences/?conf=ratics6 by 1 July 2019. Authors should anonymize their submission (do not indicate author names and affiliations, and do not refer directly to your own publications) and specify the preferred type of presentation (oral, poster, or no preference).
Back | Top |
1st International Seminar on the Foundations of Speech (SEFOS): Breathing, Pausing, and the Voice.
Website and Contact
www.sefos.dk, sefos@sdu.dk
Theme
SEFOS is dedicated to the physiological patterns, acoustic signals, and communicative functions of breathing, pausing and the voice - in human-human as well as in human-machine interaction. SEFOS aims to bring together researchers from different disciplines, such as phonetics, phonology, psychology, medicine, acoustics, speech technology, and computational linguistics.
Location and Venue
SEFOS will be held from 1 to 3 December 2019 at the University of Southern Denmark, Campus Sønderborg.
Keynote speakers
Plinio Barbosa, Department of Linguistics, Unicamp, Campinas, Brazil.
'Stylistic and cross-linguistic differences in the prosodic organization of breathing, stressing, and pausing'
Jens Edlund, Department of Speech, Music and Hearing, KTH Royal Institute of Technology, Stockholm, Sweden.
'Breathing in interaction between humans and between humans, machines and robots'
Donna Erickson, Kanazawa Medical University, Kanazawa, Japan / Haskins Laboratories, USA
'Our voice: A multifaceted finely-tuned instrument for any occasion and culture'
Call for papers
We invite contributions on all the various aspects of breathing, pausing, and the voice. Reflecting the cross-disciplinary nature of these fields of research, we are particularly pleased to receive submissions from across the speech sciences and beyond, including, for example, medicine, rhetoric, technology, music, and zoology. Topics of interest include (but are not limited to):
acoustic and physiological analyses of speech breathing
breathing, pausing, and phonation patterns under different mental, emotional, and physical conditions
pathological/clinical aspects of breathing, pausing, and the voice, including pain.
personality traits (including attractiveness), speaking styles, and their links to breathing, pausing, and the voice
breathing, pausing, and/or voice patterns in human-machine-interaction and speech technology in general
interrelations between breathing, pausing, and the voice
interrelations with other features of prosody, such as F0 and intensity
silent, fluent, and disfluent pauses, hesitation phenomena
breathing, pausing and interaction, turn-taking, discourse control
forms and functions of voice quality in communication
singing and its relation to phonation and breathing
new technological or methodological developments in the analysis of breathing, pausing, and the voice
resources and corpora
Submissions to SEFOS should be made in the form of extended abstracts, consisting of 2 pages of text plus a third page for (additional) figures and references. The abstract template is provided on the SEFOS website: www.sefos.dk.
Please submit your extended abstract to sefos@sdu.dk and indicate in your email whether you would prefer an oral or a poster presentation (A0 portrait).
Submission and deadlines
Deadline for submissions is the 30th of September. This deadline will NOT be extended!
Submissions will be subject to anonymous peer review by two reviewers. Information about acceptance/rejection will be provided on the 15th of October.
Note that the peer review only checks whether submissions meet basic scientific standards concerning method and analysis. No paper that meets these standards will be excluded unless the maximum number of submissions (100) is exceeded. Therefore, we strongly encourage authors to register for SEFOS when submitting the paper or even earlier. All authors who register for SEFOS by 30 June pay only the reduced student fee of €90!
Accepted contributions will have the opportunity to hand in revised abstracts up to one week before the seminar. Accepted papers will be included on a SEFOS Proceedings USB drive (with an ISBN).
Note that authors of selected papers will have the opportunity to publish a full paper in a special issue of the International Journal of Linguistics (Acta Linguistica Hafniensia): https://www.tandfonline.com/action/journalInformation?show=aimsScope&journalCode=salh20
Registration fees
Regular participant: 150 €
Student participant (incl. PhD students): 90 €
The registration fee includes the SEFOS Proceedings, a conference bag, free lunches (food and soft drinks), a welcome reception, and a social event.
Organizing committee and contact
Oliver Niebuhr: Mads Clausen Institute, SDU Electrical Engineering, Centre for Industrial Electronics, University of Southern Denmark, Denmark.
Jana Neitsch: Mads Clausen Institute, SDU Electrical Engineering, Centre for Industrial Electronics, University of Southern Denmark, Denmark.
Kerstin Fischer: Dept. of Design and Communication, University of Southern Denmark, Denmark.
Jan Michalsky: Chair of Technology Management, Friedrich-Alexander-University Nuremberg, Germany
Stephanie Berger: Dept. of General Linguistics, Institute of Scandinavian Languages, Frisian, and General Linguistics, Kiel University, Germany.
Back | Top |
Call for Papers
The ASRU Workshop is a flagship event of the IEEE Speech and Language Processing Technical Committee. The workshop is held every two years and has a tradition of bringing together researchers from academia and industry in an intimate and collegial setting to discuss problems of common interest in automatic speech recognition and understanding. Topics of interest include, but are not limited to, the following:
The workshop will feature invited talks/keynotes, regular papers and special sessions. All papers will be presented as posters. A full social program will provide ample opportunities for discussion, including a welcome reception, banquet, lunches, etc.
The paper submission portal will be available by 8 May 2019. The ASRU 2019 paper submission and review process is being conducted in a manner similar to previous ASRU workshops. The paper submission kit is available at http://asru2019.org.
Back | Top |
Dialog System Technology Challenge 7 (DSTC7)
Call for Participation: Data distribution has been started
Website: http://workshop.colips.org/dstc7/index.html
========================================
Background
-----------------
The DSTC shared tasks have provided common testbeds for the dialog
research community since 2013.
Since its sixth edition, it has been rebranded as the 'Dialog System Technology Challenge' to cover a wider variety of dialog-related problems.
For this year's challenge, we opened the call for track proposals and selected the following three parallel tracks through peer review:
- Sentence Selection Track
- Sentence Generation Track
- Audio Visual Scene-aware dialog (AVSD) Track
Participation is welcomed from any research team (academic, corporate,
non-profit, government).
Important Dates
------------------------
- Jun 1, 2018: Training data is released
- Sep 10, 2018: Test data is released
- Sep 24, 2018: Entry submission deadline
- Oct or Nov 2018: Paper submission deadline
- Spring 2019: DSTC7 special session or workshop (venue: TBD)
DSTC7 Organizing Committee
--------------------------------------------
- Koichiro Yoshino - Nara Institute of Science and Technology (NAIST), Japan
- Chiori Hori - Mitsubishi Electric Research Laboratories (MERL), USA
- Julien Perez - Naver Labs Europe, France
- Luis Fernando D'Haro - Institute for Infocomm Research (I2R), Singapore
DSTC7 Track Organizers
-------------------------------------
Sentence Selection Track:
- Lazaros Polymenakos - IBM Research, USA
- Chulaka Gunasekara - IBM Research, USA
- Walter S. Lasecki - University of Michigan, USA
- Jonathan Kummerfeld - University of Michigan, USA
Sentence Generation Track:
- Michel Galley - Microsoft Research AI&R, USA
- Chris Brockett - Microsoft Research AI&R, USA
- Jianfeng Gao - Microsoft Research AI&R, USA
- Bill Dolan - Microsoft Research AI&R, USA
Audio Visual Scene-aware dialog (AVSD) Track:
- Chiori Hori - Mitsubishi Electric Research Laboratories (MERL), USA
- Tim K. Marks - Mitsubishi Electric Research Laboratories (MERL), USA
- Devi Parikh - Georgia Tech, USA
- Dhruv Batra - Georgia Tech, USA
DSTC Steering Committee
---------------------------------------
- Jason Williams - Microsoft Research (MSR), USA
- Rafael E. Banchs - Institute for Infocomm Research (I2R), Singapore
- Seokhwan Kim - Adobe Research, USA
- Matthew Henderson - PolyAI, Singapore
- Verena Rieser - Heriot-Watt University, UK
Contact Information
---------------------------------------
Join the DSTC mailing list to get the latest updates about DSTC7:
- To join the mailing list: send an email to
listserv@lists.research.microsoft.com and put 'subscribe DSTC' in the
body of the message (without the quotes).
- To post a message: send your message to dstc@lists.research.microsoft.com.
For specific enquiries about DSTC7:
- Please feel free to contact any of the Organizing Committee members
directly.
Back | Top |
LREC 2020, 12th Conference on Language Resources and Evaluation -
Palais du Pharo, Marseille, France
11-16 May 2020
Main Conference: 13-14-15 May 2020
Workshops and Tutorials: 11-12 & 16 May 2020
Conference web site: https://lrec2020.lrec-conf.org/
Twitter: @LREC2020
FIRST CALL FOR PAPERS
The European Language Resources Association (ELRA) is glad to announce the 12th edition of LREC, organised with the support of national and international organisations among which AFCP, AILC, ATALA, CLARIN, ILCB, LDC, ...
CONFERENCE AIMS
LREC is the major event on Language Resources (LRs) and Evaluation for Human Language Technologies (HLT). LREC aims to provide an overview of the state-of-the-art, explore new R&D directions and emerging trends, exchange information regarding LRs and their applications, evaluation methodologies and tools, on-going and planned activities, industrial uses and needs, requirements coming from e-science and e-society, with respect both to policy issues as well as to scientific/technological and organisational ones.
LREC provides a unique forum for researchers, industrials and funding agencies from across a wide spectrum of areas to discuss issues and opportunities, find new synergies and promote initiatives for international cooperation, in support of investigations in language sciences, progress in language technologies (LT) and development of corresponding products, services and applications, and standards.
CONFERENCE TOPICS
Issues in the design, construction and use of LRs: text, speech, sign, gesture, image, in single or multimodal/multimedia data
Exploitation of LRs in systems and applications
LRs in the age of deep neural networks
Issues in LT evaluation
General issues regarding LRs & Evaluation
LREC 2020 HOT TOPICS
Less Resourced and Endangered Languages
Special attention will be devoted to less resourced and endangered languages: LREC 2020 is expected to make room for activities carried out to support indigenous languages, building on the United Nations/UNESCO International Year of Indigenous Languages celebrated in 2019.
Language and the Brain
Studying the neural basis of language helps in understanding both language processing and the brain mechanisms. LREC2020 will encourage all submissions addressing language and the brain. Among possible subtopics, submissions could focus on new datasets and resources (neuroimaging, controlled corpora, lexicons, etc.), methods aiming at new multimodal experimentations (e.g. EEG in virtual reality), language processing applications (e.g. brain decoding, brain-computer interfaces), etc.
Machine/Deep Learning
The availability of LRs is a key element in the development of high-quality Human Language Technologies based on AI/Machine Learning approaches, and LREC is the best place to get access to such data, in many languages and for many domains. In addition to submissions addressing ML issues based on large quantities of data, those applied to languages for which only small, noisy or sparse data exist are also most welcome.
DESCRIBE AND SHARE YOUR LRs!
In addition to describing your LRs in the LRE Map (now a normal step in the submission procedure of many conferences), LREC recognises the importance of sharing resources and making them available to the community.
When submitting a paper, you will be offered the possibility to share your LRs (data, tools, web-services, etc.), uploading them in a special LREC repository set up by ELRA. Your LRs will be made available to all LREC participants before the conference, to be re-used, compared, analysed. This effort of sharing LRs, linked to the LRE Map for their description, contributes to creating a common repository where everyone can deposit and share data.
PROGRAMME
The Scientific Programme will include invited talks, oral presentations, poster and demo presentations, and panels, in addition to a keynote address by the winner of the Antonio Zampolli Prize.
We will also organise an Industrial Track and a Reproducibility Track: for these there will be separate Calls.
SUBMISSIONS AND DATES
Submission of oral and poster (or poster+demo) papers: 25 November 2019
LREC 2020 asks for full papers of 4 to 8 pages (plus more pages for references if needed), which must strictly follow the LREC stylesheet that will be available on the conference website. Papers must be submitted through the LREC 2020 submission platform (which uses START from Softconf) and will be peer-reviewed.
Submission of proposals for workshops, tutorials and panels: 24 October 2019
Proposals should be submitted via an online form on the LREC website and will be reviewed by the Programme Committee.
PROCEEDINGS
The Proceedings will include both oral and poster papers, in the same format.
There is no difference in quality between oral and poster presentations. Only the appropriateness of the type of communication (more or less interactive) to the content of the paper will be considered.
LREC 2010, LREC 2012 and LREC 2014 Proceedings are included in the Thomson Reuters Conference Proceedings Citation Index. The other editions are being processed.
LREC Proceedings are indexed in Scopus (Elsevier).
Substantially extended versions of papers selected by reviewers as the most appropriate will be considered for publication in a special issue of the Language Resources and Evaluation Journal published by Springer (a SCI-indexed journal).
CONFERENCE PROGRAMME COMMITTEE
Nicoletta Calzolari - CNR, Istituto di Linguistica Computazionale 'Antonio Zampolli', Pisa - Italy (Conference chair)
Frédéric Béchet - LIS-CNRS, Aix-Marseille University, Marseille - France
Philippe Blache - CNRS & Aix-Marseille University, Marseille - France
Christopher Cieri - Linguistic Data Consortium, Philadelphia - USA
Khalid Choukri - ELRA, Paris - France
Thierry Declerck - DFKI GmbH, Saarbrücken - Germany
Hitoshi Isahara - Toyohashi University of Technology, Toyohashi - Japan
Bente Maegaard - Centre for Language Technology, University of Copenhagen, Copenhagen - Denmark
Joseph Mariani - LIMSI-CNRS, Orsay - France
Asuncion Moreno - Universitat Politècnica de Catalunya, Barcelona - Spain
Jan Odijk - UIL-OTS, Utrecht - The Netherlands
Stelios Piperidis - Athena Research Center/ILSP, Athens - Greece
CONFERENCE EDITORIAL COMMITTEE
Sara Goggi - CNR, Istituto di Linguistica Computazionale 'Antonio Zampolli', Pisa - Italy
Hélène Mazo - ELDA/ELRA, Paris - France
Back | Top |
FIRST CALL FOR PAPERS
REPROLANG 2020
Shared Task on the Reproduction of Research Results in Science and Technology of Language
(part of LREC 2020 conference)
Marseille, France
May 13-15, 2020
http://wordpress.let.vupr.nl/lrec-reproduction
We are very pleased to announce REPROLANG 2020, the Shared Task on the Reproduction of
Research Results in Science and Technology of Language, organized by ELRA - European
Language Resources Association with the technical support of CLARIN - European Research
Infrastructure for Language Resources and Technology, as part of the LREC 2020 conference.
BACKGROUND
Scientific knowledge is grounded on falsifiable predictions, and thus its credibility and raison d'être rely on the possibility of repeating experiments and obtaining results similar to those originally reported. In many young scientific areas, including ours, acknowledgement and promotion of the reproduction of research results very much need to be increased.
For this reason, a special track on reproducibility is included in the LREC 2020 regular conference programme (side by side with sessions on other topics) for papers on the reproduction of research results, and the present specific community-wide shared task is launched to elicit and motivate the spread of scientific work on reproduction. This initiative builds on the previous pioneering LREC workshops on reproducibility, 4REAL 2016 and 4REAL 2018.
SHARED TASK
The shared task is of a new type: it is partly similar to the usual competitive shared tasks, in the sense that all participants share a common goal, but it is partly different from previous shared tasks, in the sense that its primary focus is on seeking support and confirmation of previous results rather than on surpassing them. Thus, instead of a competitive shared task, with each participant striving for an individual top system that scores as far as possible above a rough baseline, this will be a cooperative shared task, with participants striving for systems that reproduce as closely as possible an original complex research experiment, thereby reinforcing the reliability of its results by means of their convergent outcomes. At the same time, as with competitive shared tasks, the process of participating in the collaborative shared task offers excellent ground for igniting new ideas for improvement and new advances beyond the reproduced results.
We invite researchers to reproduce the results of a selected set of articles, which have been offered by the respective authors, with their consent, to be used for this shared task. Papers submitted for this task are expected to report on reproduction findings, to document how the results of the original paper were reproduced, to discuss reproducibility challenges, to report on the time, space or data requirements found for training and testing, to ponder lessons learned, to elaborate on recommendations for best practices, etc.
Submissions that, in addition to the reproduction exercise, also report on results of the replication of the selected tasks with other languages, domains, data sets, models, methods, algorithms, downstream tasks, etc. are also encouraged. These should also permit gaining insight into the robustness of the replicated approaches, their learning curves and potential for incremental performance, their capacity for generalization, their transferability across experimental circumstances and into possible real-life usage scenarios, their suitability to support further progress, etc.
PUBLICATION
LREC conferences have one of the top h5-index scores of research impact among the world-class venues for research on Human Language Technology.
Accepted papers for the shared task will be published in the Proceedings of the LREC 2020
main conference. LREC Proceedings are freely available from ELRA and ACL Anthology. They
are indexed in Scopus (Elsevier) and in DBLP. LREC 2010, LREC 2012 and LREC 2014
Proceedings are included in the Thomson Reuters Conference Proceedings Citation Index
(the other editions are being processed).
Substantially extended versions of papers selected by reviewers as the most appropriate
will be considered for publication in special issues of the Language Resources and
Evaluation Journal published by Springer (a SCI-indexed journal).
IMPORTANT DATES
November 25, 2019: deadline for paper submission (aligned with LREC 2020)
November 27: deadline for projects in gitlab.com to go public
February 14, 2020: notification of acceptance
May 11-16: LREC conference takes place
SELECTED TASKS
The Selection Committee has selected a broad range of papers and tasks.
Chapter A: Lexical processing
Task A.1: Cross-lingual word embeddings
Artetxe, Mikel, Gorka Labaka, and Eneko Agirre. 2018. 'A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings'. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018), pp. 789-798.
http://aclweb.org/anthology/P18-1073
Major reproduction comparables: Accuracy scores (tables 1 to 4).
Task A.2: Named entity embeddings
Newman-Griffis, Denis, Albert M. Lai, and Eric Fosler-Lussier. 2018. 'Jointly Embedding Entities and Text with Distant Supervision'. In Proceedings of The Third Workshop on Representation Learning for NLP, pp. 195-206.
http://aclweb.org/anthology/W18-3026
Major reproduction comparables: Spearman's ρ scores for semantic similarity predictions (tables 3 and 4), and accuracy scores (table 6).
Chapter B: Sentence processing
Task B.1: POS tagging
Bohnet, Bernd, Ryan McDonald, Gonçalo Simões, Daniel Andor, Emily Pitler, and Joshua Maynez. 2018. 'Morphosyntactic Tagging with a Meta-BiLSTM Model over Context Sensitive Token Encodings'. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018), pp. 2642-2652.
http://aclweb.org/anthology/P18-1246
Major reproduction comparables: f-score values (tables 2 to 8).
Task B.2: Sentence semantic relatedness
Gupta, Amulya, and Zhu Zhang. 2018. 'To Attend or not to Attend: A Case Study on Syntactic Structures for Semantic Relatedness'. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018), pp. 2116-2125.
http://aclweb.org/anthology/P18-1197
Major reproduction comparables: Pearson's r and Spearman's ρ scores for semantic relatedness (table 1), and f-score values for paraphrase detection (table 2).
Chapter C: Text processing
Task C.1: Relation extraction and classification
Rotsztejn, Jonathan, Nora Hollenstein, and Ce Zhang. 2018. 'ETH-DS3Lab at SemEval-2018 Task 7: Effectively Combining Recurrent and Convolutional Neural Networks for Relation Classification and Extraction'. In Proceedings of the 12th International Workshop on Semantic Evaluation (SemEval 2018), pp. 689-696.
http://aclweb.org/anthology/S18-1112
Major reproduction comparables: precision, recall and f-score values (tables 3 and 4).
Task C.2: Privacy preserving representation
Li, Yitong, Timothy Baldwin, and Trevor Cohn. 2018. 'Towards Robust and Privacy-preserving Text Representations'. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018), pp. 25-30.
http://aclweb.org/anthology/P18-2005
Major reproduction comparables: POS accuracy scores (tables 1 and 2), and sentiment analysis f-score values (table 3).
Task C.3: Language modelling
Howard, Jeremy, and Sebastian Ruder. 2018. 'Universal Language Model Fine-tuning for Text Classification'. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018), pp. 328-339.
http://aclweb.org/anthology/P18-1031
Major reproduction comparables: Error rate (%) scores in sentiment analysis and question classification tasks (tables 2 and 3).
Chapter D: Applications
Task D.1: Text simplification
Nisioi, Sergiu, Sanja Stajner, Simone Paolo Ponzetto, and Liviu P. Dinu. 2017. 'Exploring Neural Text Simplification Models'. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (ACL 2017), pp. 85-91.
http://aclweb.org/anthology/P/P17/P17-2014.pdf
Major reproduction comparables: Averaged human evaluation scores, by 3 evaluators, on 1-to-5 and -2-to-+2 scales (table 2).
Task D.2: Language proficiency scoring
Vajjala, Sowmya, and Taraka Rama. 2018. 'Experiments with Universal CEFR classifications'. In Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, pp. 147-153.
http://aclweb.org/anthology/W18-0515
Major reproduction comparables: f-score values (tables 2, 3 and 4).
Task D.3: Neural machine translation
Vanmassenhove, Eva, and Andy Way. 2018. 'SuperNMT: Neural Machine Translation with Semantic Supersenses and Syntactic Supertags'. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL 2018), pp. 67-73.
http://aclweb.org/anthology/P18-3010
Major reproduction comparables: BLEU scores (tables 1 and 2; plots in figures 2, 3 and 4).
Chapter E: Language resources
Task E.1: Parallel corpus construction
Brunato, Dominique, Andrea Cimino, Felice Dell'Orletta, and Giulia Venturi. 2016. 'PaCCSS-IT: A Parallel Corpus of Complex-Simple Sentences for Automatic Text Simplification'. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP 2016), pp. 351-361.
https://aclweb.org/anthology/D16-1034
Major reproduction comparables: data set.
Participants are expected to obtain the data and tools for the reproduction from the
information provided in the paper. Using the description of the experiment is part of the
reproduction exercise.
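Several of the tasks above ask reproducers to set correlation, accuracy and f-score figures side by side with the tables in the original papers. Purely as an illustration, and not as any official REPROLANG tooling, a reproduction report might compute such comparables along the following lines in Python; the file name, column layout and the helper load_scores are hypothetical.

# Illustrative sketch only (hypothetical file names and helpers, not part of
# the REPROLANG setup): computing Pearson's r, Spearman's rho and accuracy
# comparables for a reproduction report.
import csv
from scipy.stats import pearsonr, spearmanr

def load_scores(path):
    # Read (gold, predicted) score pairs from a two-column CSV file.
    gold, pred = [], []
    with open(path, newline="") as f:
        for row in csv.reader(f):
            gold.append(float(row[0]))
            pred.append(float(row[1]))
    return gold, pred

gold, pred = load_scores("relatedness_scores.csv")  # hypothetical file
r, _ = pearsonr(gold, pred)
rho, _ = spearmanr(gold, pred)
print("Pearson's r = %.4f, Spearman's rho = %.4f" % (r, rho))

# Accuracy for a classification comparable (e.g. tagging or word translation):
gold_labels = ["NOUN", "VERB", "DET"]  # hypothetical toy labels
pred_labels = ["NOUN", "VERB", "NOUN"]
accuracy = sum(g == p for g, p in zip(gold_labels, pred_labels)) / len(gold_labels)
print("Accuracy = %.4f" % accuracy)

Figures computed in this way can then be compared directly with the corresponding table entries of the original paper in the reproduction report.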
SUBMISSION
The START platform of LREC 2020 will be used for the submission of the following required
elements: A paper describing the reproduction effort, and a link to the software and data
used to obtain the results reported in the paper (more details below). The submitted
materials and results will be checked by a CLARIN panel. Papers will be peer-reviewed.
PAPER PREPARATION
REPROLANG 2020 invites the submission of full papers from 4 pages to 8 pages (plus more
pages for references if needed). These submissions must strictly follow the LREC 2020
conference stylesheet which will be available on the conference website.
MATERIALS PREPARATION
For the submission to be complete and checked by a CLARIN panel, the software used to obtain the results reported in the paper must be made available as a Docker container through a project on gitlab.com. Detailed instructions are available at: https://gitlab.com/CLARIN-ERIC/reprolang/
For technical support, the CLARIN team can be contacted at reprolang-tc@clarin.eu, or an issue can be created under https://gitlab.com/CLARIN-ERIC/reprolang/issues.
Submissions are done via the START conference management system used by LREC 2020 and include the following elements:
- url address of your gitlab.com project
- url of the tar.gz with the datasets
- the md5 checksum of the above tar.gz (a sketch of how this checksum can be computed is given below)
- .pdf with the paper, which must include the above url of your gitlab.com project, and the above commit hash and tag
The project in gitlab.com should be made public within 2 days after the submission
deadline.
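As a convenience, the md5 checksum requested in the list above can be produced with command-line tools such as md5sum, or with a few lines of standard-library Python as in the following sketch; the archive name datasets.tar.gz is hypothetical.

# Minimal sketch (not an official REPROLANG tool): compute the md5 checksum
# of the datasets archive to be declared in the submission form.
import hashlib

def md5_of_file(path, chunk_size=1 << 20):
    # Return the hexadecimal md5 digest of a file, read in chunks.
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

print(md5_of_file("datasets.tar.gz"))  # hypothetical archive name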
PRESENTATION
Papers accepted for publication will be presented in a specific session of the LREC main conference. There is no difference in quality between oral and poster presentations. Only the appropriateness of the type of communication (more or less interactive) to the content of the paper will be considered. The format of the presentations will be decided by the Program Committee. The proceedings will include both oral and poster papers in the same format.
REGISTRATION
For a selected paper to be included in the programme and to be published in the
proceedings, at least one of its authors must register for the LREC 2020 conference by
the early bird registration deadline. A single registration only covers one paper,
following the general LREC policy on registration. Registration service is to be found at
the LREC 2020 website.
CONTACTS
About the shared task:
Piek Vossen
p.t.j.m.vossen@vu.nl
About the preparation and submission of materials:
reprolang-tc@clarin.eu
REPROLANG 2020 website: http://wordpress.let.vupr.nl/lrec-reproduction
ORGANIZATION
Steering Committee
António Branco, University of Lisbon (chair of Steering Committee)
Nicoletta Calzolari, ILC, Pisa (co-chair of Steering Committee)
Gertjan van Noord, University of Groningen (chair of Task Selection Committee)
Piek Vossen, VU University Amsterdam (chair of Program Committee)
Task Selection Committee
Gertjan van Noord, University of Groningen (chair)
Tim Baldwin, University of Melbourne
António Branco, University of Lisbon
Nicoletta Calzolari, ILC, Pisa
Çağrı Çöltekin, University of Tuebingen
Nancy Ide, Vassar College, New York
Malvina Nissim, University of Groningen
Stephan Oepen, University of Oslo
Barbara Plank, University of Copenhagen
Piek Vossen, VU University Amsterdam
Dan Zeman, Charles University, Prague
Program Committee
(members whose invitations are still awaiting an answer are marked with [!])
Piek Vossen, VU University Amsterdam (chair)
[!] Gilles Adda, LIMSI-CNRS, Paris
[!] Eneko Agirre, University of the Basque Country
Francis Bond, Nanyang Technological University, Singapore
António Branco, University of Lisbon
Nicoletta Calzolari, ILC, Pisa
Kevin Cohen, University of Colorado Boulder
[!] Thierry Declerck, DFKI Saarbruecken
[!] John McCrae, Galway University
Nancy Ide, Vassar College, New York
[!] Antske Fokkens, VU University Amsterdam
Karën Fort, University of Paris-Sorbonne
[!] Cyril Grouin, LIMSI-CNRS, Paris
Mark Liberman, University of Pennsylvania
[!] Margot Mieskes
[!] Aurélie Névéol, LIMSI-CNRS, Paris
Gertjan van Noord, University of Groningen
Stephan Oepen, University of Oslo
[!] Ted Pedersen, University of Minnesota
Senja Pollak, Jozef Stefan Institute, Ljubljana
[!] Paul Rayson, Lancaster University
Martijn Wieling, University of Groningen
Technical Committee
reprolang-tc@clarin.eu
Dieter Van Uytvanck, CLARIN (chair)
André Moreira, CLARIN
Twan Goosen, CLARIN
João Ricardo Silva, CLARIN and University of Lisbon
Luís Gomes, CLARIN and University of Lisbon
Willem Elbers, CLARIN
Back | Top |
Dear SProSIG Members,
We are pleased to announce that the 2020 Speech Prosody conference
will be held in Tokyo, tentatively in late May or early June.
Also, there are two upcoming special sessions relating to prosody, at
ICPhS 2019, both with a submission deadline of December 4:
'Interacting Channels of Speech - Tune and Text'
https://timo-roettger.weebly.com/icphs---tune-and-text.html and
'Modeling Meaning-Bearing Configurations of Prosodic Features'
http://www.cs.utep.edu/nigel/pconstructions/icphs-configs.html .
We'd also like to take this opportunity to introduce ourselves, the
incoming officers of SProSIG for 2018-2020: namely Martine Grice,
Plinio Barbosa, Hongwei Ding, Aoju Chen and myself. We look forward
to serving the membership and are eager to hear your ideas and
suggestions.
Finally, the SProSIG mailing list is now hosted at the University of
Texas at El Paso. Subscription/unsubscription instructions are below.
Mailings will continue to be infrequent and focus on conference
announcements and the like. If you have such information to share,
please contact any of us.
Hongwei Ding, Aoju Chen, Martine Grice, Plinio Barbosa, Nigel Ward
Speech Prosody Special Interest Group www.sprosig.org
This mail was sent through the SProSIG mailing list, which is for
announcements of interest to the speech prosody research community.
Subscribe/unsubscribe at http://listserv.utep.edu/mailman/listinfo/sprosig
Nigel Ward, Professor of Computer Science, University of Texas at El Paso
CCSB 3.0408, +1-915-747-6827
nigel@utep.edu http://www.cs.utep.edu/nigel/
Back | Top |