(2018-06-13) 9th Speech Prosody Conference, Adam Mickiewicz University, Poznan, Poland, NEW DEADLINE
The new deadline for manuscript submission to Speech Prosody 2018 is January 8, 2018.
The 9th Speech Prosody Conference will be held from 13 to 16 June 2018 at Collegium Iuridicum Novum, Adam Mickiewicz University in Poznan, Poland.
The conference theme will be 'Challenges and new prospects on prosody: research and technology', but we invite papers addressing any aspect of the science and technology of prosody.
Speech Prosody, the biennial meeting of the Speech Prosody Special Interest Group (SProSIG) of the International Speech Communication Association (ISCA), is the only recurring international conference focused on prosody as an organizing principle for the social, psychological, linguistic, clinical-phonetic and technological aspects of spoken language. Past conferences have been attended by 300-400 researchers representing a range of disciplines, including linguistics, acoustics, speech technology, cognitive science, neuroscience, speech therapy, audiology and hearing science, language teaching, computer science, electrical engineering and forensic science.
Important Deadlines
15 September 2017 - Submission of workshop & special session proposals (by e-mail)
01 October 2017 - Online paper submission opens
15 October 2017 - Announcement of special sessions
08 January 2018 - Full paper submission deadline
12 February 2018 - Notification of paper acceptance
15 March 2018 - Early bird registration deadline
15 March 2018 - Deadline to upload revised accepted papers and abstracts
SCIENTIFIC AREA TOPICS
- Phonology and phonetics of prosody
- Rhythm and timing
- Tone and intonation
- Cognitive processing and modelling of prosody
- Interaction between segmental and suprasegmental features
- Syntax, semantics, and pragmatics
- Prosody in language and music
- Acquisition of first, second and third language prosody
- Prosody in Computer Language Learning systems
- Speaking style and personality
- Speaking style and communication settings
- Prosody in speech recognition and understanding
- Prosody in speaker characterization and recognition
- Identification & description of prosody for multilingual dialogue systems
- Measurements of prosodic parameters
- Prosody in Audiology and Phoniatrics
- Forensic voice and language investigation
- Prosody of sign language
PAPER FORMAT and SUBMISSION
Papers for the Speech Prosody 2018 proceedings should be up to 4 pages of text, plus one page (maximum) for references only. Any paper with text on the 5th page other than references will be rejected. The papers must be submitted via the Easy Chair submission site.
Interspeech 2018 September 2-6, 2018 Hyderabad International Convention Centre (HICC) Hyderabad, India http://interspeech2018.org/
Tutorials at Interspeech 2018
We are happy to announce that the tutorials have been finalized. The Tutorials Committee has worked hard to put together an exciting set of tutorials on cutting-edge topics. The tutorials will be held at HICC on September 2, 2018 (both forenoon and afternoon sessions). The list of tutorials is given below.
Deep Learning Based Speech Separation by DeLiang Wang, The Ohio State University
End-To-End Models for Automatic Speech Recognition by Rohit Prabhavalkar, Google Inc., USA and Tara Sainath, Google Inc., USA
Spoofing Attacks in Automatic Speaker Verification: Analysis and Countermeasures by Haizhou Li (National University of Singapore, Singapore), Hemant A. Patil (Dhirubhai Ambani Institute of Information and Communication Technology, India), Nicholas Evans (EURECOM, France)
Spoken Dialog Technology for Education Domain Applications by Vikram Ramanarayanan, Keelan Evanini, David Suendermann-Oeft (Educational Testing Service R&D, San Francisco, USA)
Information Theory of Deep Learning: What do the Layers of Deep Neural Networks Represent? by Naftali Tishby, Hebrew University of Jerusalem
Articulatory Representations: Measurement, Estimation, and Application to State-of-the-Art Automatic Speech Recognition by Carol Espy-Wilson (University of Maryland, USA), Mark Tiede (Haskins Lab, USA), Hosung Nam (Korea University, Seoul), Vikramjit Mitra (Apple Inc., USA), Ganesh Sivaraman (Pindrop, Atlanta, USA)
Generating Adversarial Examples for Speech and Speaker Recognition and Other Systems by Bhiksha Raj (Language Technologies Institute, Carnegie-Mellon University, USA) and Joseph Keshet (Bar-Ilan University, Israel)
Multimodal Speech and Audio Processing in Audio-Visual Human-Robot Interaction by Petros Maragos (School of E.C.E., National Technical University of Athens, Athens, Greece), Athanasia Zlatintsi (Athena Research Center, Robot Perception and Interaction Unit, Greece)
For more details, visit http://interspeech2018.org/tutorial/
We look forward to your participation in the tutorials.
(2018-09-02) Call for Papers and Proposals for Show & Tell (Interspeech 2018), Hyderabad, India
Call for Papers and Proposals for Show & Tell
Interspeech is the world's largest and most comprehensive conference on the science and technology of spoken language processing. It will be held in India for the first time. Interspeech conferences emphasize interdisciplinary approaches addressing all aspects of speech science and technology, ranging from basic theories to advanced applications. Contributions to all areas of speech science and technology are welcome. In addition to regular oral and poster sessions, Interspeech 2018 will feature plenary talks by internationally renowned experts, tutorials, special sessions, show & tell sessions, and exhibits. The tutorials will be held at the Hyderabad International Convention Centre. A number of satellite events will also take place around Interspeech 2018.
Original papers are solicited in, but not limited to, the following areas: 1. Speech Perception, Production and Acquisition
2. Phonetics, Phonology, and Prosody
3. Analysis of Paralinguistics in Speech and Language
4. Speaker and Language Identification
5. Analysis of Speech and Audio Signals
6. Speech Coding and Enhancement
7. Speech Synthesis and Spoken Language Generation
8. Speech Recognition – Signal Processing, Acoustic Modeling, Robustness, and Adaptation
9. Speech Recognition – Architecture, Search, and Linguistic Components
10. Speech Recognition – Technologies and Systems for New Applications
11. Spoken Dialog Systems and Analysis of Conversation
12. Spoken Language Processing – Translation, Information Retrieval, Summarization, Resources, and Evaluation
A complete list of the scientific area topics including special sessions is available at http://interspeech2018.org/areas-and-topics/
Paper Submission
Papers intended for Interspeech 2018 should be up to 4 pages of text. An optional fifth page could be used for references only. Paper submissions must conform to the format defined in the paper preparation guidelines and as detailed in the author's kit on the conference web page. Please be aware that Interspeech 2018 will use new templates and submissions will be accepted only in the new format. Submissions may also be accompanied by additional files such as multimedia files, to be included on the proceedings' USB drive. Authors must declare that their contributions are original and have not been submitted elsewhere for publication. Papers must be submitted via the online paper submission system. The working language of the conference is English, and papers must be written in English.
Important Dates
Submission portal opens: February 15, 2018
Abstract submission deadline: March 16, 2018
Final paper submission deadline: March 23, 2018
Acceptance/rejection notification: June 3, 2018
Camera-ready paper due: June 17, 2018
Show and Tell proposals due: March 30, 2018
Registration opens: June 10, 2018
(2018-09-02) Cfp Interspeech 2018 Show and Tell, Hyderabad, Telangana, India (updated)
Dear colleague:
Interspeech 2018 will be held in India during September 2-6, 2018, at the Hyderabad International Convention Centre, Hyderabad, India. We invite you to submit your research papers as well as Show & Tell proposals. The call is attached below. For more details, please visit: http://www.interspeech2018.org.
We look forward to your active participation in Interspeech 2018.
Sincerely, Team Interspeech 2018
== Interspeech 2018 September 2-6, 2018, Hyderabad International Convention Centre Hyderabad, Telangana, India http://www.interspeech2018.org
Interspeech 2018: Show & Tell demonstrations
Submission deadline: April 13, 2018
Acceptance notifications sent: June 1, 2018
Camera-ready paper due: June 17, 2018
Interspeech is the world's largest and most comprehensive conference on the science and technology of spoken language processing. Show & Tell is a special event organized during the conference. Participants are given the opportunity to demonstrate their most recent progress or developments and to interact with the conference attendees in an informal way, through a demo, mock-up, or any adapted format of their own choice. These contributions must highlight the innovative side of the concept and may relate to a regular paper.
Each accepted proposal paper will be allocated two pages in the conference proceedings. Show & Tell demonstrations will be presented in a dedicated time slot; each presentation space will include one poster board, one table, as well as wireless internet connection and a power outlet. Demonstrations should be based on innovations and fundamental research in areas of speech production, perception, communication, or speech and language technology and systems. Authors are encouraged to submit proposals related to the theme of the main conference: Speech Research for Emerging Markets in Multilingual Societies. Submissions will be peer-reviewed. Reviewers will judge the originality, significance, quality, and clarity of the proposed demonstration. At least one author of each accepted submission must register for and attend the conference, and demonstrate the system during the Show & Tell sessions.
Paper Submission And Preparation Guidelines
Show & Tell papers should be up to 2 pages (including references). The format should conform to that defined in the paper preparation guidelines and as detailed in the INTERSPEECH 2018 Author's Kit.
Submit your papers to Show & Tell using the START V2 system at https://www.softconf.com/i/IS18-ShowAndTell/
(2018-10-22) 1st International Workshop on Multimedia Content Analysis in Sports, Seoul, Korea
First International Workshop on Multimedia Content Analysis in Sports @ ACM Multimedia, October 22-26, 2018, Seoul, Korea
We'd like to invite you to submit your paper proposals for the 1st International Workshop on Multimedia Content Analysis in Sports to be held in Seoul, Korea together with ACM Multimedia 2018. The ambition of this workshop is to bring together researchers and practitioners from different disciplines to share ideas on current multimedia/multimodal content analysis research in sports. We welcome multimodal-based research contributions as well as best-practice contributions focusing on the following (and similar, but not limited to) topics:
– annotation and indexing
– athlete and object tracking
– activity recognition, classification and evaluation
– event detection and indexing
– performance assessment
– injury analysis and prevention
– data driven analysis in sports
– graphical augmentation and visualization in sports
– automated training assistance
– camera pose and motion tracking
– brave new ideas / extraordinary multimodal solutions
Please refer to the workshop website for further information.
Interspeech 2018 New Website; Satellite Events and Workshops
Interspeech 2018
September 2-6, 2018
Hyderabad, India
http://interspeech2018.org/
Interspeech 2018 New Website; Satellite Events and Workshops
Dear colleague:
We would like to draw your attention to the satellite events and workshops that will take place around Interspeech 2018. Kindly check the following website for details: http://interspeech2018.org/program-satellite-events-and-workshops.html
We invite you to submit papers, participate in these events and make them a grand success.
We also invite you to check out the new avatar of Interspeech 2018 webpages: http://interspeech2018.org/index.html
We would like to draw your attention to the following new items:
(2018-07-12) SIGDIAL 2018 CONFERENCE, Melbourne, Australia (Important date change)
2nd CALL FOR PAPERS (updated with submission link, important date change, special session links, and keynote speaker list)
SIGDIAL 2018 CONFERENCE July 12-14, 2018
http://www.sigdial.org/workshops/conference19/
The 19th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL 2018) will be held at RMIT University, Melbourne, Australia, July 12-14, 2018. SIGDIAL will be co-located with ACL 2018 which will be held July 15-20 at the Melbourne Convention and Exhibition Centre. The SIGDIAL venue provides a regular forum for the presentation of cutting edge research in discourse and dialogue to both academic and industry researchers. Continuing with a series of eighteen successful previous meetings, this conference spans the research interest areas of discourse and dialogue. The conference is sponsored by the SIGDIAL organization, which serves as the Special Interest Group in discourse and dialogue for both ACL and ISCA.
Keynote Speakers: Mari Ostendorf, Manfred Stede, Ingrid Zukerman
TOPICS OF INTEREST We welcome formal, corpus-based, implementation, experimental, or analytical work on discourse and dialogue including, but not restricted to, the following themes:
• Discourse Processing Rhetorical and coherence relations, discourse parsing and discourse connectives. Reference resolution. Event representation and causality in narrative. Argument mining. Quality and style in text. Cross-lingual discourse analysis. Discourse issues in applications such as machine translation, text summarization, essay grading, question answering, and information retrieval.
• Dialogue Systems Open domain, task oriented dialogue and chat systems. Knowledge graphs and dialogue. Dialogue state tracking and policy learning. Social and emotional intelligence. Dialogue issues in virtual reality and human-robot interaction. Entrainment, alignment and priming. Generation for dialogue. Style, voice, and personality. Spoken, multi-modal, embedded, situated, and text/web based dialogue systems, their components, evaluation and applications.
• Corpora, Tools and Methodology Corpus-based and experimental work on discourse and dialogue, including supporting topics such as annotation tools and schemes, crowdsourcing, evaluation methodology and corpora.
• Pragmatic and/or Semantic Modeling The pragmatics and/or semantics of discourse and dialogue (i.e. beyond a single sentence).
• Applications of Dialogue and Discourse Processing Technology
SPECIAL SESSIONS SIGDIAL 2018 will include two special sessions:
• Conversational Approaches to Information Search, Retrievals, and Presentation
• Physically Situated Dialogue
Please see the individual special session pages for additional information and submission details.
In order for papers submitted to special sessions to appear in the SIGDIAL conference proceedings, they must undergo the regular SIGDIAL review process.
SUBMISSIONS The program committee welcomes the submission of long papers, short papers and demo descriptions. Papers submitted as long papers may be accepted as long papers for oral presentation or long papers for poster presentation. Accepted short papers will be presented as posters.
• Long papers must be no longer than eight pages, including title, text, figures and tables. An unlimited number of pages are allowed for references. Two additional pages are allowed for example discourses or dialogues and algorithms.
• Short papers should be no longer than four pages including title, text, figures and tables. An unlimited number of pages are allowed for references. One additional page is allowed for example discourses or dialogues and algorithms.
• Demo descriptions should be no longer than four pages including title, text, examples, figures, tables and references. A separate one-page document should be provided to the program co-chairs for demo descriptions, specifying furniture and equipment needed for the demo.
Authors are encouraged to also submit additional accompanying materials such as corpora (or corpus examples), demo code, videos, sound files, etc.
Multiple Submissions: Authors of papers that have been or will be submitted to other meetings or publications must provide this information (see submission link). SIGDIAL 2018 cannot accept work for publication or presentation that will be (or has been) published elsewhere. Any questions regarding submissions can be sent to programchairs[at]sigdial.org.
Blind Review: Building on last year’s move to anonymous long and short paper submissions, SIGDIAL 2018 will follow the new ACL policies for preserving the integrity of double blind review (see author guidelines). Unlike long and short papers, demo descriptions will not be anonymous. Demo descriptions should include the authors’ names and affiliations, and self-references are allowed.
Submission Format
All long, short, and demonstration submissions must follow the two-column ACL 2018 format. Authors are expected to use the ACL LaTeX style template or Microsoft Word style template from the ACL 2018 conference. Submissions must conform to the official ACL 2018 style guidelines, which are contained in these templates. Submissions must be electronic, in PDF format.
Submission Link and Deadlines
You have to fill in the submission form in the START system and upload a pdf of your paper before the March 11 deadline. Updates of a final pdf file will be permitted until March 18. The title, authors, abstract and topics submitted on March 11 cannot be changed. https://www.softconf.com/i/sigdial2018/
IMPORTANT NOTE: ADOPTION OF ACL 2018 AUTHOR GUIDELINES
As noted above, SIGDIAL 2018 is adopting the new ACL guidelines for submission and citation for long and short papers. Long and short papers that do not conform to the following guidelines (taken from the first ACL 2018 CFP) will be rejected without review.
Preserving Double Blind Review
The following rules and guidelines are meant to protect the integrity of double-blind review and ensure that submissions are reviewed fairly. The rules make reference to the anonymity period, which runs from 1 month before the submission deadline up to the date when your paper is either accepted, rejected, or withdrawn.
• You may not make a non-anonymized version of your paper available online to the general community (for example, via a preprint server) during the anonymity period. By a version of a paper we understand another paper having essentially the same scientific content but possibly differing in minor details (including title and structure) and/or in length (e.g., an abstract is a version of the paper that it summarizes).
• If you have posted a non-anonymized version of your paper online before the start of the anonymity period, you may submit an anonymized version to the conference. The submitted version must not refer to the non-anonymized version, and you must inform the program chair(s) that a non-anonymized version exists. You may not update the non-anonymized version during the anonymity period, and we ask you not to advertise it on social media or take other actions that would further compromise double-blind reviewing during the anonymity period.
• Note that, while you are not prohibited from making a non-anonymized version available online before the start of the anonymity period, this does make double-blind reviewing more difficult to maintain, and we therefore encourage you to wait until the end of the anonymity period if possible.
Citation and Comparison
If you are aware of previous research that appears sound and is relevant to your work, you should cite it even if it has not been peer-reviewed, and certainly if it influenced your own work. However, refereed publications take priority over unpublished work reported in preprints. Specifically:
• You are expected to cite all refereed publications relevant to your submission, but you may be excused for not knowing about all unpublished work (especially work that has been recently posted and/or is not widely cited).
• In cases where a preprint has been superseded by a refereed publication, the refereed publication should be cited in addition to or instead of the preprint version.
Papers (whether refereed or not) appearing less than 3 months before the submission deadline are considered contemporaneous to your submission, and you are therefore not obliged to make detailed comparisons that require additional experimentation and/or in-depth analysis.
MENTORING
Submissions with innovative core ideas that may be in need of language (English) or organizational assistance will be flagged for 'mentoring' and accepted with a recommendation to revise with a mentor. An experienced mentor who has previously published in the SIGDIAL venue will then help the authors of these flagged papers prepare their submissions for publication.
BEST PAPER AWARDS
In order to recognize significant advancements in dialogue/discourse science and technology, SIGDIAL 2018 will include best paper awards. All papers at the conference are eligible for the best paper awards.
A selection committee consisting of prominent researchers in the fields of interest will select the recipients of the awards.
IMPORTANT DATES
Long, Short & Demonstration Paper Submission: 11 March 2018 (23:59, GMT-11)
Long, Short & Demonstration Final PDF Submission: 18 March 2018 (23:59, GMT-11)
Long, Short & Demonstration Paper Notification: 20 April 2018
Final Paper Submission: 13 May 2018 (23:59, GMT-11)
Conference: 12-14 July 2018
SIGDIAL 2018 ORGANIZING COMMITTEE
General Chair: Kazunori Komatani, Osaka University, Japan
Program Chairs: Diane Litman, University of Pittsburgh, USA; Kai Yu, Shanghai Jiao Tong University (SJTU), China
Local Chair: Lawrence Cavedon, RMIT University, Australia
Sponsorship Chair: Mikio Nakano, Honda Research Institute Japan, Japan
Mentoring Chair: Alex Papangelis, Toshiba Research, UK
SIGdial President: Jason Williams, Microsoft Research, USA
SIGdial Vice President: Kallirroi Georgila, Institute for Creative Technologies, University of Southern California, USA
SIGdial Secretary: Vikram Ramanarayanan, Educational Testing Service, USA
SIGdial Treasurer: Ethan Selfridge, Interactions Corp, USA
(2018-07-12) SIGDIAL 2018 Conference: Call for Special Sessions, Melbourne, Australia
SIGDIAL 2018 Conference: Call for Special Sessions
Special Session Submission Deadline: January 14, 2018
Special Session Notification: January 26, 2018
The SIGDIAL organizers welcome the submission of special session proposals. A SIGDIAL special session is the length of a regular session at the conference and may be organized as a poster session, a panel session, a poster session with panel discussion, or an oral presentation session. Special sessions may, at the discretion of the SIGDIAL organizers, be held as parallel sessions.
The papers submitted to special sessions are handled by the special session organizers, but for the submitted papers to appear in the SIGDIAL proceedings, they have to undergo the same review process as regular papers. Reviewers for special session papers will be drawn from the SIGDIAL program committee, taking into account the suggestions of the session organizers, and the program chairs will make the acceptance decisions. In other words, special session organizers decide what appears in the session, while the program chairs decide what appears in the proceedings and the rest of the conference program.
We welcome special session proposals on any topic of interest to the discourse and dialogue communities.
Submissions: Those wishing to organize a special session should prepare a two-page proposal containing: a summary of the topic of the special session; a list of organizers and sponsors; a list of people who may submit and participate in the session; and a requested format (poster/panel/oral session).
These proposals should be sent to conference[at]sigdial.org by the special session proposal deadline. Special session proposals will be reviewed jointly by the general and program co-chairs.
Links: Those wishing to propose a special session may want to look at some of the sessions organized at recent SIGDIAL meetings. http://www.sigdial.org/workshops/conference18/sessions.htm http://articulab.hcii.cs.cmu.edu/sigdial2016/ https://www.aclweb.org/portal/content/multiling-2015-multilingual-summarization-multiple-documents-online-fora-and-call-centre-con
SIGDIAL 2018 Organizing Committee
General Chair: Kazunori Komatani, Osaka University, Japan
Program Chairs: Diane Litman, University of Pittsburgh, USA Kai Yu, Shanghai Jiao Tong University (SJTU), China
Local Chair: Lawrence Cavedon, RMIT University, Australia
Sponsorship Chair: Mikio Nakano, Honda Research Institute Japan, Japan
Mentoring Chair: Alex Papangelis, Toshiba Research, UK
SIGdial President: Jason Williams, Microsoft Research, USA
SIGdial Vice President: Kallirroi Georgila, Institute for Creative Technologies, University of Southern California, USA
SIGdial Secretary: Vikram Ramanarayanan, Educational Testing Service, USA
SIGdial Treasurer: Ethan Selfridge, Interactions Corp, USA
The workshop will feature two keynote speakers to be announced soon.
* Registration:
Registration will open on June 10. There will be a flat all-inclusive fee of 30 GBP for students and 50 GBP for other attendees. The fee will include buffet lunch, coffee breaks, bus to/from the Interspeech area (subject to availability), and a food/drinks event in the evening of September 7.
Seats are likely to fill up fast. If you plan to attend we recommend registering as early as possible.
* Travel grants: We will provide 8 travel grants of 150 GBP each (including free registration). These grants are meant for students and young scientists (age < 35). Exceptionally, researchers in special situations, such as unemployment or coming from low-income countries, may also apply. To apply, please send the following documents to chimechallenge@gmail.com by June 4th:
- your CV,
- a proof of your student status,
- a cover letter stating whether you plan to enter the CHiME-5 challenge and/or submit a paper to the workshop.
We will notify successful applicants by June 8 and transfer the money after the workshop.
ABOUT THE WORKSHOP
CHiME 2018 will bring together researchers from the fields of computational hearing, speech enhancement, acoustic modelling and machine learning to discuss the robustness of speech processing in everyday environments.
As a focus for discussion, the workshop will host the CHiME-5 Speech Separation and Recognition Challenge. To find out more about the challenge, see http://spandh.dcs.shef.ac.uk/chime_challenge/.
PAPER SUBMISSION
Relevant research topics include (but are not limited to):
- training schemes: data augmentation, semi-supervised training,
- speaker localization and beamforming,
- single- or multi-microphone enhancement and separation,
- robust features and feature transforms,
- robust acoustic and language modeling,
- robust speech recognition,
- robust speaker and language recognition,
- robust paralinguistics,
- cross-environment or cross-dataset performance analysis,
- environmental background noise modelling.
Papers reporting evaluation results on the CHiME-5 dataset or on other datasets are both welcome.
It gives us great pleasure to announce the official launch of the CHiME-5 Challenge.
CHiME-5 considers the problem of distant-microphone conversational speech recognition in everyday home environments. Speech material was elicited using a dinner party scenario with efforts taken to capture data that is representative of natural conversational speech. Participants may use a single microphone array or multiple distributed arrays.
The Challenge website is now live and contains all the information and data that you will need for participation:
- a detailed description of the challenge scenario and recording conditions,
- real training and development data,
- full instructions for participation and submission.
Baseline software for array synchronization, speech enhancement, and state-of-the-art speech recognition will be provided on March 12.
If you have a question that isn't answered by the website, and you expect other participants to have the answer or to be interested in the answer, please post it on the forum. Otherwise, please email us: chimechallenge@gmail.com.
We look forward to your participation.
IMPORTANT DATES
5th March, 2018 - Training and development set data released
12th March, 2018 - Baseline recognition system released
10th June, 2018 - Workshop registration open
June/July, 2018 - Test data released
3rd Aug, 2018 - Extended abstract and challenge submission deadline
20th Aug, 2018 - Author notification
31st August, 2018 - Workshop registration deadline
7th Sept, 2018 - CHiME-5 Workshop (satellite of Interspeech 2018) and release of results
8th Oct, 2018 - Final paper (2 to 6 pages)
ORGANISERS
Jon Barker, University of Sheffield Shinji Watanabe, Johns Hopkins University Emmanuel Vincent, Inria
SPONSORS
Google Microsoft Research
SUPPORTED BY
International Speech Communication Association (ISCA) ISCA Robust Speech Processing SIG
CHiME 2018 will bring together researchers from the fields of computational hearing, speech enhancement, acoustic modelling and machine learning to discuss the robustness of speech processing in everyday environments.
As a focus for discussion, the workshop will host the CHiME-5 Speech Separation and Recognition Challenge. To find out more about the challenge, see http://spandh.dcs.shef.ac.uk/chime_challenge/.
PAPER SUBMISSION
Relevant research topics include (but are not limited to):
- training schemes: data augmentation, semi-supervised training,
- speaker localization and beamforming,
- single- or multi-microphone enhancement and separation,
- robust features and feature transforms,
- robust acoustic and language modeling,
- robust speech recognition,
- robust speaker and language recognition,
- robust paralinguistics,
- cross-environment or cross-dataset performance analysis,
- environmental background noise modelling.
Papers reporting evaluation results on the CHiME-5 dataset or on other datasets are both welcome.
IMPORTANT DATES
3rd Aug, 2018 - Extended abstract submission (2 pages)
20th Aug, 2018 - Paper notification
7th Sept, 2018 - CHiME-5 Workshop
8th Oct, 2018 - Final paper (2 to 6 pages)
ORGANISERS
Jon Barker, University of Sheffield Shinji Watanabe, Johns Hopkins University Emmanuel Vincent, Inria
LOCAL ORGANISER
Simerpreet Kaur, Microsoft
SPONSORS
Microsoft
SUPPORTED BY
International Speech Communication Association (ISCA) ISCA Robust Speech Processing SIG
The aim of WACAI (Workshop on 'Affects, Artificial Companions and Interaction' (ACAI)) is to bring together ongoing research and development on Animated Conversational Agents (ACA) and interactive robots. This year, WACAI aims to gather a multidisciplinary community of researchers in Affective Computing, Cognitive Science, Social Psychology and Linguistics. Participation from industry will be encouraged.
The WACAI workshops, which usually gather between 50 and 80 people, are organized by the GT ACAI working group. The GT-ACAI (Affects, Artificial Companions and Interaction - https://acai.limsi.fr/doku.php) working group of AFIA was created in 2012. Its aim is to animate and structure research activities in France around these issues. Its work therefore lies at the intersection of several scientific domains: virtual agents, conversational agents / virtual humans, affective computing, social signal processing and interactive robotics. Research in these domains shares several scientific questions: detection and recognition of social and emotional behaviours (emotions, social attitudes, personality, presence, engagement, etc.); cognitive models of the affective behaviour of 'socio-emotionally intelligent' agents to improve/optimize interaction; synthesis of socio-affective behaviours depending on the context (personality and social attitude, task, environment, perceptual and expressive capabilities of the interactive system, etc.); and taking emotions/affects/social signals into account in human-machine dialogue and in virtual environments. Its objective is to bring together activities in France around affective computing and interaction with artificial companions.
Following the previous biennial editions of the WACAI workshop, held successively in Grenoble (2005), Toulouse (2006), Paris (2008), Lille (2010), Grenoble (2012), Rouen (2014) and Brest (2016), this new edition will take place in Porquerolles from 13 to 15 June 2018.
Expected contributions, written preferably in French (or in English if really necessary), fall into three categories:
- Scientific papers (4 to 8 pages);
- Reviews or state-of-the-art surveys (4 to 8 pages), in particular on the links between the shared issues and the specificities of the ACA and robotics communities;
- Short descriptions of systems, demonstrations, ongoing experiments, and industrial applications and tools (2 pages).
Contributions are expected (as an indication) in the following research domains, multidisciplinary themes and applications:
RESEARCH DOMAINS
- Affective computing; computational processing of emotions
(2018-06-18) 6th International Symposium on Tonal Aspects of Languages , Berlin, Germany
The Sixth International Symposium on Tonal Aspects of Languages will be held in Berlin, Germany, from Monday June 18 to Wednesday June 20, 2018. This symposium follows the successful TAL 2016 conference in Buffalo, NY, USA. TAL 2018 will be organized at Beuth University Berlin, conveniently located in the city center close to all major attractions. TAL 2018 is timed just after Speech Prosody 2018 in Poznan, Poland (June 13-16), only a quick train ride away.
(2018-06-25) 2018 Jelinek Summer Workshop on Speech and Language Technology Johns Hopkins University, Baltimore, USA
2018 Jelinek Summer Workshop on Speech and Language Technology
We are pleased to invite one-page research proposals for a workshop on Machine Learning for Speech and Language Technology at Johns Hopkins University, June 25 to August 3, 2018 (tentative).
CALL FOR PROPOSALS Deadline: Monday, October 9th, 2017.
One-page proposals are invited for the annual Frederick Jelinek Memorial Workshop in Speech and Language Technology. Proposals should aim to advance the state of the art in any of the various fields of Human Language Technology (HLT) or related areas of Machine Intelligence, including Computer Vision and Healthcare. Proposals may address emerging topics or long-standing problems. Areas of interest in 2018 include but are not limited to:
* SPEECH TECHNOLOGY: Any aspect of information extraction from speech signals; techniques that generalize in spite of very limited amounts of training data and/or which are robust to input signal variations; techniques for processing of speech in harsh environments, etc.
* NATURAL LANGUAGE PROCESSING: Knowledge discovery from text; new approaches to traditional problems such as syntactic/semantic/pragmatic analysis, machine translation, cross-language information retrieval, summarization, etc.; domain adaptation; integrated language and social analysis; etc.
* MULTIMODAL HLT: Joint models of text or speech with sensory data; grounded language learning; applications such as visual question-answering, video summarization, sign language technology, multimedia retrieval, analysis of printed or handwritten text.
* DIALOG AND LANGUAGE UNDERSTANDING: Understanding human-to-human or human-to-computer conversation; dialog management; naturalness of dialog (e.g. sentiment analysis).
* LANGUAGE AND HEALTHCARE: information extraction from electronic health records; speech and language technology in health monitoring; healthcare delivery in hospitals or the home, public health, etc.
These workshops are a continuation of the Johns Hopkins University CLSP summer workshop series, and will be hosted by various partner universities on a rotating basis. The research topics selected for investigation by teams in past workshops should serve as good examples for prospective proposers: http://www.clsp.jhu.edu/workshops/.
An independent panel of experts will screen all received proposals for suitability. Results of this screening will be communicated by October 13th, 2017. Authors passing this initial screening will be invited to an interactive peer-review meeting in Baltimore on November 10-12th, 2017. Proposals will be revised at this meeting to address any outstanding concerns or new ideas. Two or three research topics and the teams to tackle them will be selected at this meeting for the 2018 workshop.
We attempt to bring the best researchers to the workshop to collaboratively pursue research on the selected topics. Each topic brings together a diverse team of researchers and students. Authors of successful proposals typically lead these teams. Other senior participants come from academia, industry and government. Graduate student participants familiar with the field are selected in accordance with their demonstrated performance. Undergraduate participants, selected through a national search, are rising star seniors: new to the field and showing outstanding academic promise.
If you are interested in participating in the 2018 Summer Workshop, we ask that you submit a one-page research proposal for consideration, detailing the problem to be addressed. If a topic in your area of interest is chosen as one of the topics to be pursued next summer, we expect you to be available to participate in the six-week workshop. We are not asking for an ironclad commitment at this juncture, just a good faith commitment that if a project in your area of interest is chosen, you will actively pursue it. We in turn will make a good faith effort to accommodate any personal/logistical needs to make your six-week participation possible.
Proposals must be submitted to jsalt2018@clsp.jhu.edu by 5PM EDT on Monday, 10/09/2017.
Workshop 'Temporality and Sequentiality in Sound Forms' ('Temporalité et séquentialité dans les formes sonores') - 27 June 2018, Nantes, France.
Workshop
'Temporality and Sequentiality in Sound Forms', Wednesday 27 June 2018, in Nantes (venue to be confirmed).
The Laboratoire de Linguistique de Nantes (LLING, UMR6310 Université de Nantes / CNRS) is organizing, on Wednesday 27 June, a one-day workshop on the analysis of the temporal and sequential properties of sound forms (speech, music). The workshop is free of charge and open to everyone, subject to available seats. Please pre-register via this form to help us organize the day.
With the support of:
the Language Sciences department (UFR Lettres & Langages, Université de Nantes);
the UFR Lettres & Langages (Université de Nantes);
Workshop objectives:
This one-day workshop will focus on the study of duration and succession phenomena in sound forms, concentrating mainly on speech while also opening up to questions related to formal phonological analysis or to music. It is organized on the occasion of Mohammad Abuoudeh's PhD defence in Nantes and will make it possible, on the day after the defence, to foster exchanges around current theoretical issues in these domains. Phenomena related to the succession of sound events and to their temporal properties are a major challenge for studies in phonetics and phonology, both within each domain (the status of temporal variation in the characterization of sound categories, in perceptual mechanisms, and in production control; the modelling of gemination phenomena or of duration contrasts in phonology) and with respect to the design of the interface between the phonetic and phonological levels (variation / categories, continuous / discrete relations, temporally invariant representations, degrees of freedom, dynamical models). The workshop aims to foster exchanges and prospects for future collaborations between the participants, and will also be an opportunity for students from Nantes or elsewhere who are able to make the trip to take advantage of the presence of these researchers and learn about their current work in more detail.
The day will be organized around 8 invited oral presentations (20 minutes + 10 minutes for questions). Outside the breaks, each half-day will be punctuated by moments of exchange and discussion aimed at fostering collaborative projects and/or discussions around the data and models presented.
Speakers:
Jalal Al-Tamimi (Newcastle University, UK) ;
Eleonora Cavalcante Albano (Universidade Estadual de Campinas, Brazil);
Olivier Crouzet (Laboratoire de Linguistique de Nantes, Université de Nantes / CNRS) ;
Elisabeth Delais-Roussarie (Laboratoire de Linguistique de Nantes, Université de Nantes / CNRS) ;
Radwa Fathi (Laboratoire de Linguistique de Nantes, Université de Nantes / CNRS) ;
Rachid Ridouane (Laboratoire de Phonétique et Phonologie, Université Paris 3 / CNRS) ;
Marie Tahon (Laboratoire d'Informatique de l'Université du Maine, Le Mans Université) ;
Registration is free but mandatory. Registration is open to everyone, subject to available seats. Members of LLING (UMR6310) or of AFCP (with up-to-date membership) will have priority until Thursday 14 June if demand exceeds the venue capacity. Final registrations will be confirmed by e-mail on Friday 15 June.
Organizing Committee: Olivier Crouzet, Elisabeth Delais-Roussarie.
eNTERFACE 2018, Louvain-la-Neuve, Belgium, July 6th - 31st, 2018. The 14th Summer Workshop on Multimodal Interfaces. enterface18.org
The piLab of the ELEN/ICTEAM research group at Université Catholique de Louvain invites project proposals for eNTERFACE'18, the 14th Summer Workshop on Multimodal Interfaces, to be held in Louvain-la-Neuve from July 6th to 31st, 2018. Following the success of the previous eNTERFACE workshops held in Mons (Belgium, 2005), Dubrovnik (Croatia, 2006), Istanbul (Turkey, 2007), Paris (France, 2008), Genova (Italy, 2009), Amsterdam (Netherlands, 2010), Plzen (Czech Republic, 2011), Metz (France, 2012), Lisbon (Portugal, 2013), Bilbao (Spain, 2014), Mons (Belgium, 2015), Twente (Netherlands, 2016) and Porto (Portugal, 2017), eNTERFACE'18 aims at continuing and enhancing the tradition of collaborative, localised research and development work by gathering, in a single place, leading researchers in multimodal interfaces and students to work on specific projects for 4 complete weeks.
Topics
This year's special topics will be transmedia storytelling and deep learning for improved interactions. There will be masterclasses around those topics during the workshop. Although not exhaustive, the submitted projects can cover one or several of the topics listed below:
- Art and Technology
- Affective Computing
- Assistive and Rehabilitation Technologies
- Assistive Technologies for Education and Social Inclusion
- Augmented Reality
- Conversational Embodied Agents
- Human Behavior Analysis
- Human Robot Interaction
- Interactive Playgrounds
- Innovative Musical Interfaces
- Interactive Systems for Artistic Applications
- Multimodal Interaction, Signal Analysis and Synthesis
- Multimodal Spoken Dialog Systems
- Search in Multimedia and Multilingual Documents
- Smart Spaces and Environments
- Social Signal Processing
- Tangible and Gesture Interfaces
- Teleoperation and Telerobotics
- Wearable Technology
- Virtual Reality
Important dates
March 10th, 2018: Submission deadline: 1-page notification of interest for a project proposal with a summary of project goals, work-packages and deliverables
March 24th, 2018: Submission deadline: final project proposal
March 31st, 2018: Notification of acceptance to project leaders
April 2nd, 2018: Call for Participation opens; participants can apply for projects
May 11th, 2018: Call for Participation closes
May 14th, 2018: Notification of acceptance to participants
June 1st, 2018: Teams are built
July 6th - 31st, 2018: eNTERFACE'18 Workshop
Proposals should be submitted to macq@ieee.org. They will be evaluated by the eNTERFACE Steering Committee with respect to their suitability to the workshop goals and format. Authors of the accepted proposals will then be invited to build their teams.
Nouvelles Technologies pour l'Exploration de Corpus de Parole (New Technologies for the Exploration of Speech Corpora)
Website (under construction): www.bigdata-speech. Dates: 9-13 July 2018. Venue: Conference Centre of the Station Biologique de Roscoff.
Theme:
In the era of big data, the CNRS and LabEx EFL thematic school Big Data & Speech aims to give an overview of innovative research in spoken-language linguistics based on large speech corpora. It also aims to present a selection of approaches, methods and tools from automatic speech and language processing that can be useful to linguists working on speech in fields as diverse as phonetics, phonology, dialectology, typology, acquisition, language learning, sociophonetics, speech pathology, etc. For instance, automatic forced alignment between the speech signal and a manual transcription speeds up many steps of measurement and of the linguistic analysis itself; mobile recording applications such as LIG-Aikuma speed up corpus collection for field linguists; and the large corpora collected for automatic processing, which reflect the use of spoken language at a given moment, can be valuable for linguists to test hypotheses and theories on a larger scale, to quantify known phenomena, or to discover phenomena that had gone unnoticed until now. In particular, the school aims to provide the foundations needed to understand and apply statistical and neural methods, and to show their value for addressing scientific questions in corpus linguistics. To this end, half of the time will be devoted to hands-on sessions. Epistemological questions will also be addressed.
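To make the forced-alignment example above concrete, here is a minimal, illustrative Python sketch (it is not part of the school materials): it reads the output of a forced aligner saved as a long-format Praat TextGrid and tabulates vowel durations. The file name phrase.TextGrid and the vowel label set are hypothetical placeholders to be adapted to one's own aligner and corpus.

    # Illustrative sketch: collect segment durations from forced-alignment output
    # (long-format Praat TextGrid), using only the Python standard library.
    import re
    from collections import defaultdict

    VOWELS = {"a", "e", "i", "o", "u", "y"}   # assumed label set; adapt to the corpus

    def read_intervals(path):
        """Yield (label, xmin, xmax) triples from a long-format Praat TextGrid."""
        text = open(path, encoding="utf-8").read()
        pattern = re.compile(
            r'xmin = ([\d.]+)\s*\n\s*xmax = ([\d.]+)\s*\n\s*text = "([^"]*)"')
        for xmin, xmax, label in pattern.findall(text):
            yield label.strip(), float(xmin), float(xmax)

    def vowel_durations(path):
        """Group interval durations (in milliseconds) by vowel label."""
        durations = defaultdict(list)
        for label, xmin, xmax in read_intervals(path):
            if label in VOWELS:
                durations[label].append((xmax - xmin) * 1000.0)
        return durations

    if __name__ == "__main__":
        # phrase.TextGrid is a hypothetical aligner output file
        for vowel, values in sorted(vowel_durations("phrase.TextGrid").items()):
            print(f"{vowel}: n={len(values)}, mean={sum(values)/len(values):.1f} ms")

In practice, the same loop would simply be run over a whole directory of aligned recordings, which is the kind of large-scale measurement the school is concerned with.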
Topics and speakers:
The programme consists of 4.5 days of lectures and hands-on sessions (50% lectures, 50% practical work) organized around the priority topics: corpus linguistics, corpus phonetics and phonology, tools and methods of automatic speech processing for linguists, foundations of machine learning for the analysis of linguistic corpora, methods and tools for information retrieval, and epistemological questions related to the use of quantitative methods in linguistics. The hands-on sessions (mainly using the Kaldi, Weka, R and Praat toolkits) will be carried out on corpora provided by the organizers. Participants who wish to work on their own data are invited to contact the organizers in order to check the feasibility of their study project.
Invited and/or confirmed speakers include:
Alexandre Allauzen, LIMSI, Université Paris-Saclay
Nicolas Audibert, LPP, Université Paris 3
Bruno Bachimont, Sorbonne Université / UTC Compiègne
Laurent Besacier, LIG, Université UGA Grenoble
Maud Ehrmann, EPFL, Lausanne
Yannick Estève, LIUM, Université du Maine, Le Mans
Cédric Gendrot, LPP, Université Paris 3
Mark Liberman, UPenn, Philadelphia
Margaret Renwick, Oxford University
...
Intended audience:
The school is aimed primarily at researchers, faculty members and engineers who work with spoken corpora and are interested in the computational exploitation of their data, or who wish to extend their work to large datasets requiring automatic processing. Depending on available places, PhD and/or Master's students are also encouraged to register. The courses are aimed primarily at participants from the humanities, but participants from information science and technology are also welcome insofar as their work requires better consideration of the linguistic issues involved in modelling spoken data.
For CNRS staff, registration and accommodation costs will be covered by the participants' regional delegation. Registration fees are expected to be around 350 EUR for academic participants and 170 EUR for PhD students, and cover accommodation, meals and participation in the lectures and hands-on sessions.
Co-located with ACL 2018, in Melbourne, on July 20th, 2018.
Named Entities (NE) play a crucial role in many monolingual and multilingual Natural Language Processing (NLP) and Information Retrieval (IR) tasks, such as document search, clustering, information extraction, etc. The phenomenal growth of the Internet and the dramatic changes in user demographics, especially in the non-English-speaking world, have made the identification, association and transformation of Named Entities across languages a critical-path problem for most NLP and IR tasks.
The purpose of this workshop is to bring together researchers interested in various aspects of NEs in natural language text.
Topics of Interest:
This workshop invites original research contributions on all aspects of Named Entities (NEs), including identification, analysis, extraction, mining, transformation and applications to NLP and IR systems. The topics of interest include, but are not limited to the following:
* NE Analysis
- Distributional characteristics of NEs in mono- and multi-lingual text corpus
- Orthographic/phonetic characteristics of NE
* NE Annotated Data
- Annotated data sets in specific languages & Creation experiences
* Monolingual and Multilingual NE Identification & processing
- Named Entity Recognition (approaches & evaluation)
Paper submissions to NEWS 2018 should follow the ACL 2018 paper submission policy, including paper format, blind review policy, and title and author format convention. Full papers (research papers) are in two-column format without exceeding eight (8) pages of content plus two (2) extra pages for references, and short papers (research and task papers) are also in two-column format without exceeding four (4) pages of content plus two (2) extra pages for references. Submissions must conform to the official ACL 2018 style guidelines. For details, please refer to http://acl2018.org/call-for-papers/#paper-submission-and-templates
(2018-07-23) 2nd INTERNATIONAL SUMMER SCHOOL ON DEEP LEARNING, Genova, Italy
2nd INTERNATIONAL SUMMER SCHOOL ON DEEP LEARNING
DeepLearn 2018 Genova, Italy July 23-27, 2018 Organized by: University of Genova IRDTA ? Brussels/London http://grammars.grlmc.com/DeepLearn2018/ *************************************************************** --- Early registration deadline: March 12, 2018 --- *************************************************************** SCOPE: DeepLearn 2018 will be a research training event with a global scope aiming at updating participants about the most recent advances in the critical and fast developing area of deep learning. This is a branch of artificial intelligence covering a spectrum of current exciting machine learning research and industrial innovation that provides more efficient algorithms to deal with large-scale data in neurosciences, computer vision, speech recognition, language processing, human-computer interaction, drug discovery, biomedical informatics, healthcare, recommender systems, learning theory, robotics, games, etc. Renowned academics and industry pioneers will lecture and share their views with the audience. Most deep learning subareas will be displayed, and main challenges identified through 2 keynote lectures, 24 six-hour courses, and 1 round table, which will tackle the most active and promising topics. The organizers are convinced that outstanding speakers will attract the brightest and most motivated students. Interaction will be a main component of the event. An open session will give participants the opportunity to present their own work in progress in 5 minutes. Moreover, there will be two special sessions with industrial and recruitment profiles. ADDRESSED TO: Master's students, PhD students, postdocs, and industry practitioners will be typical profiles of participants. However, there are no formal pre-requisites for attendance in terms of academic degrees. Since there will be a variety of levels, specific knowledge background may be assumed for some of the courses. Overall, DeepLearn 2018 is addressed to students, researchers and practitioners who want to keep themselves updated about recent developments and future trends. All will surely find it fruitful to listen and discuss with major researchers, industry leaders and innovators. STRUCTURE: 3 courses will run in parallel during the whole event. Participants will be able to freely choose the courses they wish to attend as well as to move from one to another. VENUE: DeepLearn 2018 will take place in Genova, the capital city of Liguria, inscribed on the UNESCO World Heritage List and with one of the most important ports of the Mediterranean. The venue will be: Porto Antico di Genova ? Centro Congressi Magazzini del Cotone ? Module 10 16128 Genova, Italy KEYNOTE SPEAKERS: tba PROFESSORS AND COURSES: (to be completed) Pierre Baldi (University of California, Irvine), [intermediate/advanced] Deep Learning: Theory, Algorithms, and Applications to the Natural Sciences Thomas Breuel (NVIDIA Corporation), [intermediate] Design and Implementation of Deep Learning Applications Joachim M. Buhmann (Swiss Federal Institute of Technology Zurich), [introductory/advanced] Model Selection by Algorithm Validation Li Deng (Citadel), tba Sergei V. 
Gleyzer (University of Florida), [introductory/intermediate] Feature Extraction, End-end Deep Learning and Applications to Very Large Scientific Data: Rare Signal Extraction, Uncertainty Estimation and Realtime Machine Learning Applications in Software and Hardware Michael Gschwind (IBM Global Chief Data Office), [introductory/intermediate] Deploying Deep Learning at Enterprise Scale Xiaodong He (Microsoft Research), [intermediate/advanced] Deep Learning for Natural Language Processing and Language-Vision Multimodal Intelligence Namkug Kim (Asan Medical Center), [intermediate] Deep Learning for Computer Aided Detection/Diagnosis in Radiology and Pathology Li Erran Li (Uber ATG), [intermediate/advanced] Deep Reinforcement Learning: Foundations, Recent Advances and Frontiers Dimitris N. Metaxas (Rutgers University), [advanced] Adversarial, Discriminative, Recurrent, and Scalable Deep Learning Methods for Human Motion Analytics, Medical Image Analysis, Scene Understanding and Image Generation Hermann Ney (RWTH Aachen University), [intermediate/advanced] Speech Recognition and Machine Translation: From Statistical Decision Theory to Machine Learning and Deep Neural Networks Jose C. Principe (University of Florida), [introductory/advanced] Cognitive Architectures for Object Recognition in Video Björn Schuller (Imperial College London), [intermediate/advanced] Deep Learning for Signal Analysis Michèle Sebag (French National Center for Scientific Research, Gif-sur-Yvette), [intermediate] Representation Learning, Domain Adaptation and Generative Models with Deep Learning Ponnuthurai N Suganthan (Nanyang Technological University), [introductory/intermediate] Learning Algorithms for Classification, Forecasting and Visual Tracking Johan Suykens (KU Leuven), [introductory/intermediate] Deep Learning and Kernel Machines Kenji Suzuki (Tokyo Institute of Technology), [introductory/advanced] Deep Learning in Medical Image Processing, Analysis and Diagnosis Gökhan Tür (Google Research), [intermediate/advanced] Deep Learning in Conversational AI Eric P. Xing (Carnegie Mellon University), [intermediate/advanced] A Statistical Machine Learning Perspective of Deep Learning: Algorithm, Theory, Scalable Computing Ming-Hsuan Yang (University of California, Merced), [intermediate/advanced] Learning to Track Objects Yudong Zhang (Nanjing Normal University), [introductory/intermediate] Convolutional Neural Network and Its Variants OPEN SESSION: An open session will collect 5-minute voluntary presentations of work in progress by participants. They should submit a half-page abstract containing title, authors, and summary of the research to david.silva409 (at) yahoo.com by July 15, 2018. INDUSTRIAL SESSION: A session will be devoted to 10-minute demonstrations of practical applications of deep learning in industry. Companies interested in contributing are welcome to submit a 1-page abstract containing the program of the demonstration and the logistics needed. At least one of the people participating in the demonstration must register for the event. Expressions of interest have to be submitted to david.silva409 (at) yahoo.com by July 15, 2018. EMPLOYERS SESSION: Firms searching for personnel well skilled in deep learning will have a space reserved for one-to-one contacts. At least one of the people in charge of the search must register for the event. Expressions of interest have to be submitted to david.silva409 (at) yahoo.com by July 15, 2018. 
ORGANIZING COMMITTEE: Francesco Masulli (Genova, co-chair) Sara Morales (Brussels) Manuel J. Parra-Royón (Granada) David Silva (London, co-chair)
REGISTRATION: Registration must be done at http://grammars.grlmc.com/DeepLearn2018/registration.php The selection of up to 8 courses requested in the registration template is only tentative and non-binding. For the sake of organization, it will be helpful to have an estimate of the respective demand for each course. During the event, participants will be free to attend the courses they wish. Since the capacity of the venue is limited, registration requests will be processed on a first-come, first-served basis. The registration period will be closed and the on-line registration facility disabled when the capacity of the venue is exhausted. It is highly recommended to register prior to the event.
FEES: Fees comprise access to all courses and lunches. There are several early registration deadlines; fees depend on the registration deadline.
ACCOMMODATION: Suggestions for accommodation will be available in due time.
CERTIFICATE: A certificate of successful participation in the event will be delivered, indicating the number of hours of lectures.
QUESTIONS AND FURTHER INFORMATION: david.silva409 (at) yahoo.com
ACKNOWLEDGMENTS: Università degli studi di Genova; Institute for Research Development, Training and Advice (IRDTA) - Brussels/London
Associate Prof. Tomi Kinnunen, UEF Dr. Ville Hautamäki, UEF
LECTURERS
Prof. Ville Kyrki, Aalto Univ. (autonomous agents) Dr. Ali Ghadirzadeh, Aalto Univ. & KTH Royal Inst. Tech. (autonomous agents) Dr. Md Sahidullah, INRIA (speech) Dr. Cemal Hanilci, Bursa Tech. Univ. (speech) Dr. Dayana Ribas, Univ. of Zaragoza (speech) Prof. Lauri Mehtätalo, UEF (statistics) Dr. Akihiro Kato, UEF (speech) Dr. Rosa Gonzalez Hautamäki, UEF (speech) Dr. Abraham Woubie, UEF (speech)
COURSE OVERVIEW
University of Eastern Finland (UEF) hosts a number of different summer courses in August 2018. The course on Machine Learning Applied to Speech Technology and Autonomous Agents is co-organized by the School of Computing of the UEF and Department of Automation and Systems Technology of Aalto University. Contact teaching takes place from August 13th to 17th, 2018. Individual optional project work is from August 20th to 24th, 2018.
The course is intended to be self-contained, with limited or no prior knowledge of machine learning required. It gives a brief overview of the relevant machine learning concepts and their applications to speech technology and autonomous agents. Despite the two different application themes, there are no parallel sessions; participants learn about both topics.
The first day includes course introduction, introduction to machine learning, linear mixed models and basics of deep learning for modeling sequential data. The next two days focus on audio topics (speaker & speech recognition, speaker diarization, speech enhancement, audio steganography), while the last two lecture days focus on reinforcement learning and autonomous software and physical agents (robots). The teaching takes place at the Joensuu campus of the UEF.
The speech portion of the course is especially recommended for PhD students (and MSc students close to graduation) who may already be familiar with the basics of signal processing and are interested in obtaining a brief overview of basic principles, state-of-the-art techniques and selected emerging trends, especially in speaker and language recognition.
The reinforcement learning portion of the course gives the basics of how autonomous agents, whether physical (robots) or software, can be designed and trained in an end-to-end fashion. This portion of the course is useful for any PhD or MSc student close to graduation who is interested in learning more about the state of the art in artificial intelligence (AI).
GENERAL COURSE INFORMATION
The language of the course is English. The primary target group is PhD and MSc students. The course amounts to either 2 ECTS or 5 ECTS, depending on the mode:
A. Lectures + learning diary (August 13 - 17), total 2 ECTS
B. Option A + additional individual project (August 20 - 24), total 5 ECTS
The course consists of 5 days of lectures, hands-on practicals, project work (1 week) and a learning diary. The course will be assessed as pass/fail. Students who pass the course will receive a course certificate.
SOCIAL PROGRAM
The course includes a social programme organized by the UEF (details TBA). The activities will mostly be included in your course fee, but some of them may have a small participation fee.
(2018-08-29) 6th international workshop on spoken language technologies for under-resourced languages (SLTU'18), Gurugram, India (Updated)
The 6th international workshop on spoken language technologies for under-resourced languages (SLTU'18) will be held in Gurugram, India, on 29-31 August 2018.
The workshop on spoken language technologies for under-resourced languages is the sixth in a series of even-year SLTU workshops. Five previous workshops were successfully organized: SLTU'16 in Yogyakarta (Indonesia), SLTU'14 in St. Petersburg (Russia), SLTU'12 in Cape Town (South Africa), SLTU'10 in Penang (Malaysia) and SLTU'08 in Hanoi (Vietnam).
There are more than 6000 languages in the world, and only a few are well represented digitally. India alone, a country with 780 spoken languages and 86 different scripts reflecting its incredible diversity, has lost around 250 languages in the last 50 years, and many more are on the verge of extinction. A major focus of this workshop is on Indo-European and Sino-Tibetan languages, but studies on other under-resourced languages are also encouraged. The workshop is planned as a satellite workshop to INTERSPEECH 2018 and is endorsed by SIGUL (a joint ISCA-ELRA Special Interest Group on Under-resourced Languages).
Prospective authors are invited to submit full-length papers of up to 4 pages of technical content (including figures, tables, etc.) plus one additional page containing only references, before June 15th (the submission page with paper templates will be updated soon).
(2018-09-01) 3rd International Workshop for Young Female Researchers in Speech Science & Technology (YFRSW-2018), Hyderabad, India
YFRSW-2018, Hyderabad, India, September 1, 2018
3rd International Workshop for Young Female Researchers in Speech Science & Technology (YFRSW-2018) Special event of Interspeech 2018, Hyderabad, India
== Important Dates: - Abstract submission opens: 16 April 2018 - Abstract submission closes: 24 May 2018 - Notification of acceptance: 15 June 2018 - Registration deadline: 5 July 2018 - Workshop date: 1 September 2018
== Topic: The aim of this workshop is to bring together women undergraduate and master's students who are currently working in speech science and technology, at a special event co-located with Interspeech 2018, Hyderabad, India. The workshop will take place on 1 September 2018 from 10am to 5pm, followed by a dinner with invited senior members of the Interspeech community. It will feature panel discussions with senior female researchers in the field, student poster presentations and a mentoring session. The workshop is the third of its kind, after a successful inaugural event (YFRSW 2016) at Interspeech 2016 in San Francisco and the second event (YFRSW 2017) in Stockholm, Sweden. It is designed to foster interest in research in our field among women at the undergraduate or master's level who have not yet committed to getting a PhD in speech science or technology areas, but who have had some research experience at their colleges and universities via individual or group projects.
== Call for Participation Abstracts describing the student's (planned) research (maximum of 300 words) should be submitted by email to
Abstracts will be reviewed by the committee and applicants will be notified by June 15, 2018. Emphasis will be on inclusivity, although all submissions should fall within the core scientific domains covered by Interspeech.
== Preliminary Program: The workshop will include the following events: * A welcome breakfast with introductions (1h) * A panel of senior women talking about their own research and experiences as women in the speech community (1h) * A panel of senior students who work in the speech area to describe how they became interested in speech research (1h) * A poster session for the students to present their own research (2h) * A coaching session between students and senior women mentors (1h) * A networking lunch for students and senior women (1h)
== Organizing committee: Amber Afshan (UCLA), Kay Berkling (Karlsruhe University), Heidi Christensen (University of Sheffield), Maxine Eskenazi (CMU), Milica Gasic (Cambridge University), Dilek Hakkani-Tür (Google Inc), Preethi Jyothi (IIT Bombay), Esther Klabbers (ReadSpeaker), Lori Lamel (LIMSI CNRS), Yang Liu (University of Texas Dallas), Karen Livescu (Toyota Technological Institute at Chicago), Pratibha Moogi (Samsung Electronics), Emily Mower Provost (University of Michigan), Catharine Oertel (EPFL), Bhuvana Ramabhadran (Google Inc), Odette Scharenborg (M*Modal), Elizabeth Shriberg (Ellipsis Health Inc), Isabel Trancoso (INESC-ID / Instituto Superior Técnico)
The 26th European Signal Processing Conference (EUSIPCO) will be held in Rome, the Eternal City, in Italy from September 3 to September 7, 2018. The flagship conference of the European Association for Signal Processing (EURASIP) will offer a comprehensive technical program addressing all the latest developments in research and technology for signal processing and its applications. It will feature world-class speakers, oral and poster sessions, keynotes and plenaries, exhibitions, demonstrations, tutorials, demo and ongoing work sessions and satellite workshops, and is expected to attract many leading researchers and industry figures from all over the world.
Technical Scope
We invite the submission of original, unpublished technical papers on topics including but not limited to:
- Audio and acoustic signal processing
- Speech and language processing
- Image and video processing
- Multimedia signal processing
- Signal processing theory and methods
- Sensor array and multichannel signal processing
- Signal processing for communications
- Radar and sonar signal processing
- Signal processing over graphs and networks
- Nonlinear signal processing
- Optimization methods
- Machine learning
- Statistical signal processing
- Compressed sensing and sparse modeling
- Bio-medical image and signal processing
- Signal processing for computer vision and robotics
- Computational imaging/Spectral imaging
- Information forensics and security
- Signal processing for power systems
- Signal processing for education
- Bioinformatics and genomics
- Signal processing for big data
- Signal processing for the internet of things
- Design/implementation of signal processing systems
- Other signal processing areas
Accepted papers will be included in IEEE Xplore®. EURASIP Society enforces a 'no-show' policy. Procedures to submit papers, proposals for special sessions, tutorials and satellite workshops are detailed at the EUSIPCO 2018 website (www.eusipco2018.org).
Important dates
Tutorial proposals: 18 February 2018
Satellite Workshop proposals: 21 January 2018
Full paper submissions: 18 February 2018
Notification of paper acceptance: 18 May 2018
Camera-ready papers: 18 June 2018
STUDENT PAPER AWARDS: 'EUSIPCO Best Student Paper Awards' will be presented at the conference banquet. Papers will be selected by a committee composed of area and technical chairs.
TUTORIAL AND SPECIAL SESSION PROPOSALS: Tutorials will be held on September 3, 2018. Brief tutorial proposals should include title, outline, contact information, biography and selected publications for the presenter(s), and a description of the tutorial and material to be distributed to participants. Special session proposals should include title, rationale, session outline, contact information, and a list of invited papers.
3 MINUTE THESIS (3MT):
EUSIPCO 2018 is offering a 3 Minute Thesis contest, in which PhD students have three minutes to present a compelling oration on their thesis and its significance. It is an exercise for students to consolidate their ideas so they can present them concisely to an audience specialized in different signal processing fields.
SATELLITE WORKSHOP PROPOSALS:
The 2018 edition of EUSIPCO is proud to organize a half day of thematic workshops on Friday, September 7, 2018, after the end of the main conference. These workshops will provide a forum for participating in specific scientific events and presenting research focused on current innovative topics in signal processing technology and its extension to other fields.
ORGANIZING COMMITTEE:
GENERAL CHAIR
Patrizio Campisi, Roma Tre University, Italy
GENERAL CO-CHAIR
Josef Kittler, University of Surrey, UK
TECHNICAL CO-CHAIRS
Sergio Barbarossa, Sapienza University of Rome, Italy
Moncef Gabbouj, Tampere University of Technology, Finland
Augusto Sarti, Polytechnic University of Milan, Italy
PLENARY TALKS
Lajos Hanzo, University of Southampton, UK
Enrico Magli, Polytechnic University of Turin, Italy
SPECIAL SESSIONS
Paulo Lobato Correia, IST Lisbon, Portugal
Andreas Uhl, Salzburg University, Austria
TUTORIALS AND DEMO
Bulent Sankur, Bogazici University, Turkey
Marco Carli, Roma Tre University, Italy
STUDENT ACTIVITIES CHAIR
Juan Ramon Troncoso-Pastoriza, EPFL, Switzerland
PUBLICATIONS CHAIR
Emanuele Maiorana, Roma Tre University, Italy
FINANCE CHAIR
Francesco De Natale, University of Trento, Italy
PUBLICITY CHAIRS
Carmen Garcia Mateo, University of Vigo, Spain
Stefania Colonnese, Sapienza University of Rome, Italy
(2018-09-04) CBMI-Cf Special sessions, La Rochelle, France
-------------------------------------------------------------------- CONTENT-BASED MULTIMEDIA INDEXING 2018 CALL FOR SPECIAL SESSIONS Special Session Proposals Due: February 19, 2018 --------------------------------------------------------------------
The International Conference on Content-Based Multimedia Indexing (http://cbmi2018.univ-lr.fr/), one of the leading venues in multimedia indexing, will host in 2018 two to three special sessions featuring contributions on focused, trendy, innovative or frontier topics in multimedia indexing. Special sessions differ from regular sessions in that they provide a focused context for addressing new or emerging research directions, new developments in various application domains, and frontier topics in multimedia indexing and retrieval.
Special sessions will be held as plenary sessions and should typically target 4 to 6 high-quality oral presentations on the topic addressed. Special session organizers will manage the selection process for their session under the guidance of the special session chairs.
Important Dates
Special Session Proposals Due: February 19, 2018 Notification of Acceptance: March 09, 2018 Special Sessions Paper Submission: May 18, 2018 Notification of Acceptance: June 29, 2018 Camera-Ready Papers Due: July 13, 2018
Submission Instructions
Proposals should be submitted by email to the special session chairs (jenny.benois@labri.fr, guillaume.gravier@irisa.fr), either in plain-text or PDF format. Please include the following information: - Title of the proposed special session - Rationale for the proposal, including target audience - A brief bio and contact information for the organizers - A tentative/confirmed list of invited papers (titles/affiliations/authors, if applicable)
(2018-09-04) CfP International Conference on Content-Based Multimedia Indexing (CBMI) 2018 - La Rochelle, France
Call for papers CBMI 2018 - La Rochelle, France 4-6 Sept 2018 International Conference on Content-Based Multimedia Indexing http://cbmi2018.univ-lr.fr/ (Main submission deadline May 04, 2018)
CBMI aims at bringing together the various communities involved in all aspects of content-based multimedia indexing for retrieval, browsing, management, visualization and analytics.
After 15 successful editions of the CBMI workshop, the event is now becoming a conference whose next edition will be held in La Rochelle, France. The scientific program will include invited keynote talks and regular, special and demo sessions.
Authors are encouraged to submit previously unpublished research papers in the broad field of content-based multimedia indexing and related applications. Topics of interest include (but are not limited to):
· Semantic multimedia analysis · Summarization and semantic abstraction of multimedia content · Multimedia content characterization and classification · Metadata generation, coding and transformation · Multimodal and cross-modal indexing and retrieval · Mobile and social media analysis and retrieval · Multimedia recommendation · Multimedia analysis and indexing beyond semantics, e.g. affect, sentiment, interest · Personalization of multimedia content access · Interactive multimedia indexing and retrieval · Evaluation and benchmarking of multimedia retrieval systems · Applications of multimedia indexing and retrieval, e.g. in medicine, lifelogs, satellite imagery, video surveillance and culture
The CBMI proceedings are traditionally indexed and distributed by IEEE Xplore and ACM DL. In addition, authors of the best papers of the conference will be invited to submit extended versions of their contributions to a special issue of Multimedia Tools and Applications journal (MTAP) http://www.springer.com/computer/information+systems+and+applications/journal/11042
Important dates: Full/short paper submission: May 04, 2018 Demo paper submission: May 18, 2018 Special sessions paper submission: May 18, 2018 Notification of acceptance: June 29, 2018 Camera-ready papers due: July 13, 2018
* Special Session on Analysis of Multimedia Data for Medicine and Health
* Important dates: Paper submission: May 18, 2018 Notification of acceptance: June 29, 2018 Camera-ready papers due: July 13, 2018
* Organizers: - Klaus Schoeffmann, Klagenfurt University, Austria, ks@itec.aau.at - Cathal Gurrin, Dublin City University, Ireland, cathal.gurrin@dcu.ie - Stefanos Vrochidis, ITI, CERTH, Greece, stefanos@iti.gr - Oge Marques, Florida Atlantic University, FL, USA, omarques@fau.edu
* Call for papers: Within the last decade we have observed the emergence of multimedia data analysis and indexing in many new domains, including medicine and personal health. For example, in the field of medical surgery, interventional videos are nowadays recorded and stored in a long-term archive, in order to analyze and use them for post-procedural scenarios such as operation documentation, surgical error analysis, and training and teaching of surgery techniques. Similarly, in the field of personal health, images from lifelogging cameras are used to track sports activities, to compute calorie consumption, and to create memories for elderly people with dementia, for example. Other relevant lines of research include analysis of medical images for diagnosis decision support, multimedia analysis and multimodal interaction with social agents for basic care, video monitoring and multimedia fusion for remote management of patients. This special session aims to bring together researchers working on analysis and indexing of multimedia data in the field of medicine and health and to provide them with a venue for sharing novel ideas and discussing their most recent work.
* Topics of interest include (but are not limited to): - Medical image analysis/indexing - Medical video analysis/indexing (e.g., endoscopic videos, microscopic medical videos, OR-videos) - Surgical Quality Assessment (e.g., error analysis through images or videos) - Image and video analysis from personal sensors (e.g., lifelogging cameras) for the purpose of health - Lifelog data analysis in general - Personal experiences of long-term health/wellness studies - Multimedia analysis and retrieval for multimodal interaction in the health domain - Multimodal conversation and dialogue systems for social companion agents - Speech and audio analysis and retrieval for health applications - Facial analysis and gesture recognition of patients - Fusion of multimedia information for health and care-giving applications - Semantic web approaches for multimedia health applications - Visual analytics for human machine interaction in the health domain
* Submission: Authors are invited to submit full-length papers (6 pages in IEEE double-column format) via the EasyChair system of CBMI 2018. Each submission will be peer-reviewed by at least 3 PC members (single-blind). *** Important note: The title and the header of the paper should include the mention 'Submitted to Special Session on Analysis of Multimedia Data for Medicine and Health' (e.g., as a sub-title) to avoid any misclassification.
* Special Session on Information Retrieval for Earth Observation (SS on IR4EO)
* Important dates: Paper submission: May 18, 2018 Notification of acceptance: June 29, 2018 Camera-ready papers due: July 13, 2018
* Organizers: Sébastien Lefèvre, Université de Bretagne Sud, IRISA (sebastien.lefevre@irisa.fr) Josiane Mothe, Université de Toulouse, IRIT CNRS (Josiane.Mothe@irit.fr)
* Call for papers: The proliferation of Earth Observation satellites, together with their continuously increasing performance, provides today a massive amount of geospatial data. Analysis and exploration of such data leads to various applications, from agricultural monitoring to crisis management and global security. However, they also raise very challenging problems, e.g. dealing with extremely large and real-time geospatial data, user-friendly querying and retrieval of satellite images or mosaics, and semantic indexing and annotation. The purpose of this special session is to address these challenges, and to allow researchers from multimedia retrieval and remote sensing to meet and share their experiences in order to build the remote sensing retrieval systems of tomorrow.
This special session aims to establish connections between researchers from multimedia retrieval and issues raised in remote sensing, and to provide interesting problems to the former while providing solutions for the latter. On the one hand, geospatial data requires specific models of description, with characteristics very different from other domains. To name a few, remotely sensed images are not necessarily defined in usual color spaces, they form large-scale mosaics enabling continuous global coverage of the Earth, they can be analysed and understood at various scales, etc. On the other hand, the multimedia retrieval community proposes many scalable algorithms for learning, searching, or classifying data in a more generalist way. This special session will be a very interesting opportunity for multimedia researchers to propose adaptations to geospatial data, and for remote sensing researchers to create new models compatible with retrieval algorithms, while offering a context where people from these two domains can meet and share their experiences.
Earth Observation is one of the major sources of visual data that still greatly lacks efficient and effective methods for indexing and retrieval. Major challenges are faced since the geospatial data available worldwide is on the order of magnitude of zettabytes. Besides, thanks to the efforts of NASA in the USA and the Copernicus programme in Europe, satellite images provided free of charge to end-users represent several new TB every day. To ease the design of new solutions, the scientific community benefits from the availability of an increasing number of public benchmarks, such as: UC Merced Land Use Dataset, Brazilian Coffee Scenes Dataset, SAT-4 and SAT-6 airborne datasets, Sentinel-2 EuroSAT dataset, ISPRS 2D and 3D Semantic Labeling benchmark, ImageClef 2017 Remote Pilot task, IEEE Data Fusion Contest, Kaggle contests, etc. This is expected to ensure fair comparison of methods and to support the evolution of the state of the art.
* Topics of interest include (but are not limited to): - Content- and context-based indexing, search and retrieval of EO data - Semantic annotation - Deep Learning and CBIR of EO data - Search and browsing on EO repositories - Change detection and its applications - Near real-time monitoring - Multimodal / multi-observation (sensors, dates, resolutions) analysis of EO data - HCI issues in EO retrieval and browsing - Evaluation of EO retrieval systems, benchmarks for EO indexing and retrieval tasks - High-performance, large-scale indexing algorithms for EO data - Data fusion - Summarization and visualization of very large satellite image datasets - Applications: deforestation detection, air pollution detection and prediction, climate change, monitoring of resources, from land cover to phenology, photosynthetic activity, etc. Submissions should be sent via EasyChair and follow the IEEE format (see the CBMI call). Each submission will be peer-reviewed by at least 3 PC members (general PC and special session PC). The title of the submission should include '(SS on IR4EO)' to avoid misclassification.
Since the Remote Sensing journal has an open call for a special issue on these topics (http://www.mdpi.com/journal/remotesensing/special_issues/ir2s) with a deadline of 30 September, the best papers from the special session will be encouraged to submit an extended journal version to this special issue. The selection of the papers will be eased by the fact that one of the special session organizers is also the lead guest editor of the special issue.
* Invited speaker: Begüm Demir is an associate professor at the University of Trento (Italy). In 2017, she received an ERC Starting Grant for the project 'BigEarth - Accurate and Scalable Processing of Big Data in Earth Observation'.
(2018-09-07) CHiME 2018 Workshop on Speech Processing in Everyday Environments, Hyderabad, India
CHiME 2018 will bring together researchers from the fields of computational hearing, speech enhancement, acoustic modelling and machine learning to discuss the robustness of speech processing in everyday environments.
As a focus for discussion, the workshop will host the CHiME-5 Speech Separation and Recognition Challenge. To find out more about the challenge, see http://spandh.dcs.shef.ac.uk/chime_challenge/.
PAPER SUBMISSION
Relevant research topics include (but are not limited to): - training schemes: data augmentation, semi-supervised training, - speaker localization and beamforming, - single- or multi-microphone enhancement and separation, - robust features and feature transforms, - robust acoustic and language modeling, - robust speech recognition, - robust speaker and language recognition, - robust paralinguistics, - cross-environment or cross-dataset performance analysis, - environmental background noise modelling.
Papers reporting evaluation results on the CHiME-5 dataset or on other datasets are both welcome.
IMPORTANT DATES
3rd Aug, 2018 Extended abstract submission (2 pages) 20th Aug, 2018 Paper notification 7th Sept, 2018 CHiME-5 Workshop 8th Oct, 2018 Final paper (2 to 6 pages)
ORGANISERS
Jon Barker, University of Sheffield Shinji Watanabe, Johns Hopkins University Emmanuel Vincent, Inria
LOCAL ORGANISER
Simerpreet Kaur, Microsoft
SPONSORS
Microsoft
SUPPORTED BY
International Speech Communication Association (ISCA) ISCA Robust Speech Processing SIG
(2018-09-07) 5th Machine Learning in Speech and Language Processing Workshop (MLSLP-2018), Hyderabad, India
MLSLP-2018, Hyderabad, India, September 7, 2018
5th Machine Learning in Speech and Language Processing Workshop (MLSLP-2018)
Satellite workshop of Interspeech 2018, Hyderabad, India
https://sites.google.com/view/mlslp/home
== Important Dates:
- Abstracts due: 2 July 2018
- Notification of acceptance: 16 July 2018
- Final abstract/paper deadline: 30 July 2018
- Registration deadline: 30 July 2018 (Registration is free for all attendees!)
- Workshop date: 7 September 2018
== Topic:
MLSLP is a recurring workshop, often held jointly with machine learning or speech/natural language processing conferences. While research in speech and language processing has always involved machine learning, current research is benefiting from even closer interaction between these fields. Speech and language processing is continually mining new ideas from machine learning (ML), and ML, in turn, is devoting more interest to speech and language applications. This workshop aims to be a venue for identifying and incubating the next waves of research directions for interaction and collaboration. The workshop will not be yet another venue for applications of deep learning to speech and language processing, as this is already well covered by major conferences. It will, however, include new directions for deep learning in speech/language, as well as other emerging ideas. In general, the workshop will (1) discuss emerging research ideas with potential for impact in speech/language and (2) bring together relevant researchers from ML and speech/language who may not regularly interact at conferences. MLSLP is a workshop of SIGML, the Special Interest Group on machine learning in speech and language processing of ISCA (the International Speech Communication Association).
== Call for Participation
Abstracts should be submitted electronically via the following submission site: https://easychair.org/conferences/?conf=mlslp2018
Abstracts are limited to 2 pages of text, plus one page (maximum) for references only. Please use the main Interspeech two-column format, with the 2-page limit. Submissions that exceed the page limit or do not conform to the guidelines will be rejected without review. Submissions must be submitted in PDF format.
Submitted abstracts may include new work and/or a summary of the authors' work that has been recently published or is under review in another conference or journal. In the interest of spurring discussion, we also encourage authors to submit work in progress with only preliminary results.
== Preliminary Program:
The workshop will include the following events:
* A series of talks by senior researchers in the field of speech and language processing
* A series of short talks by postdoctoral scholars/graduate students
* A poster session for workshop attendees to present their own research
* A lunch for all the attendees
== Organizing committee:
Preethi Jyothi (general chair) / Indian Institute of Technology Bombay
Rohit Prabhavalkar (general chair) / Google Inc.
Liang Lu (program chair) / Microsoft Inc.
Tara Sainath (program chair) / Google Inc.
== Scientific committee:
Yossi Adi / Bar-Ilan University
Ebru Arisoy / MEF University
Nancy Chen / Institute for Infocomm Research, A*STAR
Sriram Ganapathy / Indian Institute of Science
Mark Hasegawa-Johnson / UIUC
Yanzhang (Ryan) He / Google Inc.
Karen Livescu / TTI Chicago
Michael Mandel / Brooklyn College CUNY
Vimal Manohar / Johns Hopkins University
Petr Motlicek / Idiap Research Institute
Arun Narayanan / Google Inc.
Anton Ragni / University of Cambridge
Hao Tang / MIT
For further details, please visit https://sites.google.com/view/mlslp/home or email mlslp2018@gmail.com
FedCSIS is an annual international multi-conference, this year organized jointly by the Polish Information Processing Society (PTI), Poland Section Computer Society Chapter, Systems Research Institute Polish Academy of Sciences, Wroclaw University of Economics, Warsaw University of Technology and Adam Mickiewicz University, in technical cooperation with: IEEE Region 8, IEEE Poland Section, IEEE Computer Society Technical Committee on Intelligent Informatics, IEEE Czechoslovakia Section Computer Society Chapter, IEEE Poland Section (Gdansk) Computer Society Chapter, IEEE SMC Technical Committee on Computational Collective Intelligence, IEEE Poland Section SMC Society Chapter, IEEE Poland Section Control System Society Chapter, IEEE Poland Section Computational Intelligence Society Chapter, ACM Special Interest Group on Applied Computing, International Federation for Information Processing, Committee of Computer Science of Polish Academy of Sciences, Polish Operational and Systems Research Society, Eastern Cluster ICT Poland, Mazovia Cluster ICT.
Please feel free to forward this announcement to your colleagues and associates who could be interested in it.
The mission of the FedCSIS Conference Series is to provide a highly acclaimed multi-conference forum in computer science and information systems. The forum invites researchers from around the world to contribute their research results and participate in Events focused on their scientific and professional interests in computer science and information systems.
Since 2012, proceedings of the FedCSIS conference have been indexed in the Web of Science, SCOPUS and other indexing services. This already includes the proceedings of FedCSIS 2017.
FedCSIS EVENTS
FedCSIS 2018 consists of the following Events, grouped into five conference areas.
* AAIA'18 - 13th International Symposium Advances in Artificial Intelligence and Applications --- AIMaViG'18 - 3rd International Workshop on Artificial Intelligence in Machine Vision and Graphics --- AIMA'18 - 8th International Workshop on Artificial Intelligence in Medical Applications --- AIRIM'18 - 3rd International Workshop on AI aspects of Reasoning, Information, and Memory --- ASIR'18 - 8th International Workshop on Advances in Semantic Information Retrieval --- DMGATE'18 - 1st International Workshop on AI Methods in Data Mining Challenges --- SEN-MAS'18 - 6th International Workshop on Smart Energy Networks & Multi-Agent Systems --- WCO'18 - 11th International Workshop on Computational Optimization
* CSS - Computer Science & Systems --- 4A'18 - 1st Workshop on Actors, Agents, Assistants, Avatars --- AIPC'18 - 2nd International Workshop on Advances in Image Processing and Colorization --- BEDA'18 - 1st International Workshop on Biomedical & Health Engineering and Data Analysis --- BigDAISy'18 - 1st Workshop on Big Data Analytics for Information Security --- CANA'18 - 11th Workshop on Computer Aspects of Numerical Algorithms --- C&SS'18 - 5th International Conference on Cryptography and Security Systems --- CPORA'18 - 3rd Workshop on Constraint Programming and Operation Research Applications --- DaSCA'18 - 1st International Symposium on Big Data in Cloud and Services Computing Applications --- LTA'18 - 3rd International Workshop on Language Technologies and Applications --- MMAP'18 - 11th International Symposium on Multimedia Applications and Processing --- WSC'18 - 10th Workshop on Scalable Computing
* iNetSApp - International Conference on Innovative Network Systems and Applications --- CAP-NGNCS'18 - 1st International Workshop on Communications Architectures and Protocols for the New Generation of Networks and Computing Systems --- INSERT'18 - 2nd International Conference on Security, Privacy, and Trust --- IoT-ECAW'18 - 2nd Workshop on Internet of Things - Enablers, Challenges and Applications --- WSN'18 - 7th International Conference on Wireless Sensor Networks
* IT4MBS - Information Technology for Management, Business & Society --- AITM'18 - 15th Conference on Advanced Information Technologies for Management --- AITSD'18 - 1st International Workshop on Applied Information Technologies for Sustainable Development --- ISM'18 - 13th Conference on Information Systems Management --- IT4L'18 - 6th Workshop on Information Technologies for Logistics --- KAM'18 - 24th Conference on Knowledge Acquisition and Management --- TEMHE'18 - 1st Workshop on Technology Enhanced Medical and Healthcare Education
* SSD&A - Software Systems Development & Applications --- MDASD'18 - 5th Workshop on Model Driven Approaches in System Development --- MIDI'18 - 6th Conference on Multimedia, Interaction, Design and Innovation --- LASD'18 - 2nd International Conference on Lean and Agile Software Development --- SEW-38 & IWCPS-5 - Joint 38th IEEE Software Engineering Workshop (SEW-38) and 5th International Workshop on Cyber-Physical Systems (IWCPS-5)
* DS-RAIT'18 - 5th Doctoral Symposium on Recent Advances in Information Technology
KEYNOTE SPEAKERS
- Mehmet Aksit, Chair Software Engineering, Formal Methods and Tools Group, Department of Computer Science, University of Twente - Jan Bosch, Director of the Software Center, Professor at Chalmers University of Technology, Gothenburg, Sweden - Włodzisław Duch, Professor at Department of Informatics, and NeuroCognitive Laboratory, Center for Modern Interdisciplinary Technologies, Nicolaus Copernicus University - Rory V. O'Connor, Professor at Dublin City University, Ireland, Head of Delegation (for Ireland) to ISO/IEC JTC1/SC7
PAPER SUBMISSION AND PUBLICATION
Papers should be submitted by May 15, 2018 (strict deadline, no extensions). Preprints will be published on a USB memory stick provided to the FedCSIS participants. Only papers presented during the conference will be submitted to the IEEE for inclusion in the Xplore Digital Library. Furthermore, the proceedings, published in a volume with ISBN, ISSN and DOI numbers, will be posted within the conference Web portal. Moreover, most Events' organizers arrange post-conference publications in quality journals, edited volumes, etc., and may invite selected extended and revised papers (information can be found at the websites of the individual Events, or by contacting the Chairs of said Events).
IMPORTANT DATES
- Paper submission (strict deadline): May 15, 2018, 23:59:59 HST (there will be no extension) - Position paper submission: June 12, 2018 - Author notification: June 24, 2018 - Final paper submission and registration: July 03, 2018 - Final deadline for discounted fee: August 01, 2018 - Conference dates: September 9-12, 2018
CHAIRS OF FedCSIS CONFERENCE SERIES
Maria Ganzha, Leszek A. Maciaszek, Marcin Paprzycki
(2018-09-10) CLEF 2018 Conference and Labs on the Evaluation Forum, Avignon, France (Updated)
CLEF 2018 Conference and Labs on the Evaluation Forum Information Access Evaluation meets Multilinguality, Multimodality and Visualization 10 - 14 September 2018, Avignon - France
Important Dates --------------- - Title, authors and abstract upload: 7 May 2018 - Submission of Long and Short Papers: 14 May 2018 - Notification of Acceptance: 8 June 2018 - Camera Ready Copy due: 22 June 2018 - Conference: 10-14 September 2018
CLEF 2018 is the 19th edition of CLEF which, since 2000, has contributed to the systematic evaluation of information access systems. It consists of a peer-reviewed conference (see the separate call for papers) and a set of ten Labs designed to test different aspects of multilingual and multimedia IR systems: 1. CENTRE@CLEF 2018, CLEF/NTCIR/TREC Reproducibility 2. CheckThat! Automatic Identification and Verification of Political Claims 3. CLEF eHealth 4. DynSe, Dynamic Search for Complex Tasks 5. eRISK, Early Risk Prediction on the Internet 6. ImageCLEF, Multimedia Retrieval in CLEF 7. LifeCLEF 8. MC2, Multilingual Cultural Mining and Retrieval 9. PAN, Lab on Digital Text Forensics 10. PIR-CLEF, Evaluation of Personalised Information Retrieval
***************** Organizers ***************** Conference Chairs Patrice Bellot, Aix-Marseille Université - CNRS LSIS, France Chiraz Trabelsi, University of Tunis El Manar, Tunis
Program Chairs Josiane Mothe, SIG, IRIT, France Fionn Murtagh, University of Huddersfield, UK
Lab Chairs Jian Yun Nie, DIRO, Université de Montréal, Canada Laure Soulier, LIP6, UPMC, France
Proceedings Chairs Linda Cappellato, University of Padua, Italy Nicola Ferro, University of Padua, Italy
CENTRE@CLEF 2018 - CLEF/NTCIR/TREC Reproducibility The goal of CENTRE@CLEF 2018 is to run a joint CLEF/NTCIR/TREC task challenging participants: 1) to reproduce the best results of the best/most interesting systems in previous editions of CLEF/NTCIR/TREC by using standard open source IR systems; 2) to contribute back to the community the additional components and resources developed to reproduce the results, in order to improve existing open source systems. - Task 1 - Replicability: replicability of selected methods on the same experimental collections. - Task 2 - Reproducibility: reproducibility of selected methods on different experimental collections. - Task 3 - Re-reproducibility: using the components developed in T1 and T2 and made available by the other participants to replicate/reproduce their results. Lab Coordination: Nicola Ferro (University of Padua), Tetsuya Sakai (Waseda University), Ian Soboroff (NIST) Lab website: http://www.centre-eval.org/ Twitter: @_centre_
LifeCLEF LifeCLEF lab aims at boosting research on the identification of living organisms and on the production of biodiversity data. Through its biodiversity informatics related challenges, LifeCLEF is intended to push the boundaries of the state-of-the-art in several research directions at the frontier of multimedia information retrieval, machine learning and knowledge engineering. The lab is organized around three tasks: - Task 1 - GeoLifeCLEF: location-based species recommendation. - Task 2 - BirdCLEF: bird species identification from bird calls and songs. - Task 3 - ExpertLifeCLEF: experts vs. machines identification quality. Lab Coordination: Alexis Joly (INRIA, LIRMM), Henning Müller (HES-SO), Pierre Bonnet (CIRAD, AMAP), Hervé Goëau (CIRAD, AMAP), Hervé Glotin (University of Toulon, LSIS CNRS), Simone Palazzo (University of Catania), Willem-Pier Vellinga (Xeno-Canto) Lab website:http://lifeclef.org/
PAN - Lab on Digital Text Forensics PAN is a series of scientific events and shared tasks on digital text forensics. - Task 1 - Author Identification: cross-domain authorship attribution. More specifically, cases where the topic of texts varies significantly will be examined. In addition, we will continue the pilot task of style change detection, focusing on finding switches of authors within documents based on an intrinsic style analysis. - Task 2 - Author Obfuscation: while the goal of author identification and author profiling is to model author style so as to de-anonymize authors, the goal of author obfuscation technology is to prevent that by disguising the authors. We will study author masking vs. authorship verification. - Task 3 - Author Profiling: the goal is to identify an author's traits based on their writing style. The focus will be on age and gender, whereas text and image will be used as information sources, offering tweets in English, Spanish and Arabic. Lab Coordination: Martin Potthast (Leipzig University), Paolo Rosso (Universitat Politècnica de València), Efstathios Stamatatos (University of the Aegean), Benno Stein (Bauhaus-Universität Weimar) Lab website: http://pan.webis.de/
CLEF eHealth Medical content is available electronically in a variety of forms ranging from patient records and medical dossiers, scientific publications and health-related websites to medical-related topics shared across social networks. This lab aims to support the development of techniques to aid laypeople, clinicians and policy-makers in easily retrieving and making sense of medical content to support their decision making. - Task 1 - Multilingual Information Extraction: Participants will be required to extract the causes of death from death certificates, authored by physicians in European languages. This can be seen as a named entity recognition, normalization, and/or text classification task. - Task 2 - Technologically Assisted Reviews in Empirical Medicine: Participants will be challenged to retrieve medical studies relevant to conducting a systematic review on a given topic. This can be seen as a total recall problem and is addressed by both query generation and document ranking. - Task 3 - Patient-centred Information Retrieval: Participants must retrieve web pages that fulfil a given patient's personalised information need. This needs to fulfil the following criteria: information reliability, quality, and suitability. The task also has a multilingual querying track. Lab Coordination: Leif Azzopardi (Univ. of Strathclyde), Lorraine Goeuriot (Univ. J.Fourier), Evangelos Kanoulas (Univ. of Amsterdam), Liadh Kelly (Maynooth University), Aurélie Névéol (CNRS-LIMSI), Joao Palotti (Vienna Univ.), Aude Robert (INSERM/CepiDC), Rene Spijker (Cochrane), Hanna Suominen (Australian National Univ.), Guido Zuccon (Queensland Univ. of Technology) Lab Website: https://sites.google.com/view/clef-ehealth-2018/home Twitter: @clefehealth
MC2 - Multilingual Cultural Mining and Retrieval The lab develops processing methods and resources to mine the social media sphere surrounding cultural events such as festivals. This requires dealing with almost all languages and dialects as well as informal expressions. There are three tasks: - Task 1 - Cross Language Cultural Retrieval over MicroBlogs: a) Small Microblogs Multilingual Information Retrieval in Arabic, English, French and Latin languages; b) Microblogs Bilingual Information Retrieval for tuning systems running on language pairs; c) Microblog Monolingual Information Retrieval based on 2017 language identification. - Task 2 - Mining Opinion Argumentation: a) Polarity detection in microblogs; b) Automatic identification of argumentation elements over Microblogs and WikiPedia; c) Classification and summarization of arguments in texts. - Task 3 - Dialectal Focus Retrieval: a) Arabic dialects in Blogs, MicroBlogs and Video News transcriptions; b) Spanish language variations in Blogs, MicroBlogs and Journals. Lab Coordination: Chiraz Latiri (University Tunis El Manar), Eric SanJuan (LIA, Avignon University), Catherine Berrut (LIG, Grenoble Alpes University), Lorraine Goeuriot (LIG, Grenoble Alpes University), Julio Gonzalo (UNED) Lab website: https://mc2.talne.eu/ Twitter: @talne_mc2
ImageCLEF - Multimedia Retrieval in CLEF The lab provides an evaluation forum for the language-independent annotation and retrieval of images, a domain for which tools are by far not as advanced as for text analysis and retrieval. - Task 1 - ImageCLEFlifelog: An increasingly wide range of personal devices, such as smartphones, video cameras and wearable devices that allow capturing pictures, videos, and audio clips in every moment of our lives, is becoming available. The task addresses the problems of lifelogging data understanding, summarization and retrieval. - Task 2 - ImageCLEFcaption: Interpreting and summarizing the insights gained from medical images such as radiology output is a time-consuming task that involves highly trained experts and often represents a bottleneck in clinical diagnosis pipelines. The task addresses the problem of bio-medical image concept detection and caption prediction from large amounts of training data. - Task 3 - ImageCLEFtuberculosis: The objective of this task is to determine tuberculosis subtypes and drug resistances, as far as possible automatically, from the volumetric image information in computed tomography (CT) volumes (mainly texture analysis) and based on clinical information (e.g., age, gender, etc.). - Task 4 - VisualQuestionAnswering: With the ongoing drive for improved patient engagement and access to electronic medical records via patient portals, patients can now review structured and unstructured data, from labs and images to text reports, associated with their healthcare utilization. Given a medical image accompanied by a set of clinically relevant questions, participating systems are tasked with answering the questions based on the visual image content. Lab Coordination: Bogdan Ionescu (University Politehnica of Bucharest), Mauricio Villegas (SearchInk), Henning Müller (HES-SO) Lab website: http://www.imageclef.org/2018/ Twitter: @imageclef
PIR-CLEF - Evaluation of Personalised Information Retrieval The primary aim of the PIR-CLEF 2018 laboratory is: 1) to facilitate comparative evaluation of PIR by offering participating research groups a mechanism for evaluation of their personalisation algorithms; 2) to give the participating groups the means to formally define and evaluate their own and novel user profiling approaches for PIR. - Task 1 - Personalized Search: we will provide a bag-of-words profile gathered during the query sessions performed by real searchers, the set of queries formulated by each user, together with the corresponding document relevance, and the search logs of each user. Task participants will be expected to compute search results obtained by applying their personalization algorithms to these queries. The search will be carried out on the ClueWeb12 collection, by using the API provided by DCU. - Task 2 - User Profile Models: participants will be required to develop their own user profile models using the information gathered about the real user during her interactions with the system. The same information has been used for creating the baseline (keyword-based user profiles), which is provided in the benchmark. Lab Coordination: Gabriella Pasi (University of Milano Bicocca), Gareth J. F. Jones (Dublin City University), Stefania Marrara (Consorzio C2T), Debasis Ganguly (IBM Research Dublin), Procheta Sen (Dublin City University), Camilla Sanvitto (University of Milano Bicocca) Lab website: http://www.ir.disco.unimib.it/pir-clef2018/ Twitter: @clef2018_pir
eRISK - Early Risk Prediction on the Internet eRisk explores the evaluation methodology, effectiveness metrics and practical applications (particularly those related to health and safety) of early risk detection on the Internet. - Task 1 - Early Detection of Signs of Depression: the challenge consists of sequentially processing pieces of evidence (Social Media entries) and detecting early traces of depression as soon as possible. - Task 2 - Early Detection of Signs of Anorexia: the challenge consists of sequentially processing pieces of evidence (Social Media entries) and detecting early traces of anorexia as soon as possible. Both tasks are mainly concerned with evaluating Text Mining solutions and, thus, we concentrate on texts written in Social Media. Texts should be processed in the order they were posted. In this way, systems that effectively perform this task could be applied to sequentially monitor user interactions in blogs, social networks, or other types of online media. Lab Coordination: David E. Losada (University of Santiago de Compostela), Fabio Crestani (University of Lugano), Javier Parapar (University of A Coruña) Lab website: http://early.irlab.org/ Twitter: @earlyrisk
DynSe - Dynamic Search for Complex Tasks The primary aim of the CLEF Dynamic Search Lab is to develop algorithms which interact dynamically with the user (or other algorithms) towards solving a task, and evaluation methodologies to quantify their effectiveness. The lab is organized along two tasks: - Task 1 - Query Suggestion: given a verbose topic description, participants will generate and submit a sequence of queries and a ranking of the collection for each query. Queries will be evaluated on their effectiveness (query agent) and/or resemblance to user queries (user simulation). Query suggestion will be performed iteratively. - Task 2 - Result Composition: given the results obtained from the aforementioned queries, produce a single ranked list by merging the individual rankings. Lab Coordination: Evangelos Kanoulas (University of Amsterdam), Leif Azzopardi (University of Strathclyde) Lab website: https://ekanou.github.io/dynamicsearch/ Twitter: @clef_dynamic
CheckThat! - Automatic Identification and Verification of Political Claims CheckThat! aims to foster the development of technology capable of both spotting and verifying check-worthy claims in political debates in English and Arabic. - Task 1 - Check-Worthiness: Given a political debate, which is segmented into sentences with speakers annotated, identify which statements (claims) should be prioritized for fact-checking. This will be a ranking problem, and systems will be asked to produce a score, according to which the ranking will be performed. - Task 2 - Factuality: Given a list of already-extracted claims, classify them with factuality labels (e.g., true, half-true, false). This task will be run in an open mode. We will not provide any pre-selected set of documents to support the veracity labels. Participants will be free to use whatever resources they have and the Web in general, with the exception of the websites used by the organizers to collect the data. Lab Coordination: Preslav Nakov, Lluís Màrquez, Alberto Barrón-Cedeño (Qatar Computing Research Institute), Wajdi Zaghouani (Carnegie Mellon University Qatar), Tamer Elsayed, Reem Suwaileh (Qatar University), Pepa Gencheva (Sofia University) Lab website:http://alt.qcri.org/clef2018-factcheck/ Twitter: @_checkthat_
(2018-09-11) 21st International Conference on TEXT, SPEECH and DIALOGUE (TSD 2018), Brno, Czech Republic
TSD 2018 - SECOND CALL FOR PAPERS *********************************************************
Twenty-first International Conference on TEXT, SPEECH and DIALOGUE (TSD 2018) Brno, Czech Republic, 11-14 September 2018 http://www.tsdconference.org/
The conference is organized by the Faculty of Informatics, Masaryk University, Brno, and the Faculty of Applied Sciences, University of West Bohemia, Pilsen. The conference is supported by the International Speech Communication Association.
Venue: Brno, Czech Republic
THE MAIN SUBMISSION DEADLINE:
March 22 2018 ............ Submission of full papers
Submission of abstracts serves for better organization of the review process only - for the actual review, a full paper submission is necessary.
KEYNOTE SPEAKERS
Kenneth Church, Baidu, USA Piek Vossen, Vrije Universiteit Amsterdam, The Netherlands
TSD SERIES
The TSD series has evolved into a prime forum for interaction between researchers in both spoken and written language processing from all over the world. Proceedings of TSD form a book published by Springer-Verlag in their Lecture Notes in Artificial Intelligence (LNAI) series. TSD Proceedings are regularly indexed in the Web of Science by Thomson Reuters and in Scopus. Moreover, the LNAI series is listed in all major citation databases such as DBLP, EI, INSPEC or COMPENDEX.
CALL for SATELLITE WORKSHOP PROPOSALS
The TSD 2018 conference will be accompanied by one-day satellite workshops or project meetings with organizational support from the TSD organizing committee. The organizing committee can arrange a meeting room at the conference venue and prepare workshop proceedings as a book with an ISBN from a local publisher. Workshop papers that also pass the standard TSD review process will appear in the Springer proceedings. Each workshop is subject to a proposal, which should be sent to the contact e-mail tsd2018@tsdconference.org ahead of the respective deadline.
TOPICS
Topics of the conference will include (but are not limited to):
Corpora and Language Resources (monolingual, multilingual, text and spoken corpora, large web corpora, disambiguation, specialized lexicons, dictionaries)
Speech Recognition (multilingual, continuous, emotional speech, handicapped speaker, out-of-vocabulary words, alternative way of feature extraction, new models for acoustic and language modelling)
Tagging, Classification and Parsing of Text and Speech (morphological and syntactic analysis, synthesis and disambiguation, multilingual processing, sentiment analysis, credibility analysis, automatic text labeling, summarization, authorship attribution)
Speech and Spoken Language Generation (multilingual, high fidelity speech synthesis, computer singing)
Semantic Processing of Text and Speech (information extraction, information retrieval, data mining, semantic web, knowledge representation, inference, ontologies, sense disambiguation, plagiarism detection)
Integrating Applications of Text and Speech Processing (machine translation, natural language understanding, question-answering strategies, assistive technologies)
Automatic Dialogue Systems (self-learning, multilingual, question-answering systems, dialogue strategies, prosody in dialogues)
Multimodal Techniques and Modelling (video processing, facial animation, visual speech synthesis, user modelling, emotions and personality modelling)
Papers on processing of languages other than English are strongly encouraged.
PROGRAM COMMITTEE
Elmar Noeth, Germany (general chair) Rodrigo Agerri, Spain Eneko Agirre, Spain Vladimir Benko, Slovakia Paul Cook, Australia Jan Cernocky, Czech Republic Simon Dobrisek, Slovenia Kamil Ekstein, Czech Republic Karina Evgrafova, Russia Yevhen Fedorov, Ukraine Volker Fischer, Germany Darja Fiser, Slovenia Eleni Galiotou, Greece Björn Gambäck, Norway Radovan Garabik, Slovakia Alexander Gelbukh, Mexico Louise Guthrie, USA Tino Haderlein, Germany Jan Hajic, Czech Republic Eva Hajicova, Czech Republic Yannis Haralambous, France Hynek Hermansky, USA Jaroslava Hlavacova, Czech Republic Ales Horak, Czech Republic Eduard Hovy, USA Maria Khokhlova, Russia Aidar Khusainov, Russia Daniil Kocharov, Russia Miloslav Konopik, Czech Republic Ivan Kopecek, Czech Republic Valia Kordoni, Germany Evgeny Kotelnikov, Russia Pavel Kral, Czech Republic Siegfried Kunzmann, Germany Nikola Ljubešić, Croatia Natalija Loukachevitch, Russia Bernardo Magnini, Italy Oleksandr Marchenko, Ukraine Vaclav Matousek, Czech Republic France Mihelic, Slovenia Roman Moucek, Czech Republic Agnieszka Mykowiecka, Poland Hermann Ney, Germany Karel Oliva, Czech Republic Juan Rafael Orozco-Arroyave, Colombia Karel Pala, Czech Republic Nikola Pavesic, Slovenia Maciej Piasecki, Poland Josef Psutka, Czech Republic James Pustejovsky, USA German Rigau, Spain Marko Robnik Šikonja, Slovenia Leon Rothkrantz, The Netherlands Anna Rumshisky, USA Milan Rusko, Slovakia Pavel Rychly, Czech Republic Mykola Sazhok, Ukraine Pavel Skrelin, Russia Pavel Smrz, Czech Republic Petr Sojka, Czech Republic Stefan Steidl, Germany Georg Stemmer, Germany Vitomir Štruc, Slovenia Marko Tadic, Croatia Tamas Varadi, Hungary Zygmunt Vetulani, Poland Aleksander Wawer, Poland Pascal Wiggers, The Netherlands Yorick Wilks, United Kingdom Marcin Wolinski, Poland Alina Wróblewska, Poland Victor Zakharov, Russia Jerneja Žganec Gros, Slovenia
FORMAT OF THE CONFERENCE
The conference program will include presentation of invited papers, oral presentations, and poster/demonstration sessions. Papers will be presented in plenary or topic oriented sessions.
The Best Paper and Best Student Paper Awards will be selected by the Programme Committee and supported with a total prize of EUR 1000 from Springer.
Social events including a trip in the vicinity of Brno will allow for additional informal interactions.
The registration fee is the same as in 2016:
Student: Early payment (by May 31) - 10,000 CZK (approx. EUR 395) Full participant: Early payment (by May 31) - 12,000 CZK (approx. EUR 475)
The fee is an 'all in one' package, keeping conditions equal for all participants.
SUBMISSION OF PAPERS
Authors are invited to submit a full paper not exceeding 8 pages formatted in the LNCS style (see below). Those accepted will be presented either orally or as posters. The decision about the presentation format will be based on the recommendation of the reviewers. The authors are asked to submit their papers using the on-line form accessible from the conference website.
Papers submitted to TSD 2018 must not be under review by any other conference or publication during the TSD review cycle, and must not be previously published or accepted for publication elsewhere.
As reviewing will be blind, the paper should not include the authors' names and affiliations. Furthermore, self-references that reveal the author's identity, e.g., 'We previously showed (Smith, 1991) ...', should be avoided. Instead, use citations such as 'Smith previously showed (Smith, 1991) ...'. Papers that do not conform to the requirements above may be rejected without review.
The authors are strongly encouraged to write their papers in TeX or LaTeX format. This format is required for the final versions of the papers that will be published in the Springer Lecture Notes. Authors using WORD-compatible software for the final version must use the LNCS template for WORD and, during the submission process, ask the Proceedings Editors to convert the paper to LaTeX. For this service, a service-and-license fee of CZK 2000 will be levied automatically.
Papers submitted for review must be in PDF format with all required fonts embedded. Upon notification of acceptance, presenters will receive further information on submitting their camera-ready versions and electronic sources (for detailed instructions on the final paper format see http://www.springer.com/gp/computer-science/lncs/conference-proceedings-guidelines, Sample File typeinst.zip).
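For orientation, a minimal LaTeX source skeleton in the LNCS style might look as follows. This is only a sketch assuming the llncs document class from the Springer author kit; the guidelines and the typeinst sample referenced above remain authoritative, and the title, section and reference entries below are placeholders.

\documentclass{llncs}
\usepackage{graphicx}

\begin{document}

\title{Title of the TSD 2018 Submission}
% keep the review version anonymous; real names and affiliations go only into the camera-ready copy
\author{Anonymous}
\institute{Anonymous Institute}
\maketitle

\begin{abstract}
One-paragraph abstract. The whole paper must not exceed 8 pages in this style.
\end{abstract}

\section{Introduction}
Body text, figures and tables go here.

\begin{thebibliography}{1}
\bibitem{ref1} Placeholder reference entry.
\end{thebibliography}

\end{document}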
Authors are also invited to present current projects, developed software, or other interesting material relevant to the topics of the conference. The presenters of demonstrations should provide an abstract not exceeding one page. The demonstration abstracts will not appear in the conference proceedings.
IMPORTANT DATES
March 15 2018 ............ Submission of abstracts March 22 2018 ............ Submission of full papers May 16 2018 .............. Notification of acceptance May 31 2018 .............. Final papers (camera ready) and registration August 8 2018 ............ Submission of demonstration abstracts August 15 2018 ........... Notification of acceptance for demonstrations sent to the authors September 11-14 2018 ..... Conference date
The submission of abstracts serves only to help organize the review process - for the actual review, a full paper submission is necessary.
The accepted conference contributions will be published in Springer proceedings that will be made available to participants at the time of the conference.
OFFICIAL LANGUAGE
The official language of the conference is English.
ACCOMMODATION
The organizing committee will arrange discounts on accommodation in the 4-star hotel at the conference venue. The current prices of the accommodation are available at the conference website.
ADDRESS
All correspondence regarding the conference should be addressed to Ales Horak, TSD 2018 Faculty of Informatics, Masaryk University Botanicka 68a, 602 00 Brno, Czech Republic phone: +420-5-49 49 18 63 fax: +420-5-49 49 18 20 email: tsd2018@tsdconference.org
Brno is the second largest city in the Czech Republic with a population of almost 400,000 and is the country's judiciary and trade-fair center. Brno is the capital of South Moravia, which is located in the south-east part of the Czech Republic and is known for a wide range of cultural, natural, and technical sights. South Moravia is a traditional wine region. Brno has been a royal city since 1347 and, with its six universities, it forms the cultural center of the region.
Brno can be reached easily by direct flights from London and Munich, and by trains or buses from Prague (200 km) or Vienna (130 km).
For participants with some extra time, nearby places may also be of interest. Local sights include: Brno Castle, now called Spilberk, Veveri Castle, the Old and New City Halls, the Augustine Monastery with St. Thomas Church and the crypt of the Moravian Margraves, the Church of St. James, the Cathedral of St. Peter & Paul, the Carthusian Monastery in Kralovo Pole, and the famous Villa Tugendhat designed by Mies van der Rohe, along with other important buildings of inter-war Czech architecture.
For those willing to venture out of Brno, the Moravian Karst with the Macocha Chasm and Punkva caves, the battlefield of the Battle of the Three Emperors (Napoleon, Tsar Alexander of Russia and Emperor Franz of Austria) at Austerlitz, the Chateau of Slavkov (Austerlitz), Pernstejn Castle, Buchlov Castle, Lednice Chateau, Buchlovice Chateau, Letovice Chateau, Mikulov with one of the largest Jewish cemeteries in Central Europe, Telc - a town on the UNESCO World Heritage list - and many others are all within easy reach.
DISRUPTIVE RESEARCH IN ANTI-SPOOFING FOR AUTOMATIC SPEAKER VERIFICATION
Research in anti-spoofing for automatic speaker verification has advanced significantly in the last five years. While proposed countermeasures are effective in detecting and deflecting spoofing attacks, current solutions lack a solid grounding in the processes involved in the mounting of spoofing attacks. As a result, and with most current solutions relying on the somewhat blind use of relatively standard features and classifiers, many countermeasures fail when they encounter different forms of attack and are unlikely to generalise well to attacks encountered in the wild. This special session, organised as part of MLSP 2018, seeks to break the mould in anti-spoofing research. We invite scientific contributions that explore fundamentally disruptive approaches to anti-spoofing for automatic speaker verification. While contributions which use existing standard/common databases are welcome, their use is not required. Preference will instead be given to contributions that explore under-researched aspects of spoofing and non-standard, emerging or blue-sky countermeasure technologies, especially those with an emphasis on previously-unexplored signal processing and machine learning approaches which either shed new light on spoofing or expose promising new research directions for future exploration. Both technological and methodological contributions are welcome.
Example topics include but are by no means limited to the following:
- theoretical bounds of spoofing attack detectability
- cross-domain feature learning for robust spoofing attack detection
- generative adversarial networks and threats to biometric technology
- one-class, semi-supervised, or reinforcement learning approaches to spoofing countermeasures
- new regularisation and optimisation methods to improve cross-dataset generality
- generation and detection of inaudible, imperceptible or other novel spoofing attacks
- novel hardware/sensor and knowledge-based spoofing countermeasures
(2018-09-17) IEEE International Workshop on MACHINE LEARNING FOR SIGNAL PROCESSING, Aalborg, Denmark
MLSP2018 IEEE International Workshop on MACHINE LEARNING FOR SIGNAL PROCESSING September 17-20, 2018 Aalborg, Denmark MLSP2018.CONWIZ.DK
CALL FOR PAPERS
The 28th MLSP workshop in the series of workshops organized by the IEEE Signal Processing Society MLSP Technical Committee will present the most recent and exciting advances in machine learning for signal processing through keynote talks, tutorials, as well as special and regular single-track sessions. Prospective authors are invited to submit papers on relevant algorithms and applications including, but not limited to: - Learning theory and modeling - Neural networks and deep learning - Bayesian learning and modeling - Sequential learning; sequential decision methods - Information-theoretic learning - Graphical and kernel models - Bounds on performance - Source separation and independent component analysis - Signal detection, pattern recognition and classification - Tensor and structured matrix methods - Machine learning for big data - Large scale learning - Dictionary learning, subspace and manifold learning - Semi-supervised and unsupervised learning - Active and reinforcement learning - Learning from multimodal data - Resource efficient machine learning - Cognitive information processing - Bioinformatics applications - Biomedical applications and neural engineering - Speech and audio processing applications - Image and video processing applications - Intelligent multimedia and web processing - Communications applications - Other applications including social networks, games, smart grid, security and privacy
DATA ANALYSIS AND SIGNAL PROCESSING COMPETITION
MLSP2018 seeks proposals for a Data Analysis and Signal Processing Competition. The goal of the competition is to advance the current state of the art in theoretical and practical aspects of signal processing domains.
SPECIAL SESSIONS
Special Sessions will be included to address research in emerging or interdisciplinary areas of particular interest, not covered already by traditional MLSP sessions.
BEST STUDENT PAPER AWARD
The MLSP Best Student Paper Award will be granted to the best paper for which a student is the principal author and presenter.
NETWORKING
MLSP Networking will be organized as a new initiative to focus on stimulating collaboration among participants to solve grand societal challenges using machine learning and signal processing.
PAPER SUBMISSION
Prospective authors are invited to submit a double-column paper of up to six pages using the electronic submission procedure at http://mlsp2018.conwiz.dk. Accepted papers will be published on a password-protected website that will be available during the workshop. The presented papers will be published in and indexed by IEEE Xplore.
IMPORTANT DATES AND DEADLINES: Paper submission deadline May 1, 2018 Paper update deadline May 4, 2018 Review notification June 18, 2018 Rebuttal period June 18-24, 2018 Reviewer discussion period June 25-30, 2018 Decision notification July 6, 2018 Camera-ready papers and Author advance registration July 31, 2018 ORGANIZING COMMITTEE: General Chair: Zheng-Hua Tan, Aalborg University, Denmark Program Chairs: Nelly Pustelnik, ENS Lyon, France Zhanyu Ma, Beijing University of Posts and Telecommunications, China Finance Chair: Børge Lindberg, Aalborg University, Denmark Data Competition Chairs: Karim Seghouane, University of Melbourne, Australia Yuejie Chi, Ohio State University, USA Publicity and Social Media Chairs: Marc Van Hulle, KU Leuven, Belgium Jen-Tzung Chien, National Chiao Tung University, Taiwan Web and Publication Chair: Jan Larsen, Technical University of Denmark, Denmark Advisory Committee: Søren Holdt Jensen, Aalborg University, Denmark Theodoridis Sergios, University of Athens, Greece Raviv Raich, Oregon State University, USA Vince Calhoun, University of New Mexico, USA
(2018-09-18) 20th International Conference on Speech and Computer (SPECOM), Leipzig, Germany (updated)
SPECOM-2018 - CALL FOR PAPERS *********************************************************
20th International Conference on Speech and Computer (SPECOM-2018) Venue: Leipzig, Germany, September 18-22, 2018 Web: www.specom2018.org
ORGANIZERS The conference is organized by Leipzig University of Telecommunications (HfTL, Leipzig, Germany) in cooperation with St. Petersburg Institute for Informatics and Automation of the Russian Academy of Science (SPIIRAS, St. Petersburg, Russia) and Moscow State Linguistic University (MSLU, Moscow, Russia).
SPECOM-2018 CO-CHAIRS Oliver Jokisch, Leipzig University of Telecommunications, Germany Alexey Karpov, SPIIRAS, St. Petersburg, Russia Rodmonga Potapova, MSLU, Moscow, Russia
CONFERENCE TOPICS The SPECOM conference is devoted to issues of speech technology, human-machine interaction, machine learning and signal processing, particularly: Affective computing Applications for human-computer interaction Audio-visual speech processing Automatic language identification Corpus linguistics and linguistic processing Forensic speech investigations and security systems Multichannel signal processing Multimedia processing Multimodal analysis and synthesis Signal processing and feature extraction Speaker identification and diarization Speaker verification systems Speech and language resources Speech analytics and audio mining Speech dereverberation Speech driving systems in robotics Speech enhancement Speech perception and speech disorders Speech recognition and understanding Speech translation automatic systems Spoken dialogue systems Spoken language processing Text-to-speech and Speech-to-text systems Virtual and augmented reality
SPECIAL SESSIONS Positioning and Power Relations in Conversations: www.specom2018.org/satellites/session1 Advanced Cognitive Models for Human-Machine and Human-Robot Interaction: www.specom2018.org/satellites/session2 Big Data in Speech Computation: www.specom2018.org/satellites/session3
SATELLITE EVENT 3rd International Conference on Interactive Collaborative Robotics ICR-2018: http://specom.nw.ru/icr2018
INVITED SPEAKERS Tanja Schultz - Advances in Biosignal-Based Spoken Communication Sebastian Moller - Quality Engineering of Speech and Language Services Dongheui Lee - Robot learning through Physical Interaction and Human Guidance www.specom2018.org/invited-speakers
OFFICIAL LANGUAGE The official language of the event is English. However, papers on processing of languages other than English are strongly encouraged.
FORMAT OF THE CONFERENCE The conference program will include presentation of invited papers, oral presentations, and poster/demonstration sessions.
SUBMISSION OF PAPERS Authors are invited to submit a full paper not exceeding 10 pages formatted in the LNCS style. Those accepted will be presented either orally or as posters. The decision on the presentation format will be based upon the recommendation of several independent reviewers. The authors are asked to submit their papers using the on-line submission system: https://easychair.org/conferences/?conf=specom2018 Papers submitted to SPECOM-2018 must not be under review by any other conference or publication during the SPECOM review cycle, and must not be previously published or accepted for publication elsewhere.
PROCEEDINGS SPECOM Proceedings will be published by Springer as a book in the Lecture Notes in Artificial Intelligence (LNAI/LNCS) series listed in all major citation databases such as Web of Science, Scopus, DBLP, etc. SPECOM Proceedings are included in the list of forthcoming proceedings for September 2018.
IMPORTANT DATES April 15, 2018 ............ Submission of full papers May 30, 2018 ............ Notification of acceptance June 15, 2018 ............ Final papers (camera ready) and early registration Sept. 18-22, 2018 ......... Conference dates
VENUE The conference will be organized at the Leipzig University of Telecommunications.
CONTACTS All correspondence regarding the conference should be addressed to: SPECOM-2018 Secretariat: E-mails: specom@iias.spb.su; jokisch@hft-leipzig.de SPECOM-2018 web-site: http://www.specom2018.org; http://specom.nw.ru
(2018-09-27) LAUGHTER WORKSHOP 2018, Sorbonne, Paris, France
LAUGHTER WORKSHOP 2018
Following the previous workshops on laughter held in Saarbruecken (2007), Berlin (2009), Dublin (2012) and Enschede (2015), we have the pleasure to announce a forthcoming workshop in Paris, France in September 2018.
Non-verbal vocalisations in human-human and human-machine interactions play important roles in displaying social and affective behaviors and in controlling the flow of interaction. Laughter, sighs, filled pauses, and short utterances such as feedback responses are among some of the non-verbal vocalisations that have been studied previously from various research fields. However, much is still unknown about the phonetic or visual characteristics of non-verbal vocalisations (production/encoding) and their relations to their intentions and perceived meanings (perception/decoding) in interaction.
The goal of this workshop is to bring together scientists from diverse research areas and to provide an exchange forum for interdisciplinary discussions in order to gain a better understanding of laughter and other non-verbal vocalisations. The workshop will consist of invited talks and oral presentations of ongoing research and discussion papers.
We invite contributions concerning laughter and other non-verbal vocalisations from the fields of phonetics, linguistics, psychology, conversation analysis, social signal processing, and human-machine/robot interaction. In particular, topics related to the following aspects are very much welcomed:
* Multimodal interaction: visual aspects of non-verbal vocalisations, e.g., smiles, relation between non-verbal vocalisations and visual behaviors * Social and affective behavior: decoding and encoding of emotion/socio-related states in non-verbal vocalisations * Conversation: (pragmatic) role of non-verbal vocalisations in dialog * Computation: automatic analysis and generation of non-verbal vocalisations
Submission procedure
Researchers are invited to submit an extended abstract of their work, including work in progress. Please send your extended abstract of max. 4 pages, 11pt font (including references) in PDF format to laughterworkshop2018@isir.upmc.fr. Each submission should follow the ACL style; the author kits (LaTeX and Word) can be downloaded from the workshop web site. In the email, please include the names of the authors, their affiliations and the email address of the corresponding author, and the title of the abstract. Abstracts will undergo a review process performed by at least 2 reviewers. The submissions will be made available online.
Registration
Attendees are asked to register by sending an email to laughterworkshop2018 at isir dot upmc dot fr.
Important dates
* Abstract submission deadline: 26 May 2018 * Notification acceptance/rejection: 29 June 2018 * Registration deadline by email: 14 September 2018 * Workshop dates: 27-28 September 2018
Catherine Pelachaud, CNRS - ISIR, Sorbonne University
Jonathan Ginzburg, University Paris Diderot
Jürgen Trouvain, Computational Linguistics and Phonetics, Saarland University Nick Campbell, School of Linguistic, Speech and Communication Sciences, Trinity College Dublin Khiet Truong, Human Media Interaction, University of Twente/Radboud University Dirk Heylen, Human Media Interaction, University of Twente
The 4th Experimental and Theoretical Advances in Prosody (ETAP4) conference will be held from October 11-13, 2018, at the University of Massachusetts Amherst in Amherst, Massachusetts. This conference focuses on questions about the production, interpretation, and characterization of speech prosody, bringing together researchers in linguistics, psychology, and computer science.
The theme of ETAP4 is 'Sociolectal and dialectal variability in prosody'. As in many fields of language research, studies of prosody have focused on majority languages and dialects and on speakers who hold power in social structures. The goal of ETAP4 is to help diversify prosody research in terms of the languages and dialects being investigated, as well as the social structures that influence prosodic variation. The conference will bring together prosody researchers and researchers exploring the role of sociological variation in prosody, with a focus on understudied dialects and endangered languages, and individual differences based on gender and sexuality. Invited speakers will (i) raise the questions and areas they think would benefit from prosodic research, (ii) teach prosody researchers what they need to know to do research in these areas, and (iii) share insights from their experience engaging with the public around issues of understudied and endangered languages, linguistic bias, and intersectionality in science.
A satellite workshop on African-American English prosody will be held on October 10, 2018 to bring together participants to contribute common data sets and discuss the development of shared data resources and methodological considerations such as challenges in prosodic transcription. For updates on this workshop, subscribe to the e-mail list here: https://list.umass.edu/mailman/listinfo/etap4-aae/
We invite submission of abstracts describing work related to the conference theme as well as topics in prosody more generally from diverse approaches, including fieldwork, experiments, computational modeling, theoretical analyses, etc. These topics include:
- Phonology and phonetics of prosody - Cognitive processing and modelling of prosody - Tone and intonation - Acquisition of prosody - Interfaces with syntax, semantics, pragmatics - Prosody in natural language processing
In addition to the invited talks, there will be contributed talks and two poster sessions.
Abstracts for talks and posters must be submitted in PDF format. Your abstract must include the submission's title at the top, and must not include authors' names and affiliations, or any other identifying information (e.g., 'In Liberman & Pierrehumbert (1984), we showed...'). Abstracts should be submitted in letter format (8.5" x 11", not A4), with 1-inch margins on all sides, and in Arial 11 point font. The abstract itself (text) may be no longer than one page; a second page containing additional figures, tables, other graphics and/or references may be included.
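For illustration only, one way to meet these layout requirements in LaTeX is sketched below, assuming compilation with XeLaTeX or LuaLaTeX and an installed Arial system font; ETAP4 does not prescribe a template, so any other tool that produces the required letter-size, 1-inch-margin, Arial 11 pt layout is equally acceptable.

\documentclass[11pt]{article}
\usepackage[letterpaper,margin=1in]{geometry} % 8.5 x 11 inch paper with 1-inch margins
\usepackage{fontspec}
\setmainfont{Arial} % requires XeLaTeX or LuaLaTeX and the Arial font

\begin{document}

\begin{center}
  \textbf{Title of the Abstract} % no author names, affiliations or other identifying information
\end{center}

One page of abstract text; an optional second page may contain additional figures, tables, other graphics and/or references.

\end{document}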
(2018-10-15) 6th INTERNATIONAL CONFERENCE ON STATISTICAL LANGUAGE AND SPEECH PROCESSING, Mons, Belgique
6th INTERNATIONAL CONFERENCE ON STATISTICAL LANGUAGE AND SPEECH PROCESSING
SLSP 2018 Mons, Belgium October 15-17, 2018
Co-organized by: NUMEDIART Institute, University of Mons; LANGUAGE Institute, University of Mons; Institute for Research Development, Training and Advice (IRDTA), Brussels/London
http://slsp2018.irdta.eu/
**********************************************************************************
AIMS: SLSP is a yearly conference series aimed at promoting and displaying excellent research on the wide spectrum of statistical methods that are currently in use in computational language or speech processing. It aims at attracting contributions from both fields. Though there exist large, well-known conferences and workshops hosting contributions to any of these areas, SLSP is a more focused meeting where synergies between subdomains and people will hopefully happen. In SLSP 2018, significant room will be reserved for young scholars at the beginning of their career and particular focus will be put on methodology.
VENUE: SLSP 2018 will take place in Mons, which was European Capital of Culture in 2015. The venue will be: University of Mons, 31 Bvd Dolez, 7000 Mons, Belgium
SCOPE: The conference invites submissions discussing the employment of statistical models (including machine learning) within language and speech processing. Topics of either theoretical or applied interest include, but are not limited to: anaphora and coreference resolution; authorship identification, plagiarism and spam filtering; computer-aided translation; corpora and language resources; data mining and semantic web; information extraction; information retrieval; knowledge representation and ontologies; lexicons and dictionaries; machine translation; multimodal technologies; natural language understanding; neural representation of speech and language; opinion mining and sentiment analysis; parsing; part-of-speech tagging; question-answering systems; semantic role labelling; speaker identification and verification; speech and language generation; speech recognition; speech synthesis; speech transcription; spelling correction; spoken dialogue systems; term extraction; text categorisation; text summarisation; user modeling
STRUCTURE: SLSP 2018 will consist of invited talks, peer-reviewed contributions, and posters.
INVITED SPEAKERS: Thomas Hain (University of Sheffield), Crossing Domains in Automatic Speech Recognition; Simon King (University of Edinburgh), Does 'End-to-End' Speech Synthesis Make any Sense?; Isabel Trancoso (Instituto Superior Técnico, Lisbon), Analysing Speech for Clinical Applications
PROGRAMME COMMITTEE: Steven Abney (University of Michigan, US) Srinivas Bangalore (Interactions LLC, US) Jean-François Bonastre (University of Avignon et Pays du Vaucluse, FR) Pierrette Bouillon (University of Geneva, CH) Nicoletta Calzolari (Italian National Research Council, IT) Erik Cambria (Nanyang Technological University, SG) Kenneth W. Church (Baidu Research, US) Walter Daelemans (University of Antwerp, BE) Thierry Dutoit (University of Mons, BE) Marcello Federico (Bruno Kessler Foundation, IT) Robert Gaizauskas (University of Sheffield, UK) Ralph Grishman (New York University, US) Udo Hahn (University of Jena, DE) Siegfried Handschuh (University of Passau, DE) Mark Hasegawa-Johnson (University of Illinois, Urbana-Champaign, US) Keikichi Hirose (University of Tokyo, JP) Julia Hirschberg (Columbia University, US) Nancy Ide (Vassar College, US) Gareth Jones (Dublin City University, IE) Philipp Koehn (University of Edinburgh, UK) Haizhou Li (National University of Singapore, SG) Carlos Martín-Vide (Rovira i Virgili University, ES, chair) Yuji Matsumoto (Nara Institute of Science and Technology, JP) Alessandro Moschitti (Qatar Computing Research Institute, QA) Hermann Ney (RWTH Aachen University, DE) Jian-Yun Nie (University of Montréal, CA) Elmar Nöth (University of Erlangen-Nuremberg, DE) Cecile Paris (CSIRO Data61, AU) Jong C. Park (Korea Advanced Institute of Science and Technology, KR) Alexandros Potamianos (National Technical University of Athens, GR) Paul Rayson (Lancaster University, UK) Mats Rooth (Cornell University, US) Paolo Rosso (Technical University of Valencia, ES) Alexander Rudnicky (Carnegie Mellon University, US) Tanja Schultz (University of Bremen, DE) Holger Schwenk (Facebook AI Research, FR) Vijay K. Shanker (University of Delaware, US) Richard Sproat (Google Research, US) Tomoki Toda (Nagoya University, JP) Gökhan Tür (Google Research, US) Yorick Wilks (Institute for Human & Machine Cognition, US) Phil Woodland (University of Cambridge, UK) Dekai Wu (Hong Kong University of Science and Technology, HK) Junichi Yamagishi (University of Edinburgh, UK)
ORGANIZING COMMITTEE: Stéphane Dupont (Mons) Thierry Dutoit (Mons, co-chair) Kévin El Haddad (Mons) Kathy Huet (Mons) Sara Morales (Brussels) Manuel J. Parra Royón (Granada) Gueorgui Pironkov (Mons) David Silva (London, co-chair)
SUBMISSIONS: Authors are invited to submit non-anonymized papers in English presenting original and unpublished research. Papers should not exceed 12 single-spaced pages (all included) and should be prepared according to the standard format for Springer Verlag's LNCS series (see http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0). Submissions have to be uploaded to: https://easychair.org/conferences/?conf=slsp2018
PUBLICATIONS: A volume of proceedings published by Springer in the LNCS/LNAI series will be available by the time of the conference. A special issue of Computer Speech and Language (Elsevier, JCR 2016 impact factor: 1.900) will later be published containing peer-reviewed, substantially extended versions of some of the papers contributed to the conference. Submissions to it will be by invitation.
REGISTRATION: The registration form can be found at: http://slsp2018.irdta.eu/Registration.php
DEADLINES (all at 23:59 CET): Paper submission: May 27, 2018; Notification of paper acceptance or rejection: July 3, 2018; Final version of the paper for the LNCS/LNAI proceedings: July 13, 2018; Early registration: July 13, 2018; Late registration: October 1, 2018; Submission to the journal special issue: January 17, 2019
QUESTIONS AND FURTHER INFORMATION: david@irdta.eu
ACKNOWLEDGMENTS: Université de Mons; Institute for Research Development, Training and Advice (IRDTA), Brussels/London
The 20th International Conference on Multimodal Interaction (ICMI 2018) will be held in Boulder, Colorado. ICMI is the premier international forum for multidisciplinary research on multimodal human-human and human-computer interaction, interfaces, and system development. The conference focuses on theoretical and empirical foundations, component technologies, and combined multimodal processing techniques that define the field of multimodal interaction analysis, interface design, and system development.
ICMI 2018 is pleased to announce that six workshops have been confirmed and will run immediately prior to the main conference on October 16th, 2018. Please consider submitting your latest work to these exciting emerging venues.
***********
3rd International Workshop on Multi-sensorial Approaches to Human-Food Interaction (MHFI 2018)
Abstract
There is a growing interest in the context of Human-Food Interaction in capitalizing on multisensory interactions in order to enhance our food- and drink-related experiences. This, perhaps, should not come as a surprise, given that flavour, for example, is the product of the integration of at least gustatory and (retronasal) olfactory inputs, and can be influenced by all our senses. Variables such as food/drink colour, shape, texture, sound, and so on can all influence our perception and enjoyment of our eating and drinking experiences, something that new technologies can capitalize on in order to 'hack' food experiences.
In this 3rd workshop on Multi-Sensorial Approaches to Human-Food Interaction, we again call for investigations and applications of systems that create new, or enhance already existing, eating and drinking experiences ('hacking' food experiences) in the context of Human-Food Interaction. Moreover, we are interested in work that is based on the principles that govern the systematic connections that exist between the senses. Human-Food Interaction also involves experiencing food interactions digitally in remote locations. Therefore, we are also interested in sensing and actuation interfaces, new communication media, and technologies for persisting and retrieving human-food interactions. Enhancing social interactions to augment the eating experience is another issue we would like to see addressed in this workshop.
The Group Interaction Frontiers in Technology (GIFT) workshop aims to bring together researchers from diverse fields related to group interaction, team dynamics, people analytics, multi-modal speech and language processing, social psychology, and organizational behaviour. The workshop will provide a unique opportunity for researchers to share their knowledge and gain insights outside their respective fields, and will hopefully lead to interdisciplinary networking and fruitful collaboration.
Modeling Cognitive Processes from Multimodal Data (MCPMD)
Abstract
Multimodal signals allow us to gain insights about the internal cognitive processes of a person. For example, speech and gesture analysis yields cues about hesitations, knowledgeability, or alertness; eye tracking yields information about a person's focus of attention, task, or cognitive state; and EEG yields information about a person's cognitive load or information appraisal. Capturing cognitive processes is an important research tool to understand human behavior as well as a crucial part of a user model for an adaptive interactive system such as a robot or a tutoring system. As cognitive processes are often multifaceted, a comprehensive model requires the combination of multiple complementary signals.
Human-Habitat for Health (H3): Human-habitat multimodal interaction for promoting health and well-being in the Internet of Things era
Abstract
In the Internet of Things (IoT) era, digital human interaction with the habitat environment can be perceived as the continuous interconnection and exchange of cognitive, social, and affective signals between an individual or a group, and any type of environment built for humans (e.g., home, work, clinic). Through the integration of various interconnected devices (e.g., built-in microphones of home devices, acceleration, GPS, and physiological sensors embedded in smartphones or wearable devices, proximity sensors installed in smart objects), we can collect multimodal data including speech, spoken content, physiological, psychophysiological, and environmental signals that enable the sensing of a person's activity, mood, emotions, preferences, and/or health state, and ultimately provide appropriate feedback. Applications of these include artificial conversational agents (e.g., Amazon Alexa, Google Home) that enable voice-powered human computer interaction to provide new information (e.g., nutritional food content, weather forecast) or conduct procedural tasks (e.g., update daily food intake diary, book a flight), in-the-moment automatic habitat adaptation systems that provide comfort and relaxation, and human health and well-being support systems that are able to track the progress of a disease (e.g., depression tracking through linguistic and acoustic markers), detect high-risk episodes (e.g., suicidal tendencies), and ultimately provide feedback (e.g., guide individuals through a brief intervention) or take appropriate action (e.g., call 911). Special focus will be given to the technical considerations and challenges involved in these tasks, ranging from the nature of the acquired data (e.g., noise, lack of structure, issues of multi-sensory integration) to the high variability present in habitat environments (e.g., different lighting conditions, room acoustic characteristics), and the inherent unpredictability and multi-faceted nature of human behavior. The H3 workshop aims to bring together experts from academia and industry spanning a set of multi-disciplinary fields, including computer science, speech and spoken language understanding, construction science, life-sciences, health sciences, and psychology, to discuss their respective views of the problem and identify synergistic and converging solutions.
Leah Stein Duker, Assistant Professor of Research, Occupational Science and Therapy, University of Southern California (lstein@chan.usc.edu)
Amir Behzadan, Associate Professor, Construction Science, Texas A&M University (abehzadan@tamu.edu)
***********
Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction (MA3HMI)
Abstract
One of the aims in building multimodal user interfaces and combining them with technical devices is to make the interaction between user and system as natural as possible, in a situation as natural as possible. The most natural form of interaction can be considered to be how we interact with other humans. Although technology is still far from being human-like, systems can reflect a wide range of technical solutions; they are often represented as artificial agents to facilitate smooth interactions. While the analysis of human-human communication has resulted in many insights, transferring these to human-machine interactions remains challenging, especially if multiple possible interlocutors are present in a certain area. This situation requires that multimodal inputs from the main speaker (e.g., speech, gaze, facial expressions) as well as possible co-speakers are recorded and interpreted. This interpretation has to occur at both the semantic and affective levels, including aspects such as the personality, mood, or intentions of the user, anticipating the counterpart. Ideally, these processes have to be performed in real-time in order for the system to respond without delays, in a natural environment. Therefore, the MA3HMI workshop aims at bringing together researchers working on the analysis of multimodal data as a means to develop technical devices that can interact with humans. In particular, artificial agents can be regarded in their broadest sense, including virtual chat agents, empathic speech interfaces and life-style coaches on a smart-phone. We focus on the environment and situation in which an interaction takes place, extending the investigations on real-time aspects of human-machine interaction. We address the synergy of situation, context, and interaction history in the development and evaluation of multimodal, real-time systems.
Ronald Böck - Otto von Guericke University Magdeburg, Germany
Francesca Bonin - IBM Research, Ireland
Nick Campbell - Trinity College Dublin, Ireland
Ronald Poppe - Utrecht University, The Netherlands
***********
Cognitive Architectures for Situated Multimodal Human Robot Language Interaction
Abstract
In many application fields of human robot interaction, robots need to adapt to changing contexts and thus be able to learn tasks from non-expert humans through verbal and non-verbal interaction. Inspired by human cognition, we are interested in various aspects of learning, including multimodal representations, mechanisms for the acquisition of concepts (words, objects, actions), memory structures etc., up to full models of socially guided, situated, multimodal language interaction. These models can then be used to test theories of human situated multimodal interaction, as well as to inform computational models in this area of research. In the Workshop on Cognitive Architectures for Situated Multimodal Human Robot Language Interaction, we focus on robot action and object learning from multimodal-interaction with a human tutor. Inspired by human cognition, the research interests of this workshop tackle different aspects of robot learning, such as (i) the kind of data used to develop socially guided models of language acquisition, (ii) the collection and preprocessing of empirical data to develop cognitively inspired models of language acquisition, (iii) the multimodal complexity of human interaction, (iv) multimodal models of language learning, and (v) adequate machine learning approaches to handle these high dimensional data. The workshop aims at bringing together linguists, computer scientists, cognitive scientists, and psychologists with a particular focus on embodied models of situated natural language interaction and the challenges will be discussed under a multidisciplinary perspective.
(2018-10-16) 4th International Workshop on Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction, Boulder, Colorado, USA
4th International Workshop on
Multimodal Analyses enabling Artificial Agents in Human-Machine Interaction
(MA3HMI 2018)
October 16th, 2018 in Boulder, USA.
In conjunction with ICMI2018.
http://MA3HMI.cogsy.de
Scope:
One of the aims in building multimodal user interfaces and combining them with technical devices is to make the interaction between user and system as natural as possible. The most natural form of interaction may be how we interact with other humans. Although technology is still far from human-like, systems can reflect a wide range of technical solutions; they are often represented as artificial agents to facilitate smooth interactions. While the analysis of human-human communication has resulted in many insights, transferring these to human-machine interactions remains challenging, especially if multiple possible interlocutors are present in a certain area. This situation requires that multimodal inputs from the main speaker (e.g., speech, gaze, facial expressions) as well as possible co-speakers are recorded and interpreted. This interpretation has to occur at both the semantic and affective levels, including aspects such as the personality, mood, or intentions of the user, anticipating the counterpart. These processes have to be performed in real-time in order for the system to respond without delays, in a natural environment.
The MA3HMI workshop aims at bringing together researchers working on the analysis of multimodal data as a means to develop technical devices that can interact with humans. In particular, artificial agents can be regarded in their broadest sense, including virtual chat agents, empathic speech interfaces and life-style coaches on a smart-phone. More generally, multimodal analyses support any technical system located in the research area of human-machine interaction. For the 2018 edition, we focus on the environment and situation in which an interaction takes place, extending the investigations on real-time aspects of human-machine interaction. We address the synergy of situation, context, and interaction history in the development and evaluation of multimodal, real-time systems.
We solicit papers that concern the different perspectives of such human-machine interaction. Tools and systems that address real-time conversations with artificial agents and technical systems are also within the scope of the workshop.
Topics (but not limited to):
a) Multimodal Environment Analyses
- Multimodal understanding of situation and environment of natural interactions
- Annotation paradigms for user analyses in natural interactions
- Novel strategies of human-machine interaction in terms of situation and environment
b) Multimodal User Analyses
- Multimodal understanding of user behavior and affective state
- Dialogue management using multimodal output
- Multimodal understanding of multiple users' behavior and affective states
- Annotation paradigms for user analyses in natural interactions
- Novel strategies of human-machine interactions
c) Applications, Tools, and Systems
- Novel application domains and embodied interaction
- Prototype development and uptake of technology
- User studies with (partial) functional systems
- Tools for the recording, annotation and analysis of conversations
Important Dates:
Submission Deadline: July 30th, 2018
Notification of Acceptance: September 10th, 2018
Camera-ready Deadline: September 15th, 2018
Workshop Date: October 16th, 2018
Submissions:
Prospective authors are invited to submit full papers (8 pages) and short papers (5 pages) in ACM format as specified by ICMI 2018. Accepted papers will be published as post-proceedings in the ACM Digital Library. All submissions should be anonymous.
The goal of the ICMI Doctoral Consortium is to provide PhD students with an opportunity to present their work to a group of mentors and peers from a diverse set of academic and industrial institutions, to receive feedback on their doctoral research plan and progress, and to build a cohort of young researchers interested in designing and developing multimodal interfaces and interaction. We invite students from all PhD granting institutions who are in the process of forming or carrying out a plan for their PhD research in the area of designing and developing multimodal interfaces. The Consortium will be held on October 16, 2018. We expect to provide economic support to most attendees that will cover part of their costs (travel, registration, meals etc.).
Who should apply?
While we encourage applications from students at any stage of doctoral training, the doctoral consortium will benefit most the students who are in the process of forming or developing their doctoral research. These students will have passed their qualifiers or have completed the majority of their coursework, will be planning or developing their dissertation research, and will not be very close to completing their dissertation research. Students from any PhD granting institution whose research falls within designing and developing multimodal interfaces and interaction are encouraged to apply.
Submission Guidelines
Graduate students pursuing a PhD degree in a field related to designing multimodal interfaces should submit the following materials:
1) Extended Abstract: A four-page description of your PhD research plan and progress in the ACM SigConf format. Your extended abstract should follow the same outline, details, and format of the ICMI short papers. The submissions will not be anonymous. In particular, it should cover:
- The key research questions and motivation of your research,
- Background and related work that informs your research,
- A statement of hypotheses or a description of the scope of the technical problem,
- Your research plan, outlining stages of system development or series of studies,
- The research approach and methodology,
- Your results to date (if any) and a description of remaining work,
- A statement of research contributions to date (if any) and expected contributions of your PhD work.
2) Advisor Letter: A one-page letter of nomination from the student's PhD advisor. This letter is not a letter of support. Instead, it should focus on the student's PhD plan and how the Doctoral Consortium event might contribute to the student's PhD training and research.
3) CV: A two-page curriculum vitae of the student.
All materials should be prepared in PDF format and submitted through the ICMI submission system.
Review Process
The Doctoral Consortium will follow a review process in which submissions will be evaluated by a number of factors including (1) the quality of the submission, (2) the expected benefits of the consortium for the student's PhD research, and (3) the student's contribution to the diversity of topics, backgrounds, and institutions, in order of importance. More particularly, the quality of the submission will be evaluated based on the potential contributions of the research to the field of multimodal interfaces and its impact on the field and beyond. Finally, we hope to achieve a diversity of research topics, disciplinary backgrounds, methodological approaches, and home institutions in this year's Doctoral Consortium cohort. We do not expect more than two students to be invited from each institution to represent a diverse sample. Women and other underrepresented groups are especially encouraged to apply.
Financial Support
We hope to provide most student attendees with partial financial support to cover the costs of attending the Doctoral Consortium and the conference. However, the details on the number of students to be funded and the extent of funding coverage are not yet known, as we are still working on raising funds. More detail on travel support will be announced on the Doctoral Consortium page of the main conference website.
Attendance
All authors of accepted submissions are expected to attend the Doctoral Consortium and the main conference poster session. The attendees will present their PhD work as a short talk at the Consortium and as a poster at the conference poster session. A detailed program for the Consortium and the participation guidelines for the poster session will be available after the camera-ready deadline.
- Presentation format: Talk on consortium day and participation in the conference poster session
- Proceedings: Included in conference proceedings and ACM Digital Library
- Doctoral Consortium Co-chairs: Roland Goecke (U Canberra) and Yelin Kim (SUNY Albany)
Dates
Submission deadline: EXTENDED to June 25th 2018
Notifications: July 20th 2018
Camera-ready deadline: July 31st 2018
Doctoral Consortium Date: October 16th 2018
Questions?
For more information and updates on the ICMI 2018 Doctoral Consortium, visit the Doctoral Consortium page of the main conference website (https://icmi.acm.org/2018/index.php?id=cfdc)
For further questions, contact the Doctoral Consortium co-chairs:
We are calling for participation in the 8th Audio/Visual Emotion Challenge and Workshop (AVEC 2018), an ACM MM Challenge Workshop themed around two topics: bipolar disorder (for the first time in such a challenge) and emotion recognition. Bipolar disorder (BD) is a serious mental health disorder, with patients experiencing either manic or depressive episodes, and those with BD tend to live with it long-term. The purpose of the Audio/Visual Emotion Challenge and Workshop (AVEC) series is to bring together multiple communities from different disciplines, in particular the audio-visual multimedia communities and those in the psychological and social sciences who study expressive behaviour and emotion. The AVEC 2018 challenge theme is Bipolar disorder and Cross-cultural emotion, and it is the eighth competition event aimed at the comparison of multimedia processing and machine learning methods for automatic audio, visual, and audiovisual health and emotion analysis, with all participants competing under strictly the same conditions. It introduces major novelties this year with three separate sub-challenges:
Bipolar Disorder Sub-challenge (BDS) - participants have to classify patients suffering from bipolar disorder into remission, hypo-mania and mania, as defined by the Young Mania Rating Scale, from audio-visual recordings of structured interviews (BD corpus); performance is measured by the unweighted average recall over the three classes.
Cross-cultural Emotion Sub-challenge (CES) - participants have to predict the level of three emotional dimensions (arousal, valence, and likability) time-continuously in a cross-cultural setup (German => Hungarian) from audio-visual recordings of dyadic interactions (SEWA corpus); performance is measured by the concordance correlation coefficient (CCC) averaged over the dimensions.
Gold-standard Emotion Sub-Challenge (GES) - participants have to generate a gold standard (i.e., a single time series of emotion labels) from individual ratings of emotional dimensions (arousal, valence) that will be evaluated by a multimodal (audio, video, physiology) emotion recognition system from recordings of dyadic interactions (RECOLA corpus); performance is measured by the concordance correlation coefficient (CCC) averaged over the dimensions. The standard formulations of both evaluation metrics are sketched below.
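For orientation only, the usual textbook formulations of the two evaluation metrics named above are as follows; the organisers' baseline scripts remain the authoritative definitions for scoring.

% Unweighted average recall over K classes (here K = 3), with R_k the recall of class k:
\[ \mathrm{UAR} = \frac{1}{K} \sum_{k=1}^{K} R_k, \qquad R_k = \frac{\mathrm{TP}_k}{\mathrm{TP}_k + \mathrm{FN}_k} \]

% Concordance correlation coefficient between a predicted trace x and the gold-standard trace y,
% with Pearson correlation rho, means mu and standard deviations sigma:
\[ \mathrm{CCC} = \frac{2\,\rho\,\sigma_x\,\sigma_y}{\sigma_x^2 + \sigma_y^2 + (\mu_x - \mu_y)^2} \]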
In order to participate in the Challenge, please register your team by following the challenge guidelines.
We encourage both contributions aiming at the highest performance with respect to the baselines provided by the organisers and contributions aiming at finding new and interesting insights into these challenges. Besides participation in the challenge, we also encourage submissions of original contributions on the following topics (not limited to):
Multimodal Affect Sensing
Audio-based Health/Emotion Recognition
Video-based Health/Emotion Recognition
Physiological-based Health/Emotion Recognition
Multimodal Representation Learning
Semi-supervised and Unsupervised Learning
Multi-view learning of Multiple Dimensions
Personalised Health/Emotion Recognition
Context in Health/Emotion Recognition
Multiple Rater Ambiguity and Asynchrony
Application
Multimedia Coding and Retrieval
Mobile and Online Applications
Important Dates
Paper submission: June 30, 2018
Notification of acceptance: July 31, 2018
Camera ready paper: August 14, 2018
Workshop: October 22-26, 2018 (to be communicated)
Organisers
Fabien Ringeval, Université Grenoble Alpes, CNRS, France
Björn Schuller, Imperial College London/University of Augsburg, UK/Germany
Michel Valstar, University of Nottingham, UK
Roddy Cowie, Queen's University Belfast, UK
Maja Pantic, Imperial College London/University of Twente, UK/The Netherlands
'Creole Worlds, Creole Languages and Development: Educational, Cultural and Economic Challenges'
28 October 2018 - 3 November 2018, Mahé, Seychelles
The International Committee for Creole Studies (Comité International des Etudes Créoles (CIEC)) has organized International Conferences on Creole Studies for the past fifty years, at regular intervals. In 2018, the XVIth International Conference of Creole Studies will be held in Seychelles; the organization has been entrusted to the University of Seychelles in liaison with the CIEC.
Context
The international community (UNESCO, UNDP, etc.) and the Organisation Internationale de la Francophonie (OIF) support educational linguistic policy and the possible institutionalization of Creole languages in the dozen Creole-speaking countries (France and its Departments, Haiti, Dominica, Mauritius, Saint Lucia, Seychelles, Cape Verde, Guinea-Bissau, Sao Tome and Principe) that are members of the OIF. Creole studies are called upon to contribute decisively to these programs and endeavours.
The importance of Creole studies stems primarily from their contributions to the linguistic, cultural and social development of Creole-speaking societies. Beyond this, the study of the genesis and development of Creole social, linguistic and cultural systems constitutes a remarkable field of study for the human and social sciences, because 'Creole' societies were formed recently (as a rule, they have three to four centuries of existence) and because of how they are composed and evolve.
Presentation
The XVIth International Symposium on Creole Studies will focus on:
'Creole Worlds, Creole Languages, Development: Educational, Cultural and Economic Challenges'.
This theme invites philosophers, historians, anthropologists, economists, sociologists, linguists and other researchers in human and social sciences to present their work on contemporary Creole societies in their historical, linguistic, social, political, economic and cultural evolution.
The focus of the colloquium will be on the following four major themes:
A. Creole languages and education
B. Creole Worlds and their Cultural and Economic Challenges of Development
C. Creole languages in a multilingual environment: description and analysis of the dynamics of Creole languages
D. Creole grammar: typology, variation and teaching
Presentation of the themes of the Conference
A. Creole languages and education
Faced with the challenges of education for all in basic and middle schools, sovereign countries that use a French Creole language have introduced some measure of Creole language teaching in their schools. Some states, such as Seychelles or Haiti, have acquired a vast experience in this domain that should be examined. Mauritius has recently also embarked on this venture, which calls for evaluation. The Creole-speaking Outremer Departments, whose creoles are recognized regional languages of France and which benefit from the texts regulating the teaching of regional languages in France, also have many educational practices to share.
B. Creole Worlds and their Cultural and Economic Challenges of Development
Anthropology and the history of Creole worlds are called upon to account for how the creole-speaking social formations, resulting from European colonial expansion, are facing the challenges of development and globalization.
The role of Creole languages in the development of economy (tourism, reception of migrants, etc.) has to be assessed.
Literary production in the Creole speaking islands of the Caribbean and the Indian Ocean has developed greatly in recent years in French and English as well as in Creole languages. The study of this renewal of literature and cultural practices also forms part of theme B.
The migratory movements of creole speakers (see also topic C) will also be discussed.
What are the paths of the institutionalization of the Creole languages in their respective areas of influence (see the question of Creole language academies)? Creole militant practices may also be mentioned.
C. Creole languages in a multilingual environment: description and analysis of the dynamics of Creole languages.
Recent globalization has caused many displacements of Creole-speaking populations towards more developed economic zones. New Creole-speaking communities have thus been created outside the territories of birth, such as Haitian communities in North America, populations from the Creole-speaking Departments in metropolitan France, Mauritians in Australia and Seychellois in the United Kingdom. Creole-speaking newcomers are found in prosperous Creole-speaking areas, for instance, Haitians in Guyana and elsewhere in the Caribbean. Immigration to Creole-speaking areas also leads to the emergence of neo-learners of Creole languages. Globalization has led to an unprecedented diffusion of Creole languages, including via language and culture industries. These new sociolinguistic situations of diffusion have hardly been described to date. Similarly, little is known about the impact of these migratory movements on the dynamics of Creole languages. To these themes may be added the study of the genesis and evolution of Creole languages.
D. Creole grammar: typology, variation and teaching
The description of Creole language systems (phonology, grammar) remains necessary. The analysis of the variation of Creole languages and of their linguistic systems is still unsatisfactory. This theme should bring together contributions that attempt to analyze and explain phonological, morphological and grammatical systems in a typological perspective.
This theme may also include work on grammar for teaching. Indeed, in Haiti, the Seychelles and Mauritius, as in the French DROMs, questions arise concerning 'grammar models' and the use of linguistic analyses for teacher training and for teaching of Creole languages as first languages.
Questions
Topics that could be addressed, either in the form of individual papers or as workshops (please contact the organizers), include the following:
- 'Creole' diasporas and their linguistic practices
- Creole varieties developed outside the territories of birth
- The linguistic varieties of neo-learners of Creole languages
- The co-presence of Creole and French
- The development of literacy programs in Creole
- Bilingual education programs integrating the Creole language
- Literatures of Creole-speaking countries
- The state of research on Creole language corpora
- Creole development at school
- Morphology, syntax, etc. of Creole languages
- Diachronic studies of Creole languages
- Relations between Creole languages and languages of the slave population (African languages, Malagasy, etc.)
- Creole history, landscape and society
- Creolization and the development of Creole societies
- Philosophy and history of ideas in Creole societies.
Scientific Committee of the XVIth International Conference of the CIEC
Enoch Aboh, Christian Barat, Arnaud Carpooran, Penda Choppy, Guillaume Fon Sing, Renaud Govain, Marie-reine Hoareau, Thom Klingler, Sibylle Kriegel, Ralph Ludwig, Carpanin Marimoutou, Salikoko Mufwene, Joelle Perreau, Laurence Pourchez, Lambert-Félix Prudent, Gillette Staudacher-Valliamee, Albert Valdman, Justin Valentin, Daniel Véronique
Organization and timetable
Papers and workshop proposals may fall under one of the themes of the Conference and/or a cross-cutting theme.
Proposals for papers or workshops (groups of 3-4 papers), written in French, English or any French Creole language, and including the address and institutional affiliation of the author(s), must reach the following e-mail address before 15 January 2018: Ciec.Sez2018@gmail.com.
Abstracts should describe the topic of the paper, the data used and the expected results, and should not exceed 3,000 characters or 500 words (including bibliography). Submit two copies of the proposal: one anonymous (which will be used for the review), the other with the author's name, address and institutional affiliation.
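For authors who want a quick sanity check of their abstract against these limits before submission, a minimal sketch along the following lines may help; the file name and the exact counting conventions are assumptions, not part of the call.

# Minimal sketch: check an abstract against the stated CIEC limits
# (3,000 characters or 500 words, including bibliography).
# 'abstract.txt' and the counting conventions are assumptions.
MAX_CHARS = 3000
MAX_WORDS = 500

def check_abstract(path: str) -> None:
    with open(path, encoding="utf-8") as f:
        text = f.read()
    n_chars = len(text)
    n_words = len(text.split())
    print(f"{n_chars} characters (limit {MAX_CHARS}), {n_words} words (limit {MAX_WORDS})")
    if n_chars > MAX_CHARS or n_words > MAX_WORDS:
        print("Abstract exceeds the stated limits.")

if __name__ == "__main__":
    check_abstract("abstract.txt")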
After evaluation, authors will be notified of the acceptance or rejection of their proposal from 9 April 2018 onwards.
The 11th International Conference on Natural Language Generation (INLG 2018) will be held in Tilburg, The Netherlands, November 5-8, 2018. The conference takes place immediately after EMNLP 2018, organised in nearby Brussels, Belgium.
We invite the submission of long and short papers, as well as system demonstrations, related to all aspects of Natural Language Generation (NLG), including data-to-text, concept-to-text, text-to-text and vision-to-text approaches. Accepted papers will be presented as oral talks or posters.
Important dates
- Deadline for submissions: July 9, 2018
- Notification: September 7, 2018
- Camera ready: October 1, 2018
- INLG 2018: November 5-8, 2018
All deadlines are at 11:59 PM, UTC-8.
Topics
INLG 2018 solicits papers on any topic related to NLG. The conference will include two special tracks:
(2) Conversational Interfaces, Chatbots and NLG (organised in collaboration with flow.ai).
General topics of interest include, but are not limited to:
- Affect/emotion generation
- Applications for people with disabilities
- Cognitive modelling of language production
- Content and text planning
- Corpora for NLG
- Deep learning models for NLG
- Evaluation of NLG systems
- Grounded language generation
- Lexicalisation
- Multimedia and multimodality in generation
- Storytelling and narrative generation
- NLG and accessibility
- NLG in dialogue
- NLG for embodied agents and robots
- NLG for real-world applications
- Paraphrasing and Summarisation
- Personalisation and variation in text
- Referring expression generation
- Resources for NLG
- Surface realisation
- Systems architecture
A separate call for workshops and generation challenges will be released soon.
Submissions & Format
Submissions should follow the new ACL Author Guidelines and policies for submission, review and citation, and be anonymised for double-blind reviewing. ACL 2018 offers both LaTeX style files and Microsoft Word templates. Papers should be submitted electronically through the START conference management system (to be opened in due course).
Three kinds of papers can be submitted:
- Long papers are most appropriate for presenting substantial research results and must not exceed eight (8) pages of content, with up to two additional pages for references.
- Short papers are more appropriate for presenting an ongoing research effort and must not exceed four (4) pages, with up to one extra page for references.
- Demo papers should be no more than two (2) pages in length, including references, and should describe implemented systems which are of relevance to the NLG community. Authors of demo papers should be willing to present a demo of their system during INLG 2018.
All accepted papers will be published in the INLG 2018 proceedings and included in the ACL Anthology. A paper accepted for presentation at INLG 2018 must not have been presented at any other meeting with publicly available proceedings. Dual submission to other conferences is permitted, provided that authors clearly indicate this in the 'Acknowledgements' section of the paper when submitted. If the paper is accepted at both venues, the authors will need to choose which venue to present at, since they cannot present the same paper twice.
Program chairs
- Emiel Krahmer, Tilburg University, The Netherlands
- Martijn Goudbeek, Tilburg University, The Netherlands
- Albert Gatt, University of Malta, Malta
Workshop & Challenges chairs
- Sina Zarrieß, Bielefeld University, Germany
- Mariët Theune, University of Twente, The Netherlands
(2018-11-08) CfP Workshop on Prosody and Meaning: Information Structure and Beyond, Aix-en-Provence, France
Workshop on Prosody and Meaning: Information Structure and Beyond
Aix-Marseille Université (AMU), Aix-en-Provence, France, 8 November 2018
Call for Papers
We invite submissions for the Workshop Prosody and Meaning: Information Structure and Beyond, to be held at the Laboratoire Parole et Langage, Aix-Marseille Université (AMU), Aix-en-Provence, France, 8 November 2018.
Signaling the information structure of utterances has been shown to be one of the main dimensions of prosodic meaning in many languages, and remains a driving force behind the research on the typological variety of prosodic systems. Other aspects of prosodic meaning that have been investigated are the role of prosody in the generation of implicatures, in speech-act dynamics, in dialogue management, or in the marking of various kinds of questions, owing much to collaborations between phonologists and semanticists/pragmaticists. Other recent advances in the field are supported by the development of corpus resources and of new experimental methods for the investigation of the empirical validity of specific theoretical claims.
This workshop aims to bring together theoretical linguists and psycholinguists working on the prosody/meaning interface in different languages, as well as computational linguists developing tools for prosody-meaning corpus annotation, exploration and processing.
Invited Speakers
Michael Wagner, McGill University
Pilar Prieto, ICREA-Universitat Pompeu Fabra
Topics
Topics include, but are not limited to:
- prosodic reflexes of information structure in different languages and their relationship with other grammatical reflexes of information structure (morphological or syntactical),
- the relationship between information structure, ellipsis or clause fragments and prosody,
- the interplay between information structure and other aspects of prosodic meaning such as speech acts, attitude signaling, or turn-taking management,
- more generally, the role of prosody in the management and interpretation of discourse and dialogue.
Submissions
We invite the submission of abstracts for oral or poster presentations. Abstracts should be anonymous, in English, and should not exceed one page (2.5 cm margins, 12pt font size), with an extra page for examples, figures and references.
Important dates
Abstract deadline: 27 May 2018
Notification of acceptance: 15 July 2018
Workshop: 8 November 2018
Organisers
Cristel Portes, Laboratoire Parole et Langage (LPL), Université d’Aix-Marseille (AMU),
Arndt Riester and Uwe Reyle, Institut für Maschinelle Sprachverarbeitung (IMS), Universität Stuttgart.
Scientific committee
Stefan Baumann (University of Cologne), Roxane Bertrand (CNRS, Aix-Marseille University), Bettina Braun (University of Constance), Daniel Büring (University of Wien), Sasha Calhoun (University of Wellington), Elisabeth Delais-Roussarie (CNRS, Université de Nantes), Kordula De Kuthy (University of Tübingen), Mariapaola D’Imperio (Aix-Marseille University), James German (Aix-Marseille University), Daniel Hole (University of Stuttgart), Frank Kügler (University of Cologne), Amandine Michelas (CNRS, Aix-Marseille University), Caterina Petrone (CNRS, Aix-Marseille University), Giuseppina Turco (CNRS, Université Paris Diderot), Pauline Welby (CNRS, Aix-Marseille University), Margaret Zellers (University of Kiel)
(2018-11-26) The 11th International Symposium on Chinese Spoken Language Processing (ISCSLP 2018), Taipei, Taiwan
The International Symposium on Chinese Spoken Language Processing (ISCSLP) is a biennial conference for scientists, researchers, and practitioners to report and discuss the latest progress in all theoretical and technological aspects of spoken language processing. Since 1998, it has been successfully held in Singapore (1998), Beijing (2000), Taipei (2002), Hong Kong (2004), Singapore (2006), Kunming (2008), Tainan (2010), Hong Kong (2012), Singapore (2014), and Tianjin (2016). ISCSLP is the flagship conference of SIG-CSLP, the ISCA Special Interest Group on Chinese Spoken Language Processing.
The 11th International Symposium on Chinese Spoken Language Processing (ISCSLP 2018) will be held on November 26-29, 2018 in Taipei.
While ISCSLP focuses primarily on Chinese languages, work on other languages that may be applied to Chinese speech and language processing is also encouraged. The working language of ISCSLP is English.
Important dates
Feb 22, 2018 Submission of special session proposals
Apr 30, 2018 Submission of tutorial proposals
Jun 11, 2018 Submission of regular and special session papers
(2018-11-29) CfP Workshop on the Processing of Prosody across Languages and Varieties (ProsLang),Victoria University of Wellington, New Zealand (updated)
Workshop on the Processing of Prosody across Languages and Varieties (ProsLang)
Victoria University of Wellington (VUW), New Zealand, 29-30 November 2018
Call for Papers
We invite submissions for the Workshop on the Processing of Prosody across Languages and Varieties (ProsLang), to be held at the School of Linguistics and Applied Language Studies, Victoria University of Wellington (VUW), New Zealand, 29-30 November 2018. The Workshop is coordinated with the 17th Speech Science & Technology Conference, University of New South Wales, Sydney, 4-7 December 2018.
Aim
As an integral part of spoken language, prosody has been shown to play an important role in many speech production and perception processes. However, our knowledge of the role of prosody in speech processing draws on a relatively narrow range of (mostly closely related) languages. There is an urgent need for more psycholinguistic research looking at commonalities and differences in the use of prosodic cues in speech processing across different languages, and also different varieties of major languages. This workshop aims to bring together researchers working in this area. We are particularly interested in research on: (i) the role of prosody in semantic interpretation, including information structure; and (ii) prosody as an organisational structure for speech production and perception, including multimodal perspectives.
Invited Speakers
Anne Cutler, MARCS, Western Sydney University
Bettina Braun, Universität Konstanz
Jennifer Cole, Northwestern University
Janet Fletcher, University of Melbourne
Nicole Gotzner, Leibniz-ZAS Berlin
Topics
Topics include, but are not limited to, cross-linguistic and cross-varietal commonalities and differences in:
- the role of prosody in signalling information structure, particularly in the activation and resolution of contrast and contrastive alternatives
- the integration of prosody and morphosyntactic cues in speech comprehension, e.g. as cues to information structure
- the role of prosody in the management and interpretation of discourse
- prosodic structure as an organisational frame in speech production or perception
- links between prosodic structure and multimodal speech cues such as gesture
Submissions
We invite submissions of one-page abstracts following the guidelines on the Workshop website: https://proslang.wordpress.com/about/
*** Abstract deadline extended: 23 April 2018 ***
Notification of acceptance: 30 April 2018
Workshop: 29-30 November 2018
Organisers
Sasha Calhoun, Paul Warren, Olcay Türk, Mengzhu Yan, VUW; Janet Fletcher, University of Melbourne
Please direct any enquiries about the Workshop to: proslangworkshop@gmail.com.
(2018-??-??) FIRST JOINT CALL for Workshop Proposals: ACL/COLING/EMNLP/NAACL 2018
FIRST JOINT CALL for Workshop Proposals: ACL/COLING/EMNLP/NAACL 2018
Proposal Submission Deadline: October 22, 2017
Notification of Acceptance: November 17, 2017
The Association for Computational Linguistics (ACL), the International Conference on Computational Linguistics (COLING), the Conference on Empirical Methods in Natural Language Processing (EMNLP), and the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT) invite proposals for workshops to be held in conjunction with ACL 2018, COLING 2018, EMNLP 2018, or NAACL HLT 2018. We solicit proposals on any topic of interest to the ACL communities. Workshops will be held at one of the following conference venues:
ACL 2018 (the 56th Annual Meeting of the Association for Computational Linguistics) will be held in Melbourne, Australia, July 15 - July 20, 2018, with workshops to take place on July 19-20: http://acl2018.org/
COLING 2018 (the 27th International Conference on Computational Linguistics) will be held in Santa Fe, New Mexico, USA, August 20 - August 25, 2018, with workshops to be held on August 20-21, 2018: http://coling2018.org/
NAACL HLT 2018 (the 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies) will be held in New Orleans, Louisiana, USA, June 1 - June 6, 2018 with workshops to be held on June 5-6, 2018: http://naacl2018.org/
EMNLP 2018 (the Conference on Empirical Methods in Natural Language Processing 2018) will be held later in 2018 (after the other three conferences). Exact details on dates and venue for EMNLP workshops will be announced later.
SUBMISSION INFORMATION
Proposals should be submitted as PDF documents. Note that submissions should essentially be ready to be turned into a Call for Workshop Papers within one week of notification (see Timelines below).
The proposals should contain:
- A title and brief (2-page max) description of the workshop topic and content.
- The names, affiliations, and email addresses of the organizers, with one-paragraph statements of their research interests, areas of expertise, and experience in organising workshops and related events.
- A list of Programme Committee members, with an indication of which members have already agreed. It is highly desirable for proposals to have at least 75% of the Programme Committee reviewers confirmed at the time of submission. Organizers should do their best to estimate the number of submissions (especially for recurring workshops) in order to: (a) ensure a sufficient number of reviewers so that each paper receives 3 reviews, and (b) ensure that no one is committed to reviewing more than 3 papers (a sketch of this estimate is given after this list). This practice is likely to ensure on-time, more thorough and more thoughtful reviews.
- A list of invited speakers, if applicable, with an indication of which ones have already agreed and which are indicative, and sources of funding for the speakers.
- An estimate of the number of attendees.
- A description of any shared tasks associated with the workshop, and an estimate of the number of participants.
- A description of special requirements and technical needs.
- The preferred venue(s) (ACL/COLING/NAACL/EMNLP), if any, and a description of any constraints (e.g. if the workshop is compatible with only one of these events, logistically, thematically or otherwise).
- If the workshop has been held before, a note specifying where previous workshops were held, how many submissions the workshop received, how many papers were accepted (also specify if they were not regular papers, e.g. shared task system description papers), and how many attendees the workshop attracted.
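As a purely illustrative sketch of the reviewer estimate mentioned in the Programme Committee item above: the constraints of 3 reviews per paper and at most 3 papers per reviewer come from this call, while the example submission count is an assumption that will vary per workshop.

# Illustrative sketch of the reviewer estimate discussed in the call:
# each paper receives 3 reviews and no reviewer handles more than 3 papers.
# The example submission count is an assumption.
import math

REVIEWS_PER_PAPER = 3
MAX_PAPERS_PER_REVIEWER = 3

def min_reviewers(expected_submissions: int) -> int:
    total_reviews = expected_submissions * REVIEWS_PER_PAPER
    return math.ceil(total_reviews / MAX_PAPERS_PER_REVIEWER)

# A workshop expecting 40 submissions needs 40 * 3 = 120 reviews,
# i.e. at least 120 / 3 = 40 confirmed reviewers.
print(min_reviewers(40))  # -> 40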
Note that the only financial support available to workshops is a single free workshop registration for an invited speaker; all other costs must be borne independently by the workshop organizers.
In addition, you will need to specify the following information when you submit via the START System (not in the PDF proposal):
- A very brief advertisement or tagline for the workshop, up to 140 characters, that highlights any key information you wish prospective attendees to know, and which would be suitable to be put onto a web-based survey (see below).
- A URL for the workshop website which will be shown in the web-based survey.
- A list of organizers’ names which will be shown in the web-based survey.
The proposals should be submitted no later than October 22, 2017, 11:59 PM Samoa Standard Time (SST) (UTC/GMT-11). Submission is electronic, using the Softconf START conference management system at
The workshop proposals will be evaluated according to their originality and impact, as well as the quality of the organizing team and Programme Committee. In addition, to estimate the attendance of the different workshops, a new voting mechanism will be implemented, where attendees of ACL-affiliated events from the past 3-5 years will be able to vote on which workshops they would like to attend in 2018. (A representative prototype of the survey is shown here, but is subject to change: https://goo.gl/3cuZON.) The overall diversity of the workshops will also be taken into account to ensure the conference program is varied and balanced. The workshop co-chairs will work together to assign workshops to the four conferences, taking into account the location preferences and technical constraints provided by the workshop proposers.
Organizers of accepted proposals will be responsible for publicizing and running the workshop, including reviewing submissions, producing the camera-ready workshop proceedings, and organizing the meeting days. It is crucial that organizers commit to all deadlines. In particular, failure to produce the camera-ready proceedings on time will lead to the exclusion of the workshop from the unified proceedings and author indexes. Workshop organizers cannot accept submissions for publication that will be (or have been) published elsewhere, although they are free to set their own policies on simultaneous submission and review.
Since the conferences will occur at different times, the timelines for the submission and reviewing of workshop papers, and the preparation of camera-ready copies, will be different for each conference. Suggested timelines for each of the conferences are given below. Workshop organizers should not deviate from this schedule unless absolutely necessary, and with explicit agreement from the relevant Workshop Chairs.
The ACL has a set of policies on workshops. You can find the ACL's general policies on workshops, the financial policy for workshops, and the financial policy for SIG workshops at:
(2019-08-04) International Conference on Phonetic Sciences, Melbourne, Australia
Don't miss your opportunity to be a part of ICPhS 2019!
Call for papers
Authors will be invited to submit papers on original, unpublished research in the phonetic sciences, with a submission deadline of 4 December 2018. Papers related to the Congress themes are especially welcome, but we welcome papers on any of the scientific areas listed for the Congress.
The organisers of the International Congress of Phonetic Sciences invite proposals for special sessions covering emerging topics, challenges, interdisciplinary research, or subjects that could foster useful debate in the phonetic sciences.
The ICPhS themes are 'Endangered Languages, and Major Language Varieties'. Special sessions related to these themes are especially welcome, but we are interested in proposals related to any of the scientific areas covered in the Congress. The submission deadline will be 30 April 2018.
There are opportunities for holding satellite meetings as well as workshops associated with ICPhS 2019. We invite those interested in arranging a satellite event to contact the organising committee now.
The scientific committee has put together a list of scientific areas for the 2019 ICPhS program, based on previous editions and current developments within phonetics. The full list is available on the Congress website.
Located on the south-east coast of Australia, Melbourne has been voted The World's Most Liveable City on a number of occasions.
Melbourne is a thriving and cosmopolitan city with a unique balance of graceful old buildings and stunning new architecture surrounded by parks and gardens.
Call for special session proposals: now open!
Deadline for proposals: 30 April 2018
Deadline for on-line full paper submission: 4 December 2018
Registration opens: late 2018
Author notification deadline: 15 February 2019
Congress dates: 4-10 August 2019
(2019-X-X) Dialog System Technology Challenge 7 (DSTC7)
Dialog System Technology Challenge 7 (DSTC7)
Call for Participation: data distribution has started.
Website: http://workshop.colips.org/dstc7/index.html
Background
The DSTC shared tasks have provided common testbeds for the dialog research community since 2013.
Since its sixth edition, the series has been rebranded as the 'Dialog System Technology Challenge' to cover a wider variety of dialog-related problems.
For this year's challenge, we opened a call for track proposals and selected the following three parallel tracks by peer review:
- Sentence Selection
- Sentence Generation
- Audio Visual Scene-aware Dialog (AVSD)
Participation is welcomed from any research team (academic, corporate, non-profit, government).
Important Dates
- Jun 1, 2018: Training data is released
- Sep 10, 2018: Test data is released
- Sep 24, 2018: Entry submission deadline
- Oct or Nov 2018: Paper submission deadline
- Spring 2019: DSTC7 special session or workshop (venue: TBD)
DSTC7 Organizing Committee
- Koichiro Yoshino, Nara Institute of Science and Technology (NAIST), Japan
- Chiori Hori, Mitsubishi Electric Research Laboratories (MERL), USA
- Julien Perez, Naver Labs Europe, France
- Luis Fernando D'Haro, Institute for Infocomm Research (I2R), Singapore
DSTC7 Track Organizers
Sentence Selection Track:
- Lazaros Polymenakos, IBM Research, USA
- Chulaka Gunasekara, IBM Research, USA
- Walter S. Lasecki, University of Michigan, USA
- Jonathan Kummerfeld, University of Michigan, USA
Sentence Generation Track:
- Michel Galley, Microsoft Research AI&R, USA
- Chris Brockett, Microsoft Research AI&R, USA
- Jianfeng Gao, Microsoft Research AI&R, USA
- Bill Dolan, Microsoft Research AI&R, USA
Audio Visual Scene-aware Dialog (AVSD) Track:
- Chiori Hori, Mitsubishi Electric Research Laboratories (MERL), USA
- Tim K. Marks, Mitsubishi Electric Research Laboratories (MERL), USA
- Devi Parikh, Georgia Tech, USA
- Dhruv Batra, Georgia Tech, USA
DSTC Steering Committee
- Jason Williams, Microsoft Research (MSR), USA
- Rafael E. Banchs, Institute for Infocomm Research (I2R), Singapore
- Seokhwan Kim, Adobe Research, USA
- Matthew Henderson, PolyAI, Singapore
- Verena Rieser, Heriot-Watt University, UK
Contact Information
Join the DSTC mailing list to get the latest updates about DSTC7:
- To join the mailing list: send an email to listserv@lists.research.microsoft.com and put 'subscribe DSTC' in the body of the message (without the quotes); a sketch of this step is given below.
- To post a message: send your message to dstc@lists.research.microsoft.com.
For specific enquiries about DSTC7, please feel free to contact any of the Organizing Committee members directly.
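For completeness, the subscription step above can also be scripted; the sketch below simply sends the 'subscribe DSTC' message described in the call. The SMTP host, port, sender address and credentials are placeholder assumptions, and sending the email manually from any mail client works just as well.

# Minimal sketch: join the DSTC mailing list by emailing the listserv
# with 'subscribe DSTC' in the message body, as described above.
# The SMTP host, port, sender address and credentials are placeholders.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "you@example.org"                        # placeholder sender
msg["To"] = "listserv@lists.research.microsoft.com"    # address given in the call
msg["Subject"] = "subscribe"                           # subject not specified in the call
msg.set_content("subscribe DSTC")                      # body required by the listserv

with smtplib.SMTP("smtp.example.org", 587) as server:  # placeholder SMTP server
    server.starttls()
    server.login("you@example.org", "app-password")    # placeholder credentials
    server.send_message(msg)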