ISCA - International Speech Communication Association



ISCApad #248

Tuesday, February 12, 2019 by Chris Wellekens

3 Events
3-1 ISCA Events
3-1-1(2019-09-11) CfSS SIGDIAL 2019, Stockholm, Sweden

SIGDIAL 2019

11-13 September 2019, Stockholm, Sweden

Second Call for Special Sessions

http://workshops.sigdial.org/conference20

 

Special Session Submission Deadline: January 28, 2019

Special Session Notification: February 18, 2019

The Special Interest Group on Discourse and Dialogue (SIGDIAL) organizers welcome the submission of special session proposals. A SIGDIAL special session is the length of a regular session at the conference, and may be organized as a poster session, a panel session, a poster session with panel discussion, or an oral presentation session. Special sessions may, at the discretion of the SIGDIAL organizers, be held as parallel sessions.

The papers submitted to special sessions are handled by the special session organizers, but for the submitted papers to be in the SIGDIAL proceedings, they have to undergo the same review process as regular papers. The reviewers for the special session papers will be taken from the SIGDIAL program committee itself, taking into account the suggestions of the session organizers, and the program chairs will make acceptance decisions. In other words, special session organizers decide what appears in the session, while the program chairs decide what appears in the proceedings and the rest of the conference program.

We welcome special session proposals on any topic of interest to the discourse and dialogue communities. Topics of interest include, but are not limited to, Explainable AI, Evaluation, Annotation, and End-to-end systems.

Submissions:

Those wishing to organize a special session should prepare a two-page proposal containing: a summary of the topic of the special session; a list of organizers and sponsors; a list of people who may submit and participate in the session; and a requested format (poster/panel/oral session).

These proposals should be sent to conference[at]sigdial.org by the special session proposal deadline. Special session proposals will be reviewed jointly by the general chair and program co-chairs.

Links:

Those wishing to propose a special session may want to look at some of the sessions organized at recent SIGDIAL meetings.

SIGDIAL 2019 Organizing Committee

General Chair:

Satoshi Nakamura, Nara Institute of Science and Technology, Japan

Program Chairs:

Milica Gašić, Cambridge University, UK

Ingrid Zukerman, Monash University, Australia

 

Local Chair:

Gabriel Skantze, KTH, Sweden

 

Sponsorship Chair:

Mikio Nakano, Honda Research Institute Japan, Japan

 

Mentoring Chair:

Alex Papangelis, Uber AI, USA

 

Publication Chair:

Stefan Ultes, Daimler AG, Germany

 

Publicity Chair:

Koichiro Yoshino, Nara Institute of Science and Technology, Japan

 

SIGdial President:

Jason Williams, Apple, USA

 

SIGdial Vice President:

Kallirroi Georgila, University of Southern California, USA

 

SIGdial Secretary:

Vikram Ramanarayanan, Educational Testing Service (ETS) Research, USA

 

SIGdial Treasurer:

Ethan Selfridge, Interactions, USA

 

SIGdial President Emeritus:

Amanda Stent, Bloomberg, USA


3-1-2(2019-09-15) Welcome to INTERSPEECH 2019 (updated)

 

Welcome to INTERSPEECH 2019 in Graz, Austria, Sept 15-19, 2019
http://www.interspeech2019.org

 

Important Dates Coming Up Soon

           

What is new?

For our Show & Tell demonstrations we have revised the submission format: in addition to a two-page description, we will require the upload of a short video of your demonstration, which will help us decide which demonstrations will attract the most interest at the conference. For more details see https://www.interspeech2019.org/calls/show_and_tell/.

INTERSPEECH 2019 is the 20th Annual Conference of the International Speech Communication Association (ISCA), and this anniversary edition will introduce several innovative features. These innovations will further raise the attractiveness of the conference beyond the high level already reached over the past two decades:


Survey presentations have evolved from the perspective talks at INTERSPEECH 2018 in Hyderabad. The presentations will be scheduled at the start of suitable oral presentation sessions and will be allocated a 40-minute time slot for presentation and discussion. Presentations should aim to give an overview of the state of the art for a specific topic covered by one or more of the main technical areas of the conference. The presenters will also be invited to submit survey papers to the ISCA-supported journals Computer Speech and Language and Speech Communication. For further details, see our Call for Survey Presentations further below.

Childcare – the INTERSPEECH Kids: Bring your family along to Graz, Austria! Childcare will be provided free of charge for conference participants. For terms and conditions, see the conference webpage http://www.interspeech2019.org/venue_and_travel/childcare. To secure a place for your kids, we recommend applying for this service at your earliest convenience.

Hackathons will set a stimulating atmosphere for the most creative developers in our community and beyond: through jams and challenges, we will bring together some of our brightest minds, students who are speech science and technology aficionados and who come to Graz as conference participants, as well as students representing the highly interdisciplinary community of several partner universities in Graz and beyond. Watch out for details to appear soon on our website http://www.interspeech2019.org or contact our hackathon chairs Elmar Nöth (FAU Erlangen-Nürnberg) and Johanna Pirker (TU Graz) directly at hackathon@interspeech2019.org.




3-1-3(2019-09-15) INTERSPEECH 2019, Graz, Austria (updated)

INTERSPEECH 2019

 GRAZ – AUSTRIA SEPTEMBER 15th – 19th 2019 WWW.INTERSPEECH2019.ORG

 https://interspeech2019.org

CROSSROADS OF SPEECH AND LANGUAGE 

CALL FOR PAPERS AND PROPOSALS FOR TUTORIALS, SPECIAL SESSIONS/CHALLENGES, AND SHOW & TELL

INTERSPEECH is the world's largest and most comprehensive conference on the science and technology of spoken language processing. INTERSPEECH conferences emphasize interdisciplinary approaches addressing all aspects of speech science and technology, ranging from basic theories to advanced applications. In addition to regular oral and poster sessions, INTERSPEECH 2019 will feature plenary talks by internationally renowned experts, tutorials, special sessions and challenges, show & tell sessions, and exhibits. A number of satellite events will also take place around INTERSPEECH 2019.

 

Original papers are solicited in, but not limited to, the following areas:

1. Speech Perception, Production and Acquisition

2. Phonetics, Phonology, and Prosody

3. Analysis of Paralinguistics in Speech and Language

4. Speaker and Language Identification

5. Analysis of Speech and Audio Signals

6. Speech Coding and Enhancement

7. Speech Synthesis and Spoken Language Generation

8. Speech Recognition – Signal Processing, Acoustic Modeling, Robustness, and Adaptation

9. Speech Recognition – Architecture, Search, and Linguistic Components

10. Speech Recognition – Technologies and Systems for New Applications

11. Dialog Systems and Analysis of Conversation

12. Spoken Language Processing – Translation, Information Retrieval, Summarization, Resources, and Evaluation

 A complete list of the scientific areas and topics including special sessions is available at » www.interspeech2019.org

 

 

 PAPER SUBMISSION

Papers intended for INTERSPEECH 2019 should be up to four pages of text. An optional fifth page could be used for references only. Paper submissions must conform to the format defined in the paper preparation guidelines and as detailed in the author’s kit on the conference webpage. Please be aware that INTERSPEECH 2019 will use new templates and submissions will be accepted only in the new format. Submissions may also be accompanied by additional files such as multimedia files, to be included on the proceedings’ USB drive. Authors must declare that their contributions are original and have not been submitted elsewhere for publication. Papers must be submitted via the online paper submission system. The working language of the conference is English, and papers must be written in English.

 

We look forward to receiving your submissions and to your participation in INTERSPEECH 2019.


 IMPORTANT DATES

 » November 30, 2018 Special session/challenges proposals due

 » February 1, 2019 Tutorial proposals due

 » February 15, 2019 Submission portal opens

 » February 28, 2019 Satellite workshops/events proposals due

 » March 29, 2019 Paper submission deadline

 » April 5, 2019 Final paper submission deadline

 » April 26, 2019 Show & Tell proposals due

 » June 17, 2019 Acceptance/rejection notification

 » June 24, 2019 Registration opens

 » July 1, 2019 Camera-ready paper due

 

 General Chairs:

 Gernot Kubin, TU Graz, Austria

 Zdravko Kačič, University of Maribor, Slovenia

 

Technical Chairs:

 Thomas Hain, U Sheffield, UK

 Björn Schuller, U Augsburg/Imperial College, Germany/UK

 

Organising Committee Members:

  Michiel Bacchiani, Google NY, USA

 Gerhard Backfried, Sail Labs Vienna, Austria

 Jamilla Balint, TU Graz, Austria

 Eugen Brenner, TU Graz, Austria

 Mariapaola D'Imperio, Aix Marseille U, France

 Dina ElZarka, U Graz, Austria

 Tim Fingscheidt, TU Braunschweig, Germany

 Anouschka Foltz, U Graz, Austria

 Anna Fuchs, AVL Graz, Austria

 Panayiotis Georgiou, USC Los Angeles, USA

 Franz Graf, Joanneum Research Graz, Austria

 Markus Gugatschka, MU Graz, Austria

 Martin Hagmüller, TU Graz, Austria

 Petra Hödl, U Graz, Austria

 Robert Höldrich, KU Graz, Austria

 Mario Huemer, JKU Linz, Austria

 Dorothea Kolossa, RU Bochum, Germany

 Christina Leitner, Joanneum Research Graz, Austria

 Stefanie Lindstaedt, KNOW Centre Graz, Austria

 Helen Meng, CU Hong Kong, China

 Florian Metze, CMU Pittsburgh, USA

 Pejman Mowlaee, Widex/TU Graz, Denmark/Austria

 Elmar Noeth, FAU Erlangen-Nürnberg, Germany

 Franz Pernkopf, TU Graz, Austria

Ingrid Pfandl-Buchegger, U Graz, Austria

 Lukas Pfeifenberger, Ognios Salzburg, Austria

 Johanna Pirker, TU Graz, Austria

Christoph Prinz, Sail Labs Vienna, Austria

 Michael Pucher, ÖAW Vienna, Austria

 Philipp Salletmayr, Nuance Vienna, Austria

 Barbara Schuppler, TU Graz, Austria

 Dagmar Schuller, audEERING, Germany

Jessica Siddins, U Graz, Austria 

Wolfgang Wokurek, U Stuttgart, Germany

 Kai Yu, Shanghai Jiao Tong University, China

 

 





3-1-4(2019-09-15) INTERSPEECH 2019: Call for Show and Tell Demonstrations

Call for Show & Tell Demonstrations

INTERSPEECH is the world’s largest and most comprehensive conference on the science and technology of spoken language processing.

In addition to regular and special sessions, INTERSPEECH 2019 will feature sessions for Show & Tell demonstrations. Submission of Show & Tell proposals is encouraged for INTERSPEECH 2019, where participants are given the opportunity to demonstrate their most recent developments and interact with the conference attendees in an informal way, such as through a demo, a mock-up, or any adapted format of their own choice. The contributions must highlight the innovative side of the concept and may relate to a regular paper. Demonstrations should be based on innovations and fundamental research in areas of speech communication, speech production, perception, acquisition, or speech and language technology and systems.

 

- Important Dates

* Submission deadline: Friday, April 26, 2019

* Acceptance/rejection notification: Monday, June 10, 2019

* Final paper and final video due: Monday, June 24, 2019

 

- Show & Tell Submission and Preparation Guidelines

Each initial Show & Tell submission must contain both a paper of up to 2 pages detailing the demonstration and a video illustrating what is going to be shown. The paper (including references) has to follow the format defined in the paper preparation guidelines as detailed in the 'INTERSPEECH 2019 Author's Kit'. Please note that the focus of the paper shall be on describing what the visitors will see and experience. The video can simply be recorded with a mobile phone or a similar device. Submissions will be evaluated by the organizing committee for relevance, originality, clarity, and significance of the proposed demonstration. At least one author of each accepted submission must register for and attend the conference, and demonstrate the system during the Show & Tell sessions.

Each accepted Show & Tell paper will be allocated two pages in the conference proceedings. Furthermore, a final video, which is to be submitted by the final paper deadline, will be made publicly available. Show & Tell demonstrations will be presented in their dedicated time slot in the conference program. Each presentation space includes

* one poster board

* one table

* wireless internet connection

* a power outlet

 

Please submit your proposal to the Show & Tell Chairs via show_and_tell@interspeech2019.org no later than April 26, 2019.

QUESTIONS? PLEASE CONTACT our chairs Pejman Mowlaee, Mario Huemer, and Philipp Salletmayr at show_and_tell@interspeech2019.org.


3-1-5(2019-09-15) INTERSPEECH 2019: Call for Survey Presentations

NEW! INTERSPEECH 2019: Call for Survey Presentations NEW!

Important Dates

Proposal submission deadline: Friday May 3, 2019
Notification of selection: Monday, June 17, 2019

Survey Presentations

Interspeech is the annual flagship conference of the International Speech Communication Association (ISCA), which brings together a truly interdisciplinary group of experts from academia and industry to present and discuss the latest research, technology advances and scientific discoveries in a five-day event. As such, Interspeech constantly innovates and adapts. Beyond plenary talks and oral and poster presentations, recent years have seen new ideas on how to engage with experts and industry.

The 20th edition of the Interspeech conference, to take place in Graz, Austria, will introduce a range of new presentation formats. Given the complexity of speech communication science and technology, the need for detailed technical reviews of sub-areas of research has become more critical than ever.

We invite proposals for innovative and engaging Research Survey Presentations. The talks are intended to be scheduled at the start of suitable oral presentation sessions and will be allocated a 40-minute time slot for presentation and discussion. Presentations should aim to give an overview of the state of the art for a specific topic covered by one or more of the main technical areas of Interspeech 2019, namely

  1. Speech Perception, Production and Acquisition

  2. Phonetics, Phonology, and Prosody

  3. Analysis of Paralinguistics in Speech and Language

  4. Speaker and Language Identification

  5. Analysis of Speech and Audio Signals

  6. Speech Coding and Enhancement

  7. Speech Synthesis and Spoken Language Generation

  8. Speech Recognition — Signal Processing, Acoustic Modeling, Robustness, Adaptation

  9. Speech Recognition — Architecture, Search, and Linguistic Components

  10. Speech Recognition — Technologies and Systems for New Applications

  11. Spoken Dialog Systems and Analysis of Conversation

  12. Spoken Language Processing — Translation, Information Retrieval, Summarization, Resources and Evaluation

Proposals for Survey Presentations are required to include

  • The presenter’s name, title, and short bio (100 words), and (if applicable) a list of co-contributors.

  • Title, an outline and a description of the proposed talk (max 500 words). The description should include statements on the relevance of the talk, and the potential target audience.

  • A current CV of the presenter including a list of publications

Proposals will be evaluated by the technical programme and organising committees for relevance and significance, taking into account balance across areas and the available presentation slots (maximum 10 presentations). The presenters of the Interspeech 2019 survey talks will be invited to submit survey papers to the ISCA-supported journals Computer Speech and Language and Speech Communication, with the aim of inclusion in a Special Issue on the State of the Art in Speech Science and Technology.

Survey presentation proposers are invited to submit a proposal via email to the Technical Program Chairs: tpc-chairs@interspeech2019.org no later than Friday May 3, 2019. Please do not hesitate to contact the technical chairs for any questions that may arise prior to proposal submission.  Notification of selection is scheduled for June 17, 2019.



Thomas Hain and Björn Schuller

INTERSPEECH 2019 Technical Program Chairs


3-1-6(2019-09-15) INTERSPEECH 2019: Call for Tutorials , Graz, Austria (updated)

Call for Tutorials at INTERSPEECH 2019

INTERSPEECH conferences are attended by researchers with a long-term track record in speech sciences and technology, as well as by early-stage researchers or researchers interested in a new domain within the INTERSPEECH areas. An important part of the conference is the tutorials held on the first day of the conference, September 15, 2019. Presented by speakers with long and deep expertise in speech, they will provide their audience with a rich learning experience and an exposure to longstanding research problems, contemporary topics of research, as well as emerging areas.

We encourage proposals addressing fundamental or advanced topics in an introductory style, as well as proposals targeting experienced researchers who want to dig deeper into a new topic.

Tutorials, each of three-hour duration, shall introduce an emerging area of speech-related research, or present an overview of an established area of research, rather than focus on the presenter’s individual research.

Date and Venue of the Tutorials

September 15, 2019; two 3h sessions, in the morning and in the afternoon, in the main conference location (Messecongress Graz).

Proposals Should Include (in this Order)

Tutorial title

Presenter(s) (name and affiliation)

Contact information (email, telephone)

3-4 sentence abstract summarizing the proposed tutorial that could be used as an advertisement

Description of the proposal (1-2 pages description plus a few relevant references and any webpages/material useful for reviewing the proposal)

Explanation of relevance of the proposed tutorial (0.5 – 1 page)

Tutorial logistics, including

  • Duration (1 session = 3 hours = 2 x 90 minutes plus refreshment break)

  • Description of presentation format (e.g., one or more presenters etc.)

  • Equipment required for the tutorial

  • Preference for type of accompanying material (handouts, storage devices with media, etc.)

 

Presenter information

  • Biography of presenter(s)

  • Key publications of presenter(s) on the tutorial topic

  • List of previous tutorial experience

 

Audience information

  • Target audience (e.g., new researchers to the field, research students, specialists of adjacent fields, etc.)

  • Other considerations/ comments

  • Bibliography (from description)

Submission Procedure

Proposals for INTERSPEECH 2019 tutorials may be no more than 4 pages long and must conform to the format stated above; please use clear headings to indicate each point. Proposals should be submitted by email to tutorials@interspeech2019.org by February 25, 2019 (extended from February 1), and notification of accepted proposals will be given by March 5, 2019.

By submitting a proposal, the presenter(s) understand the ISCA policy of strongly encouraging video-recording of the tutorials for educational purposes if the proposal is accepted. Access to recording materials will be given via ISCA Video Archives.

 

To access the online call for tutorials, visit http://interspeech2019.org/calls/tutorials.

Questions? Please Contact: tutorials@interspeech2019.org

The tutorial chairs:

Barbara Schuppler, Graz University of Technology

Florian Metze, Carnegie Mellon University, Pittsburgh

Dorothea Kolossa, Ruhr University Bochum


3-1-7(2019-09-15) INTERSPEECH 2019: Satellite Workshops (updated)

INTERSPEECH 2019: Satellite Workshops (updated)

 


Several excellent initiatives organize satellite workshops around INTERSPEECH 2019 and you can find a list of those approved by ISCA at https://www.interspeech2019.org/program/satellite_events/. Please consider contributing both to the main conference and to these important satellite events.

 

 

 




3-1-8(2019-09-15) INTERSPEECH 2019: Special Sessions & Challenges (updated)

INTERSPEECH 2019: Call for Special Sessions & Challenges

 

We are happy to announce the approval of an impressive list of 15 special sessions and challenges, to be found at https://www.interspeech2019.org/program/special_sessions_and_challenges/. Have a look soon, as several of the challenges require substantial work before preparing the paper submission, which will undergo the same review procedure as regular paper submissions.


3-1-9(2020-09-14) Interspeech 2020 Shanghai, China
 

 

Interspeech 2020
Shanghai, China, September 14-18, 2020
Chair: Helen Meng
21st INTERSPEECH event


3-1-10(2021-08-30) Interspeech 2021, Brno, Czech Republic

INTERSPEECH 2021
Brno, Czech Republic, August 30 - September 3, 2021
Chairs: Hynek Hermansky and Honza Cernocky
22nd INTERSPEECH event


3-2 ISCA Supported Events
3-2-1(2019-05-15) CfP 10th Christian Benoit award

 Tenth Christian Benoît Award conferred by the International Speech Communication Association  and the Association Francophone de la Communication Parlée
 
Deadline May 15, 2019

 
The Christian Benoît Award was conferred periodically by the Association Christian Benoît until 2017. In 2018, oversight of the Award was assumed by ISCA, and it is now sponsored jointly by ISCA and AFCP. It is awarded through a competitive nomination and review process to promising young scientists in the domain of SPEECH COMMUNICATION to further their career in the field. The focus of the award's research topic may be on basic science or on applied research projects.
 
The Award provides the elected scientist with financial support for the development of a personal short-term research project that:  (1) illustrates concretely the achievements of her/his research work; (2) could help in promoting his/her work in the scientific community, institutions and grant agencies in their geographical region/country; (3) gives an overall view of the state of the art in the particular research domain.  The proposed research project can take on the form of a demonstration, a technical product/system, or of a pedagogical multi-media product (movie, website, interactive software…).
 
The Award is valued at 7,500 Euros(*)
 
The commitments of the elected scientist who receives the award are:
-- to attend the Interspeech-2019 Conference in Graz, Austria;
-- to deliver the final product of the project within 2 years;
-- to present her/his results at a future ISCA-endorsed event specific to the research domain of the applicant.
 
In the application, the candidate should provide:
-- a statement of research interests (2 pages max);
-- a detailed curriculum vitae including a list of the most relevant publications for the project;
-- a description of the proposed short-term research project (5 to 15 pages max). The description should include a presentation of the scientific and/or pedagogical objectives and the methodological aspects, a link to the former research work of the applicant, and a detailed description of the provisional budget.
 
Applications will be evaluated by an international committee including experts in the field of Speech Communication and representatives of the institutions supporting the award.
 
Applications should be sent to gerard.bailly@gipsa-lab.fr before Wednesday, May 15, 2019. Electronic submissions are mandatory.
 
The successful candidate will be notified by June 15, 2019. The Award will be conferred at the Interspeech-2019 Conference in Graz, Austria (www.interspeech2019.org)
 
-------------------------------------------------------------------------------------------

* 3,500 Euros will be given immediately, with the remaining 4,000 Euros available on reception of the multimedia project by the chair of the ISCA Award committee. Travel and registration costs necessary to attend the Interspeech 2019 Conference will have to be paid from this grant.
** For details about the Association Christian Benoît and the past awardees of the Christian Benoît Award see http://www.gipsa-lab.fr/acb/

3-2-2(2019-09-11) CfP SIGDIAL 2019 CONFERENCE, Stockholm, Sweden

FIRST CALL FOR PAPERS

SIGDIAL 2019 CONFERENCE

September 11-13, 2019

http://www.sigdial.org/workshops/conference20/



The 20th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL 2019) will be held on September 11-13, 2019 at the KTH Royal Institute of Technology in Stockholm, Sweden.

SIGDIAL will be temporally co-located with Interspeech 2019, which will be held on September 15-19 in Graz, Austria (https://www.interspeech2019.org)

The SIGDIAL venue provides a regular forum for the presentation of cutting edge research in discourse and dialogue to both academic and industry researchers. Continuing a series of nineteen successful previous meetings, this conference spans the research interest areas of discourse and dialogue. The conference is sponsored by the SIGdial organization, which serves as the Special Interest Group in discourse and dialogue for both ACL and ISCA.

Keynote Speakers:  Dan Bohus, Mirella Lapata, Helen Meng

TOPICS OF INTEREST

 We welcome formal, corpus-based, implementation, experimental, or analytical work on discourse and dialogue including, but not restricted to, the following themes:

  • Discourse Processing
    Rhetorical and coherence relations, discourse parsing and discourse connectives. Reference resolution. Event representation and causality in narrative. Argument mining. Quality and style in text. Cross-lingual discourse analysis. Discourse issues in applications such as machine translation, text summarization, essay grading, question answering and information retrieval.

  • Dialogue Systems
    Open domain, task oriented dialogue and chat systems. Knowledge graphs and dialogue. Dialogue state tracking and policy learning. Social and emotional intelligence. Dialogue issues in virtual reality and human-robot interaction. Entrainment, alignment and priming. Generation for dialogue. Style, voice, and personality. Spoken, multi-modal, embedded, situated, and text/web based dialogue systems, their components, evaluation and applications.

  • Corpora, Tools and Methodology
    Corpus-based and experimental work on discourse and dialogue, including supporting topics such as annotation tools and schemes, crowdsourcing, evaluation methodology and corpora.

  • Pragmatic or Semantic Modeling
    Pragmatics or semantics of discourse and dialogue (i.e., beyond a single sentence).

  • Applications of Dialogue and Discourse Processing Technology

SPECIAL SESSIONS

SIGDIAL 2019 will include two special sessions (TBA).

Please see the individual special session pages (TBA) for additional information and submission details. In order for papers submitted to special sessions to appear in the SIGDIAL conference proceedings, they must undergo the regular SIGDIAL review process.

SUBMISSIONS

The program committee welcomes the submission of long papers, short papers and demo descriptions. Papers submitted as long papers may be accepted as long papers for oral presentation or long papers for poster presentation. Accepted short papers will be presented as posters.

  • Long papers must be no longer than eight pages, including title, text, figures and tables. An unlimited number of pages is allowed for references. Two additional pages are allowed for appendices containing sample discourses/dialogues and algorithms, and an extra page is allowed in the final version to address reviewers’ comments.

  • Short papers should be no longer than four pages including title, text, figures and tables. An unlimited number of pages is allowed for references. One additional page is allowed for sample discourses/dialogues and algorithms, and an extra page is allowed in the final version to address reviewers’ comments.

  • Demo descriptions should be no longer than four pages including title, text, examples, figures, tables and references. A separate one-page document should be provided to the program co-chairs for demo descriptions, specifying furniture and equipment needed for the demo.

Authors are encouraged to also submit additional accompanying materials, such as corpora (or corpus examples), demo code, videos and sound files.

Multiple Submissions
Papers that have been or will be submitted to other meetings or publications must provide this information (see submission link). SIGDIAL 2019 cannot accept work for publication or presentation that will be (or has been) published elsewhere. Any questions regarding submissions can be sent to program-chairs[at]sigdial.org.

Blind Review
Building on the previous year's move to anonymous long and short paper submissions, SIGDIAL 2019 will follow the ACL policies for preserving the integrity of double-blind review (see author guidelines). Unlike long and short papers, demo descriptions will not be anonymous. Demo descriptions should include the authors' names and affiliations, and self-references are allowed.

Submission Format
All long, short, and demonstration submissions must follow the two-column ACL format. Authors are expected to use the LaTeX or Microsoft Word style template from the ACL conference. Submissions must conform to the official ACL style guidelines, which are contained in these templates. Submissions must be electronic, in PDF format.

Submission Link and Deadlines
Authors have to fill in the submission form in the START system and upload a pdf of their paper before the May 19 deadline. Updates of a final pdf file will be permitted until May 19, 23:59 GMT.

https://www.softconf.com/j/sigdial2019/

IMPORTANT NOTE: ADOPTION OF ACL AUTHOR GUIDELINES 

As noted above, SIGDIAL 2019 is adopting the ACL guidelines for submission and citation for long and short papers. Long and short papers that do not conform to the following guidelines will be rejected without review. 

Preserving Double Blind Review
The following rules and guidelines are meant to protect the integrity of the double-blind reviewing process and ensure that submissions are reviewed fairly. The rules make reference to the anonymity period, which runs from 1 month before the submission deadline up to the date when your paper is either accepted, rejected or withdrawn.

  • You may not make a non-anonymized version of your paper available online to the general community (for example, via a preprint server) during the anonymity period. By a version of a paper we understand another paper having essentially the same scientific content but possibly differing in minor details (including title and structure) or in length (e.g., an abstract is a version of the paper that it summarizes).

  • If you have posted a non-anonymized version of your paper online before the start of the anonymity period, you may submit an anonymized version to the conference. The submitted version must not refer to the non-anonymized version, and you must inform the program chair(s) that a non-anonymized version exists. You may not update the non-anonymized version during the anonymity period, and we ask that you do not advertise it on social media or take other actions that would further compromise double-blind reviewing during the anonymity period.

  • Note that, while you are not prohibited from making a non-anonymous version available online before the start of the anonymity period, this does make double-blind reviewing more difficult to maintain, and we therefore encourage you to wait until the end of the anonymity period if possible.

Citations and Comparison
If you are aware of previous research that appears sound and is relevant to your work, you should cite it even if it has not been peer-reviewed, and certainly if it influenced your own work. However, refereed publications take priority over unpublished work reported in preprints. Specifically:

  • You are expected to cite all refereed publications relevant to your submission, but you may be excused for not knowing about unpublished work (especially work that has been recently posted or is not widely cited).

  • In cases where a preprint has been superseded by a refereed publication, the refereed publication should be cited in addition to or instead of the preprint version.

Papers (whether refereed or not) appearing less than three months before the submission deadline are considered contemporaneous to your submission, and you are therefore not obliged to make detailed comparisons that require additional experimentation or in-depth analysis.

MENTORING

Acceptable submissions that require language (English) or organizational assistance will be flagged for mentoring, and accepted with a recommendation to revise with the help of a mentor. An experienced mentor who has previously published in the SIGDIAL venue will then help the authors of these flagged papers prepare their submissions for publication.

BEST PAPER AWARDS

In order to recognize significant advancements in dialogue/discourse science and technology, SIGDIAL 2019 will include best paper awards. All papers at the conference are eligible for the best paper awards. A selection committee consisting of prominent researchers in the fields of interest will select the recipients of the awards.

IMPORTANT DATES

  • Long, Short & Demonstration PDF Submission: 19 May 2019 (23:59, GMT)

  • Long, Short & Demonstration Paper Notification: 28 June 2019

  • Final Paper Submission: 21 July 2019 (23:59, GMT)

  • Conference: 11-13 September, 2019

SIGDIAL 2019 ORGANIZING COMMITTEE

General Chair:

Satoshi Nakamura, Nara Institute of Science and Technology, Japan

 

Program Chairs:

Ingrid Zukerman, Monash University, Australia

Milica Gasic, Saarland University, Germany

 

Local Chair:

Gabriel Skantze, KTH Royal Institute of Technology, Sweden

 

Sponsorship Chair:

Mikio Nakano, Honda Research Institute Japan, Japan

 

Mentoring Chair:

Alexandros Papangelis, Uber AI, USA

 

Publication Chair

Stefan Ultes, Daimler AG, Germany

 

Publicity Chair:

Koichiro Yoshino, Nara Institute of Science and Technology, Japan

 

SIGdial President:

Jason Williams, Apple, USA

 

SIGdial Vice President:

Kallirroi Georgila, University of Southern California, USA

 

SIGdial Secretary:

Vikram Ramanarayanan, Educational Testing Service (ETS) Research, USA

 

SIGdial Treasurer:

Ethan Selfridge, Interactions, USA

 

SIGdial President Emeritus:

Amanda Stent, Bloomberg, USA


3-2-3(2019-09-11) CfSS SIGDIAL 2019, Stockholm, Sweden

SIGDIAL 2019

11-13 September 2019, Stockholm, Sweden

 

Call for Special Sessions

http://workshops.sigdial.org/conference20

 

Special Session Submission Deadline: January 28, 2019
Special Session Notification: February 18, 2019

 

The Special Interest Group on Discourse and Dialogue (SIGDIAL) organizers welcome the submission of special session proposals.

A SIGDIAL special session is the length of a regular session at the conference, and may be organized as a poster session, a panel session, a poster session with panel discussion, or an oral presentation session.

Special sessions may, at the discretion of the SIGDIAL organizers, be held as parallel sessions.

 

The papers submitted to special sessions are handled by the special session organizers, but for the submitted papers to be in the SIGDIAL proceedings, they have to undergo the same review process as regular papers. The reviewers for the special session papers will be taken from the SIGDIAL program committee itself, taking into account the suggestions of the session organizers, and the program chairs will make acceptance decisions. In other words, special session organizers decide what appears in the session, while the program chairs decide what  appears in the proceedings and the rest of the conference program.

 

We welcome special session proposals on any topic of interest to the discourse and dialogue communities. Topics of interest include, but are not limited to, Explainable AI, Evaluation, Annotation, and End-to-end systems.

 

Submissions:

Those wishing to organize a special session should prepare a two-page proposal containing: a summary of the topic of the special session; a list of organizers and sponsors; a list of people who may submit and participate in the session; and a requested format (poster/panel/oral  session).

 

These proposals should be sent to conference[at]sigdial.org by the special session proposal deadline. Special session proposals will be reviewed jointly by the general chair and program co-chairs.

 

Links:

Those wishing to propose a special session may want to look at some of the sessions organized at recent SIGDIAL meetings.

http://www.sigdial.org/workshops/conference19/sessions.htm

https://robodial.github.io/

http://articulab.hcii.cs.cmu.edu/sigdial2016/

 

SIGDIAL 2019 Organizing Committee

 

General Chair:

Satoshi Nakamura, Nara Institute of Science and Technology, Japan

 

Program Chairs:

Milica Gašić, Cambridge University, UK

Ingrid Zukerman, Monash University, Australia

 

Local Chair:

Gabriel Skantze, KTH, Sweden

 

Sponsorship Chair:

Mikio Nakano, Honda Research Institute Japan, Japan

 

Mentoring Chair:

Alex Papangelis, Uber AI, USA

 

Publication Chair:

Stefan Ultes, Daimler AG, Germany

 

Publicity Chair:

Koichiro Yoshino, Nara Institute of Science and Technology, Japan

 

SIGdial President:

Jason Williams, Apple, USA

 

SIGdial Vice President:

Kallirroi Georgila, University of Southern California, USA

 

SIGdial Secretary:

Vikram Ramanarayanan, Educational Testing Service (ETS) Research, USA

 

SIGdial Treasurer:

Ethan Selfridge, Interactions, USA

 

SIGdial President Emeritus:

Amanda Stent, Bloomberg, USA


3-2-4(2019-09-13) HSCR19 - The 3rd International Workshop on the HISTORY OF SPEECH COMMUNICATION RESEARCH, Vienna, Austria

HSCR19 - The 3rd International Workshop on the HISTORY OF SPEECH COMMUNICATION RESEARCH

13-14 September, 2019 in Vienna, Austria

<https://hscr19.kfs.oeaw.ac.at>

 

CALL FOR PAPERS

The aim of this workshop is to bring together researchers that are interested in historical aspects of all areas of speech communication research (SCR) with a focus on the interdisciplinary nature of the different fields of research.

A special interest of the 2019 workshop is the relation between science and technology as exemplified through the history of SCR, including methods from the 20th century that are no longer state-of-the-art but are of historical relevance. Interesting questions in this respect are: How can knowledge transfers between science and technology be exemplified by the history of SCR? What is the relation between SCR and artistic practices? How was speech communication research influenced by the medical sciences?

Like the past HSCR workshops, held in 2015 in Dresden and in 2017 in Helsinki, this workshop is a satellite event of the INTERSPEECH conference, which will be held in Graz, Austria <https://www.interspeech2019.org/>, a meeting whose publicity features the Austro-Hungarian speech communication pioneer Wolfgang von Kempelen.

The invited speaker is Peter Donhauser from the Institute for Media Archeology in Vienna.

 

Important Dates:

Full paper submission: May 24, 2019

Notification of acceptance: July 1, 2019

Camera-ready paper submission: July 19, 2019

Workshop: September 13-14, 2019

 

The proceedings will be published in the book series 'Studientexte zur Sprachkommunikation' at TUDpress (Technical University Dresden).

Workshop organisation:

Michael Pucher, Acoustics Research Institute, Vienna, Austria

Juergen Trouvain, Saarland University, Saarbruecken, Germany

Carina Lozo, Acoustics Research Institute, Vienna, Austria


3-2-5(2019-09-20) SSW10 - The 10th ISCA Speech Synthesis Workshop, Vienna, Austria
 
Call for Papers
 
SSW10 - The 10th ISCA Speech Synthesis Workshop
20-22 September 2019
Vienna, Austria
 
 
The 10th ISCA Speech Synthesis Workshop will be held in Vienna, Austria, 20-22 September 2019. The workshop is a satellite event of the INTERSPEECH 2019 conference in Graz, Austria.
 
Confirmed invited speakers
Aäron van den Oord (Google DeepMind, UK)
Claire Gardent (CNRS, France)
 
Workshop topics
Papers in all areas of speech synthesis technology are encouraged to be submitted, including but not limited to:
 
Grapheme-to-phoneme conversion for synthesis
Text processing for speech synthesis (text normalization, syntactic and semantic analysis)
Segmental-level and/or concatenative synthesis
Signal processing/statistical model for synthesis
Speech synthesis paradigms and methods; articulatory synthesis, parametric synthesis etc.
Prosody modeling and generation
Expression, emotion and personality generation
Voice conversion and modification, morphing
Concept-to-speech conversion speech synthesis in dialog systems
Avatars and talking faces
Cross-lingual and multilingual aspects for synthesis
Applications of synthesis technologies to communication disorders
TTS for embedded devices and computational issues
Tools and data for speech synthesis
Quality assessment/evaluation metrics in synthesis
Singing synthesis
Synthesis of non-human vocalisations
End-to-end text-to-speech synthesis
Direct speech waveform modelling and generation
Speech synthesis using non-ideal data ('found', user-contributed, etc.)
Natural language generation for speech synthesis
Special topic: Synthesis of non-standard language varieties (sociolects, dialects, second language varieties)
 
Call for Demos
We are planning to have a demo session to showcase new developments in speech synthesis. If you have a demonstration of your work that does not really fit in a regular oral or poster presentation, please let us know.
 
The workshop program will consist of a single track with invited talks, oral and poster presentations. Prospective authors are invited to submit full-length, 4-6 page papers, including figures and references. Papers can be submitted via the website http://ssw10.oeaw.ac.at.
 
 
Important dates:
Deadline for paper submission: May 10th, 2019
Final deadline for paper submission: May 17th, 2019
Notification of acceptance: July 1st, 2019
Camera-ready final versions: July 19th, 2019
Workshop: 20-22 September 2019
Blizzard Challenge Workshop 2019: September 23, 2019
 
We are looking forward to seeing you in Vienna.
Sincerely,
The SSW organising committee (Michael Pucher, Junichi Yamagishi, Sebastian Le Maguer, Christian Kaseß, Friedrich Neubarth)

3-2-6(2019-09-20) The 8th ISCA Workshop on Speech and Language Technology in Education (SLaTE 2019), Graz, Austria

Event: The 8th ISCA Workshop on Speech and Language Technology in Education (SLaTE 2019)

Location: Graz, Austria

Dates: September 20 - 21, 2019

Website: https://sites.google.com/view/slate2019

 


3-3 Other Events
3-3-1(2019-02-22) ASVspoof 2019 CHALLENGE: Future horizons in spoofed/fake audio detection

*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*

ASVspoof 2019 CHALLENGE:
Future horizons in spoofed/fake audio detection
http://www.asvspoof.org/ 
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=

Can you distinguish computer-generated or replayed speech from authentic/bona fide speech? Are you able to design algorithms to detect spoofs/fakes automatically?

Are you concerned with the security of voice-driven interfaces?

Are you searching for new challenges in machine learning and signal processing?

 

Join ASVspoof 2019, the effort to develop next-generation countermeasures for the automatic detection of spoofed/fake audio. Combining the forces of leading research institutes and industry, ASVspoof 2019 encompasses two separate sub-challenges in logical and physical access control, and provides a common database of the most advanced spoofing attacks to date. The aim is to study both the limits and opportunities of spoofing countermeasures in the context of automatic speaker verification and fake audio detection.


CHALLENGE TASK

Given a short audio clip, determine whether it represents authentic/bona fide human speech, or a spoof/fake (replay, synthesized speech or converted voice). You will be provided with a large database of labelled training and development data and will develop machine learning and signal processing countermeasures to distinguish automatically between the two. Countermeasure performance will be evaluated jointly with an automatic speaker verification (ASV) system provided by the organisers.

 

BACKGROUND:
The ASVspoof 2019 challenge follows on from two previous ASVspoof challenges, held in 2015 and 2017. The 2015 edition focused on spoofed speech generated with text-to-speech (TTS) and voice conversion (VC) technologies. The 2017 edition focused on replay spoofing. The 2019 edition is the first to address all three forms of attack and the latest, cutting-edge spoofing attack technology.

 

ADVANCES:

Today's state-of-the-art TTS and VC technologies produce speech signals that are practically indistinguishable from bona fide speech. The LOGICAL ACCESS sub-challenge aims to determine whether these advances in TTS and VC pose a greater threat to the reliability of automatic speaker verification and spoofing countermeasure technologies. The PHYSICAL ACCESS sub-challenge builds upon the 2017 edition with a far more controlled evaluation setup, which extends the focus of ASVspoof to fake audio detection in scenarios such as the manipulation of voice-driven interfaces (e.g. smart speakers).

 

METRICS:

The 2019 edition also adopts a new metric, the tandem detection cost function (t-DCF). Adoption of the t-DCF metric aligns ASVspoof more closely with the field of ASV. The challenge nonetheless focuses on the development of standalone spoofing countermeasures; participation in ASVspoof 2019 does NOT require any expertise in ASV. The equal error rate (EER) used in previous editions remains a secondary metric, supporting the wider implications of ASVspoof involving fake audio detection.
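
To make the secondary metric concrete, below is a minimal Python/NumPy sketch of how an EER might be estimated from countermeasure scores, assuming higher scores indicate bona fide speech. The function name, the toy scores, and the simple threshold sweep are illustrative assumptions only and are not part of the official ASVspoof evaluation tooling.

  import numpy as np

  def compute_eer(bonafide_scores, spoof_scores):
      # Estimate the equal error rate: the operating point where the
      # false rejection rate (FRR) and false acceptance rate (FAR) cross.
      scores = np.concatenate([bonafide_scores, spoof_scores])
      labels = np.concatenate([np.ones_like(bonafide_scores),
                               np.zeros_like(spoof_scores)])
      order = np.argsort(scores)   # sweep candidate thresholds in score order
      labels = labels[order]
      n_bona, n_spoof = labels.sum(), (1 - labels).sum()
      frr = np.cumsum(labels) / n_bona              # bona fide trials rejected
      far = 1.0 - np.cumsum(1 - labels) / n_spoof   # spoof trials accepted
      idx = np.argmin(np.abs(frr - far))
      return float((frr[idx] + far[idx]) / 2.0)

  # Toy example with made-up scores (higher = more bona fide-like).
  eer = compute_eer(np.array([2.1, 1.8, 0.9, 2.5]),
                    np.array([0.2, 0.7, -0.3, 1.0]))
  print('EER = %.3f' % eer)   # prints 0.250 for these toy scores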

SCHEDULE:

Training and development data release: 19th December 2018

Participant registration deadline: 8th February 2019

Evaluation data release: 15th February 2019

Deadline to submit evaluation scores: 22nd February 2019

Organisers return results to participants: 15th March 2019

INTERSPEECH paper submission deadline: 29th March 2019

 

REGISTRATION:

Registration should be performed once only for each participating entity, by sending an email to registration@asvspoof.org with 'ASVspoof 2019 registration' as the subject line. The mail body should include: (i) the name of the team; (ii) the name of the contact person; (iii) their country; (iv) their status (academic/non-academic); and (v) the challenge scenario(s) for which they wish to participate (indicative only). Data download links will be communicated to registered contact persons only.

 

MAILING LIST:

Subscribe to the general mailing list by sending an e-mail with the subject line 'subscribe asvspoof2019' to sympa@asvspoof.org. To post messages to the mailing list itself, send e-mails to asvspoof2019@asvspoof.org.

 

ORGANIZERS*:

Junichi Yamagishi, NII, Japan & Univ. of Edinburgh, UK

Massimiliano Todisco, EURECOM, France

Md Sahidullah, Inria, France

Héctor Delgado, EURECOM, France

Xin Wang, National Institute of Informatics, Japan

Nicholas Evans, EURECOM, France

Tomi Kinnunen, University of Eastern Finland, Finland

Kong Aik Lee, NEC, JAPAN

Ville Vestman, University of Eastern Finland, Finland

(*) Equal contribution

CONTRIBUTORS:

University of Edinburgh, UK; Nagoya University, Japan, University of Science and Technology of China, China; iFlytek Research, China; Saarland University / DFKI GmbH, Germany; Trinity College Dublin, Ireland; NTT Communication Science Laboratories, Japan; HOYA, Japan; Google LLC (Text-to-Speech team, Google Brain team, Deepmind); University of Avignon, France; Aalto University, Finland; University of Eastern Finland, Finland; EURECOM, France.


FURTHER INFORMATION:

info@asvspoof.org

 
 
 
 

3-3-2(2019-03-06) 30th Conference on Electronic Speech Signal Processing (ESSV) 2019, Dresden, Germany

CALL FOR PAPERS:

30th Conference on Electronic Speech Signal Processing (ESSV) 2019

Venue: TU Dresden, Germany

Web: www.essv.de/essv2019

ORGANIZERS:

Peter Birkholz, Simon Stone (TU Dresden, Germany)

CONFERENCE TOPICS:

* Speech recognition and natural language understanding
* Speech synthesis and natural language generation
* Measuring, processing, and modeling articulation
* Phonetic, syntactic, semantic, and pragmatic aspects in technical applications
* Multimodal dialog systems
* Models of speech acquisition
* Acoustic and visual pattern recognition
* Musical, biological, and technical signals: applications and processing
* Applications in medical, healthcare, and rehabilitation technologies
* Cognitive and neural systems
* Speech technology in industrial and home environments

KEYNOTE SPEAKERS:

* Ercan Altinsoy
* Sidney Fels
* Christian Herbst
* Jose Gonzalez
* Korin Richmond

IMPORTANT DATES:

* Conference: March 6 - 8, 2019
* Submission deadline for extended abstract (1 page): November 30, 2018
* Notification of acceptance: until December 14, 2018
* Camera-ready paper submission deadline: January 25, 2019

CONFERENCE LANGUAGES:

German and English


3-3-3(2019-03-25) 13th INTERNATIONAL CONFERENCE ON LANGUAGE AND AUTOMATA THEORY AND APPLICATIONS (LATA 2019), Saint Petersburg, Russia (updated)

13th INTERNATIONAL CONFERENCE ON LANGUAGE AND AUTOMATA THEORY AND APPLICATIONS
 

LATA 2019
 
Saint Petersburg, Russia
 
March 25-29, 2019
 
Organized by:
           
Saint Petersburg State University
and
Institute for Research Development, Training and Advice, Brussels/London
 
http://lata2019.irdta.eu/
*************************************************************************
 
AIMS:
 
LATA is a conference series on theoretical computer science and its applications. LATA 2019 will reserve significant room for young scholars at the beginning of their career. It will aim at attracting contributions from classical theory fields as well as application areas.
 
VENUE:
 
LATA 2019 will take place in Saint Petersburg, whose historic centre is a UNESCO World Heritage Site. The conference site shall be the historical Twelve Collegia building (https://en.wikipedia.org/wiki/Twelve_Collegia), built ca. 1740, which was used by the Russian government in the 18th century and which has been the main building of Saint Petersburg State University since 1835.
 
SCOPE:
 
Topics of either theoretical or applied interest include, but are not limited to:
 
algebraic language theory
algorithms for semi-structured data mining
algorithms on automata and words
automata and logic
automata for system analysis and programme verification
automata networks
automatic structures
codes
combinatorics on words
computational complexity
concurrency and Petri nets
data and image compression
descriptional complexity
foundations of finite state technology
foundations of XML
grammars (Chomsky hierarchy, contextual, unification, categorial, etc.)
grammatical inference and algorithmic learning
graphs and graph transformation
language varieties and semigroups
language-based cryptography
mathematical and logical foundations of programming methodologies
parallel and regulated rewriting
parsing
patterns
power series
string processing algorithms
symbolic dynamics
term rewriting
transducers
trees, tree languages and tree automata
weighted automata
 
STRUCTURE:
 
LATA 2019 will consist of:
 
invited talks
peer-reviewed contributions
 
INVITED SPEAKERS:
 
Henning Fernau (University of Trier), Modern Aspects of Complexity within Formal Languages
 
Paweł Gawrychowski (University of Wrocław), tba
 
Edward A. Lee (University of California, Berkeley), Observation, Interaction, Determinism, and Free Will
 
Vadim Lozin (University of Warwick), From Words to Graphs, and Back
 
Esko Ukkonen (University of Helsinki), Pattern Discovery in Biological Sequences
 
PROGRAMME COMMITTEE:
 
Krishnendu Chatterjee (Institute of Science and Technology Austria, AT)
Bruno Courcelle (University of Bordeaux, FR)
Manfred Droste (University of Leipzig, DE)
Travis Gagie (Diego Portales University, CL)
Peter Habermehl (Paris Diderot University, FR)
Tero Harju (University of Turku, FI)
Markus Holzer (University of Giessen, DE)
Radu Iosif (Verimag, FR)
Kazuo Iwama (Kyoto University, JP)
Juhani Karhumäki (University of Turku, FI)
Lila Kari (University of Waterloo, CA)
Juha Kärkkäinen (University of Helsinki, FI)
Bakhadyr Khoussainov (University of Auckland, NZ)
Sergey Kitaev (University of Strathclyde, UK)
Shmuel Tomi Klein (Bar-Ilan University, IL)
Olga Kouchnarenko (University of Franche-Comté, FR)
Thierry Lecroq (University of Rouen, FR)
Markus Lohrey (University of Siegen, DE)
Sebastian Maneth (University of Bremen, DE)
Carlos Martín-Vide (Rovira i Virgili University, ES, chair)
Giancarlo Mauri (University of Milano-Bicocca, IT)
Filippo Mignosi (University of L'Aquila, IT)
Victor Mitrana (Polytechnic University of Madrid, ES)
Joachim Niehren (INRIA Lille, FR)
Alexander Okhotin (Saint Petersburg State University, RU)
Dominique Perrin (University of Paris-Est, FR)
Matteo Pradella (Polytechnic University of Milan, IT)
Jean-François Raskin (Université Libre de Bruxelles, BE)
Marco Roveri (Bruno Kessler Foundation, IT)
Karen Rudie (Queen's University, CA)
Wojciech Rytter (University of Warsaw, PL)
Kai Salomaa (Queen's University, CA)
Sven Schewe (University of Liverpool, UK)
Helmut Seidl (Technical University of Munich, DE)
Ayumi Shinohara (Tohoku University, JP)
Hans Ulrich Simon (Ruhr-University of Bochum, DE)
William F. Smyth (McMaster University, CA)
Frank Stephan (National University of Singapore, SG)
Martin Sulzmann (Karlsruhe University of Applied Sciences, DE)
Jorma Tarhio (Aalto University, FI)
Stefano Tonetta (Bruno Kessler Foundation, IT)
Rob van Glabbeek (Data61, CSIRO, AU)
Margus Veanes (Microsoft Research, US)
Mahesh Viswanathan (University of Illinois, Urbana-Champaign, US)
Mikhail Volkov (Ural Federal University, RU)
Fang Yu (National Chengchi University, TW)
Hans Zantema (Eindhoven University of Technology, NL)
 
ORGANIZING COMMITTEE:
 
Alexander Okhotin (Saint Petersburg, co-chair)
Manuel Parra-Royón (Granada)
Dana Shapira (Ariel)
David Silva (London, co-chair)
 
SUBMISSIONS:
 
Authors are invited to submit non-anonymized papers in English presenting original and unpublished research. Papers should not exceed 12 single-spaced pages (all included) and should be prepared according to the standard format for Springer Verlag's LNCS series (see http://www.springer.com/computer/lncs?SGWID=0-164-6-793341-0). If necessary, authors may exceptionally provide missing proofs in a clearly marked appendix.
 
Submissions have to be uploaded to:
 
https://easychair.org/conferences/?conf=lata2019
 
PUBLICATIONS:
 
A volume of proceedings published by Springer in the LNCS series will be available by the time of the conference.
 
A special issue of a major journal will be later published containing peer-reviewed substantially extended versions of some of the papers contributed to the conference. Submissions to it will be by invitation.
 
REGISTRATION:
 
The registration form can be found at:
 
http://lata2019.irdta.eu/Registration.php
 
DEADLINES (all at 23:59 CET):
 
Paper submission: November 18, 2018
Notification of paper acceptance or rejection: December 16, 2018
Final version of the paper for the LNCS proceedings: December 23, 2018
Early registration: December 23, 2018
Late registration: March 11, 2019
Submission to the journal special issue: June 29, 2019
 
QUESTIONS AND FURTHER INFORMATION:
 
david (at) irdta.eu


3-3-4(2019-03-28) The Second (2019) IEEE International Conference on Multimedia Information Processing and Retrieval (MIPR'19); San José, CA, USA

The Second (2019) IEEE International Conference on Multimedia Information Processing and Retrieval (MIPR'19)

http://www.ieee-mipr.org
San Jose, CA, USA
March 28-30, 2019

New forms of multimedia data (such as text, numbers, tags, networking, signals,
geo-tagged information, graphs/relationships, 3D/VR/AR and sensor data, etc.)
have emerged in many applications in addition to traditional multimedia data
(image, video, audio). Multimedia has become the biggest of big data and
the foundation of today's data-driven discoveries. Almost all disciplines of
science and engineering, as well as the social sciences, involve multimedia data
in some form, for example recording experiments, driverless cars, unmanned aerial
vehicles, smart communities, biomedical instruments, and security surveillance.
Some recent events demonstrate the power of real-time broadcast of unfolding
events on social networks. Multimedia data is not just big in volume, but also
multi-modal and mostly unstructured. Storing, indexing, searching, integrating,
and recognizing content in these vast amounts of data creates unprecedented
challenges. Even though significant progress has been made in processing
multimedia data, today's solutions are inadequate for handling data from
millions of sources simultaneously.

The IEEE International Conference on Multimedia Information
Processing and Retrieval (IEEE-MIPR) aims to provide a forum for original research
contributions and practical system design, implementation, and applications
of multimedia information processing and retrieval for single modality or
multiple modalities. The target audience includes university researchers,
scientists, industry practitioners, software engineers, and graduate students
who need to become acquainted with technologies for big data analytics, machine
intelligence, and information fusion in multimedia information processing and retrieval.
A collection of keynotes, open panels, and workshops will be held, together
with paper/poster sessions.

The conference will accept regular papers (6 pages), short papers (4 pages),
and demo papers (2 pages). Authors are encouraged to compare their approaches,
qualitatively or quantitatively, with existing work and explain the strengths
and weaknesses of the new approaches. Authors of selected submissions will be
invited to submit to journal special issues.

The conference includes (but is not limited to) the following topics of multimedia
data processing and retrieval:

Multimedia Retrieval
  * Multimedia Search and Recommendation
  * Web-Scale Retrieval
  * Relevance Feedback, Active/Transfer Learning
  * 3D and sensor data retrieval
  * Multimodal Media (images, videos, texts, graph/relationship) Retrieval
  * High-Level Semantic Multimedia Features

Machine Learning/Deep Learning/Data Mining
  * Deep Learning in Multimedia Data and / or Multimodal Fusion
  * Deep Cross-Learning for Novel Features and Feature Selection
  * High-Performance Deep Learning (Theories and Infrastructures)
  * Spatio-Temporal Data Mining

Content Understanding and Analytics
  * Multimodal/Multisensor Integration and Analysis
  * Effective and Scalable Solution for Big Data Integration
  * Affective and Perceptual Multimedia
  * Multimedia/Multimodal Interaction Interfaces with humans

Multimedia and Vision
  * Multimedia Telepresence and Virtual/Augmented/Mixed Reality
  * Visual Concept Detection
  * Object Detection and Tracking
  * 3D Modeling, Reconstruction, and Interactive Applications

Systems and Infrastructures
  * Multimedia Systems and Middleware
  * Telepresence and Virtual/Augmented/Mixed Reality
  * Software Infrastructure for Data Analytics
  * Distributed Multimedia Systems and Cloud Computing

Data Management
  * Multimedia Data Collections, Modeling, Indexing, or Storage
  * Data Integrity, Security, Protection, Privacy
  * Standards and Policies for Data Management

Novel Applications
  * Multimedia applications for health and sports
  * Multimedia applications for culture and education
  * Multimedia applications for fashion and living
  * Multimedia applications for security and safety
  * Any other novel applications

Internet of Multimedia Things
  * Real-Time Data Processing
  * Autonomous Systems such as Driverless Cars, Robots, and Drones
  * Mobile and Wearable Multimedia

Important Dates:
===============
  * Workshop proposals: September 15, 2018
  * Workshop notification: October 1, 2018
  * Paper submission: October 1, 2018
  * Notification of acceptance: November 20, 2018
  * Camera ready due: January 20, 2019 
  * Author registration due:  January 20, 2019


General Co-Chairs:
===============
Mohan Kankanhalli, National University of Singapore, Singapore
Rainer Lienhart, Universitat Augsburg, Germany
Chengcui Zhang, University of Alabama, USA


Program Co-Chairs:
===============
Min Chen, University of Washington, USA
Leonel Sousa, Universidade de Lisboa, Portugal
Guan-Ming Su, Dolby Labs, USA
Yonghong Tian, Beijing University, China

--

Back  Top

3-3-5(2019-04-24) CfP IWSDS 2019 Special Session -- Dialogue systems and lifelong learning, Siracusa, Sicily, Italy

Call for Papers

IWSDS 2019 Special Session -- Dialogue systems and lifelong learning
April 24-26, 2019
Siracusa, Sicily, Italy

* DSLL description

The topic of dialogue systems and chatbots has been gaining renewed interest in recent years, particularly thanks to recent developments in deep neural networks. Nevertheless, most of the proposed approaches require very large amounts of data, which are difficult to obtain for dialogue. Methods that fill this data gap, allowing data-driven dialogue systems trained on a specific domain and task to improve over time, and even to learn new domains or tasks cumulatively, are therefore of great interest. This direction of research is called lifelong learning or continuous learning. From another angle, a further research paradigm that allows for continuous learning is to design systems that are able to learn a new task or domain through interaction, as a student would with a teacher.

The main objective of this special session is to gather researchers interested in dialogue systems that interact with users in order to learn about new domains or acquire new knowledge.

We invite submissions on all aspects of dialogue systems, lifelong learning and interactive learning.

Topics include but are not limited to:

- Dialogue systems that improve over time
- Intelligent systems that use interaction to gather new information
- Specific techniques that can enable learning through interaction, such as online reinforcement learning, imitation learning, etc.
- Corpora for interactive learning with dialogue
- Demonstration of systems that learn through interaction
- Evaluation methodologies

* Session Committee

Eneko Agirre, University of the Basque Country, Spain
Mark Cieliebak, Zurich University of Applied Sciences, Switzerland
Olivier Galibert, LNE, France
Sahar Ghannay, LIMSI, Univ. Paris Sud, France
Arantxa Otegi, University of the Basque Country, Spain
Anselmo Peñas, Universidad Nacional de Educación a Distancia, Spain
Camille Pradel, Synapse Développement
Sophie Rosset, LIMSI, CNRS, France
Anne Vilnat, LIMSI, Univ. Paris Sud, France

* Important dates

Paper submission: January 15
Author notification: January 25
Camera ready: February 15

For the paper submission process, please check the IWSDS 2019 website <https://iwsds2019.unikore.it/> and the paper submission website <https://easychair.org/conferences/?conf=iwsds2019>

* Background

Artificial intelligence has made significant advances in solving prediction and dialogue tasks. But most of the approaches are based on off-line and supervised learning, where algorithms take annotated data as input and build a model. Further work is necessary to build autonomous agents which are capable of learning from the environment and from interactions, without explicit supervision for each new task. The goal of Lifelong Learning (LL, also known as Learning to Learn) is to research methods for continuous learning of various tasks over time and learning commonalities among them (Chen and Liu, 2018). Current LL systems exploit similarities between the learned models for past tasks using task meta-features (Eaton and Ruvolo, 2013) and corresponding methods to learn representations of tasks, using for instance neural networks and ensembles of learners. Still, LL assumes that manual annotations exist for each item to be learned, while autonomous agents rarely have access to such supervision. In a realistic scenario the agent receives feedback only after completing a complex task comprising several decisions, and needs to guess which of the decisions were correct or incorrect.

Current interactions between humans and computers are limited to constrained dialogues, where dialogue systems (aka chatbots or conversational agents) are trained on a number of annotated sample dialogues from a narrow domain. The development cost is considerable, both in building the representation of the knowledge for the target domain and in the dialogue management proper, where one of the most important shortcomings is the variability of human language and the large amount of background knowledge that needs to be shared for effective dialogue. In addition, most of the learned knowledge needs to be learned nearly from scratch for each new dialogue task, including both the domain knowledge (learned using knowledge induction or knowledge bases) and the dialogue management module (adapted to the new domain). Interestingly, humans use dialogue to improve their own knowledge of a domain. That is, people interact with other people in order to confirm, retract or refine their understanding. This topic of learning through dialogue is an emerging one, with more and more attempts to propose frameworks and tasks to evaluate such systems. Most recent work in this area concerns learning through conversation where the supervision is given by user feedback (Weston, 2016), the way the learning system can ask questions in an online reinforcement learning framework (Li et al., 2017), and how to learn and infer new knowledge during a dialogue (Mazumder et al., 2018; Letard et al., 2016).
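
As a deliberately simplified illustration of this 'learning from user feedback' setting (a sketch only, not an implementation of any of the systems cited above; the feature vectors and candidate replies are hypothetical placeholders), the following Python snippet shows a bandit-style agent whose reply policy is updated online from +1/-1 user feedback with a REINFORCE step:

import numpy as np


class FeedbackDialogueAgent:
    """Chooses a reply among candidates and learns online from scalar user feedback."""

    def __init__(self, n_features: int, learning_rate: float = 0.1):
        self.w = np.zeros(n_features)   # linear scoring weights over reply features
        self.lr = learning_rate

    def choose(self, candidate_features: np.ndarray) -> int:
        """Sample a reply index from a softmax policy over candidate feature vectors."""
        scores = candidate_features @ self.w
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()
        self._last = (candidate_features, probs)
        return int(np.random.choice(len(probs), p=probs))

    def learn(self, chosen: int, feedback: float) -> None:
        """REINFORCE update from user feedback (+1 for a good reply, -1 for a bad one)."""
        feats, probs = self._last
        grad_log_pi = feats[chosen] - probs @ feats   # gradient of log pi(chosen)
        self.w += self.lr * feedback * grad_log_pi


# Hypothetical usage: 3 candidate replies, each described by 4 features.
# agent = FeedbackDialogueAgent(n_features=4)
# reply = agent.choose(np.random.rand(3, 4))
# agent.learn(reply, feedback=+1.0)   # the user liked the chosen reply

In a realistic lifelong-learning setting the feature extraction, candidate generation and reward signal would of course be far richer, but the online update loop keeps this basic structure.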

This special session will focus on methods and evaluation methodologies for learning through dialogue. All aspects involved in dialogue (natural language understanding, dialogue management, natural language generation, knowledge management) are of interest.

This special session will provide a focal point for the growing research community on interactive learning with and by dialogue.

** References


Z. Chen, and B. Liu. Lifelong Machine Learning (2nd Edition). Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan and Claypool Publishers. August 2018, 207p

E. Eaton and P. L. Ruvolo. 2013. ELLA: An efficient lifelong learning algorithm. In ICML 2013.

Sahisnu Mazumder, Nianzu Ma, Bing Liu. Towards a Continuous Knowledge Learning Engine for Chatbots. arXiv:1802.06024 [cs.CL], 16 Feb. 2018.

Jiwei Li, Alexander H. Miller, Sumit Chopra, Marc'Aurelio Ranzato, Jason Weston, Learning through Dialogue Interactions by Asking Questions, ICLR 2017.

Jason E. Weston. Dialog-based language learning. NIPS 2016.

Vincent Letard, Sophie Rosset, Gabriel Illouz. Incremental Learning From Scratch Using Analogical Reasoning. ICTAI 2016.

Back  Top

3-3-6(2019-04-24) CfP IWSDS 2019: International Workshop on Spoken Dialogue Systems Technology, Syracuse, Sicily, Italy

CALL FOR PAPERS
IWSDS 2019: International Workshop on Spoken Dialogue 
Systems Technology
Place: Siracusa, Sicily, Italy
Main Conference Dates: April 24-26, 2019
Web site: https://iwsds2019.unikore.it/

https://easychair.org/cfp/IWSDS2019 

http://www.wikicfp.com/cfp/servlet/event.showcfp?eventid=80840&copyownerid=128361 

========================================

The INTERNATIONAL WORKSHOP ON SPOKEN DIALOGUE SYSTEMS 
TECHNOLOGY (IWSDS) 2019 invites paper submissions on any 
topic related to the main conference theme, 'Increasing 
naturalness and flexibility in spoken dialogue 
interaction':


* Dialogue systems and reasoning
* Machine learning methods for spoken dialogue systems
* Multi-party and multi-lingual dialogue systems
* Open and multi domain systems
* Engagement and emotion in human-robot interactions
* Spoken dialog systems for low-resource languages
* Big data and large scale spoken dialogue systems
* Domain Transfer and adaptation techniques for spoken 
dialog systems
* Spoken dialogue systems applications
* Connecting spoken dialogue systems to global AI
* Personalized conversational agents
* Human-Robot dialogue systems
* Resources for creating dialogue systems
* Multimodal dialogue systems
* Reasoning and Q&A through dialogue interactions

However, submissions are not limited to these topics, and 
submission of papers in all areas of spoken dialogue 
systems is encouraged. We particularly welcome papers that 
can be illustrated by a demonstration, and will organize 
the conference in order to best accommodate these papers, 
whatever their category.

As usual, a selection of accepted papers will be published 
in a book by Springer following the conference (Springer 
LNEE series, SCOPUS and other important indexes).

Authors are requested to submit PDF files of their 
manuscripts using the paper submission system: 
https://easychair.org/conferences/?conf=iwsds2019

We distinguish between the following categories of 
submissions:

* Long Research Papers are reserved for reports on mature 
research results. The expected length of a long paper 
should be in the range of 8-12 pages, including 
references.
* Short Research Papers should be in the range of 4-6 
pages, including references. Authors may choose this 
category if they wish to report on smaller case studies or 
ongoing but interesting and original research efforts.
* Position Papers deal with novel research ideas or 
viewpoints that are not yet much researched, describe 
trends or fruitful starting points for future research, 
and elicit discussion. They should be 2 pages long, 
excluding references.
* Demo Submissions - System Papers: Authors who wish to 
demonstrate their system may choose this category and 
provide a description of their system and demo. System 
papers should not exceed 6 pages in total.

IWSDS 2019 requires that all authors wishing to present a 
paper take into account:

* The paper is substantially original and will not be 
submitted to any other conference or journal during the 
IWSDS 2019 review period.
* The paper does not contain any plagiarism.
* The paper will be presented by one of the authors 
in-person at the conference site according to the schedule 
published. Any paper accepted in the technical program, 
but not presented on-site will be withdrawn from the 
official conference proceedings.

----------------------------------------------------
Important dates:

Paper submission deadline: December 10, 2018
Acceptance/rejection notification: January 25, 2019
Camera-ready paper due: February 15, 2019
Early bird registration deadline: February 26, 2019
Conference dates: April 24-26, 2019
----------------------------------------------------

Templates for formatting are available on the conference 
website: https://iwsds2019.unikore.it

* Latex Style and Template: 
https://iwsds2019.unikore.it/resources/svmult.zip
* Word Template: 
https://iwsds2019.unikore.it/resources/T1-book.zip
* Requirements for submitting figures that are acceptable: 
https://iwsds2019.unikore.it/resources/Art_Guidelines.pdf

For more information, you can visit the official website 
of the conference: https://iwsds2019.unikore.it/


General Chairs
   Sabato Marco Siniscalchi
   Haizhou Li (Curtesy Assistant)

Technical Program Chairs
    Sandro Cumani
    Valerio Mario Salerno

Back  Top

3-3-7(2019-04-24) European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2019), Bruges, Belgium

ESANN 2019: European Symposium on Artificial Neural Networks,

Computational Intelligence and Machine Learning

Bruges, Belgium, 24-25-26 April 2019 

http://www.esann.org/

 

Call for papers

 

The call for papers is available at http://www.esann.org/.  The deadline for submitting papers is November 19, 2018.

 

The ESANN conferences cover machine learning, artificial neural networks, statistical information processing and computational intelligence. Mathematical foundations, algorithms and tools, and applications are covered.  In addition to regular sessions, 7 special sessions will be organized on the following topics:

- Streaming data analysis, concept drift and analysis of dynamic data sets

- Embeddings and Representation Learning for Structured Data

- Parallel and Distributed Machine Learning: Theory and Applications

- Societal Issues in Machine Learning: When Learning from Data is Not Enough

- Reliable Machine Learning

- Statistical physics of learning and inference

- 60 Years of Weightless Neural Systems

 

ESANN 2019 builds upon a successful series of conferences organized each year since 1993. ESANN has become a major scientific event in the machine learning, computational intelligence and artificial neural networks fields over the years.

 

The conference will be organized in Bruges, one of the most beautiful medieval towns in Europe. Designated as the 'Venice of the North', the city has preserved all the charms of its medieval heritage. Its center, which is inscribed on the UNESCO World Heritage list, is in itself a real open-air museum.

 

We hope to receive your submission to ESANN 2019 and to see you in Bruges next year!

 


 

 

 

 

========================================================

ESANN - European Symposium on Artificial Neural Networks,

Computational Intelligence and Machine Learning

http://www.esann.org/

 

* For submissions of papers, reviews, registrations:

Michel Verleysen

Univ. Cath. de Louvain - Machine Learning Group

Back  Top

3-3-8(2019-04-24) Workshop on Chatbots and Conversational Agent Technologies & Dialogue Breakdown Detection Challenge @ IWSDS 2019, Siracusa, Sicily, Italy
Workshop on Chatbots and Conversational Agent Technologies & Dialogue Breakdown Detection Challenge @ IWSDS 2019 (https://iwsds2019.unikore.it/)

24-26 April, 2019, Siracusa, Sicily, Italy
 
Workshop Description
 
Although chat-oriented dialogue systems have been around for many years (almost fifty years indeed, if we consider Weizenbaum's Eliza system as the starting milestone), they have recently been gaining a lot of popularity in both research and commercial arenas. From the commercial standpoint, chat-oriented dialogue seems to provide an excellent means to engage users for entertainment purposes, as well as to give a more human-like appearance to established vertical goal-oriented dialogue systems.
 
From the research perspective, on the other hand, this kind of system poses interesting challenges and problems to the research community. The main objective of the workshop is to bring together researchers working on problems related to chat-oriented dialogue, to promote discussion and knowledge sharing about the state of the art and novel techniques in this field, as well as to coordinate a collaborative effort to collect/generate data, resources and evaluation protocols for future research in this area.
 
Topics of Interest
 
This workshop invites original research contributions on all aspects of chat-oriented dialogue, including closely related areas such as knowledge representation and reasoning, language generation, and natural language understanding, among others. In this sense, the workshop invites both long and short paper submissions in areas including (but not restricted to):
 
- Chat-oriented dialogue systems
- Data collections and resources
- Information extraction
- Natural language understanding
- General domain knowledge representation
- Common sense and reasoning
- Natural language generation
- Emotion detection and generation
- Sense of humor detection and generation
- Chat-oriented dialogue evaluation
- User studies and system evaluation
- Multimodal human-computer interaction
 
Paper Format and Submissions
 
Paper submissions to WOCHAT should follow the IWSDS 2019 paper submission policy: single-blind review and in Springer LNCS format (https://iwsds2019.unikore.it/call-for-paper). 
 
Prospective authors are invited to submit full papers (up to 12 pages) or short papers (up to 8 pages). Paper submissions must be done in electronic format through the IWSDS 2019 Conference Submission page (https://easychair.org/account/signin.cgi?key=81335471.oDm5m3UuCGJG2I8K) where you must select 'WOCHAT' under the available submission categories.
 
Important Dates
 
- January 15, 2019: Paper Submission Deadline
- January 25, 2019: Paper Acceptance Notification
- February 15, 2019: Camera Ready Version Deadline
- April 2019: WOCHAT @ IWSDS 2019 in Sicily, Italy
Back  Top

3-3-9(2019-05-12) 2019 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Brighton, UK

2019 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)
12-17 May 2019, Brighton, UK
Special Session Proposal Deadline: 20 August 2018
Tutorial Proposal Deadline: 22 October 2018
Paper Submissions Deadline: 29 October 2018
Signal Processing Letters Deadline: 14 January 2019
Sponsored by IEEE SPS

Back  Top

3-3-10(2019-05-14) 8th Journées de Phonétique Clinique, Mons, Belgium
JPC2019 - 8th Journées de Phonétique Clinique (Clinical Phonetics Days)

Mons, 14-16 May 2019

Website: http://langage.be/JPC/indexjpc.html

 

 

Second call for papers

 

Since their creation in 2005, the Journées de Phonétique Clinique have been organized regularly on a biennial basis. Most often held in France (Paris in 2005 and 2017, Grenoble in 2007, Aix-en-Provence in 2009, Strasbourg in 2011 and Montpellier in 2015), they will take place in Belgium in 2019, as they already did when organized in Liège in 2013. The Phonetics Laboratory of the University of Mons (under the aegis of the Institut de Recherche en Sciences et Technologies du Langage) will host the event from 14 to 16 May 2019.

 

An international scientific meeting, the Journées de Phonétique Clinique address questions concerning the normal and pathological functioning of voice, speech and language. They are aimed at a multidisciplinary scientific community bringing together researchers, engineers, physicians, and speech and language therapists of all kinds. They raise questions relating to medicine as well as psychology, linguistics and, more generally, most fields related to the language sciences.

 

The JPC are intended as a space for exchange marked by conviviality and respect for inter-individual differences, where professionals, established scientists and young researchers feel free to present, in a spirit of openness, their reflections and their completed or ongoing work, whether based on the exploitation of empirical data, the development of models or the analysis of clinical applications, concerning both healthy and pathological subjects.

 

For this eighth edition, two plenary lectures will be given:

- Prof. Pascale Tremblay, Faculté de Médecine, Université Laval & Laboratoire des neurosciences de la parole et de l'audition, Centre de Recherche CERVO, Québec:
'Aging of speech and voice in singers and non-singers'
- Prof. Virginie Woisard, CHU - Institut Universitaire du Cancer de Toulouse (in collaboration with Jérôme Farinas, Institut de Recherche en Informatique de Toulouse, and Corine Astésano, Laboratoire Octogone-Lordat, Université de Toulouse):
'Speech intelligibility and quality of life: reflections on the results of the carcinologic speech severity index study'.
 

The first two days (14-15 May) will be devoted to the conference proper. The third day (16 May) will be organized around thematic workshops (e.g. tools for automatic processing of pathological speech, phonetic vs. phonological disorders, etc.) and a speech-therapy fair open to students and professionals. A call for workshop proposals has been issued: http://www.langage.be/JPC/AppelAteliers.html

 

Paper proposals (400-word abstract, excluding title, authors and references) should address the following topics (non-exhaustive list):

- Disorders of the oro-pharyngo-laryngeal system

- Speech and disorders of the perceptual, auditory and visual systems

- Cognitive and motor disorders of speech and language

- Modeling of pathological speech and voice

- Functional assessment of language, speech and voice

- Diagnosis and treatment of disorders/pathologies of speech and of the spoken and singing voice

- Instrumentation and resources in clinical phonetics

- Bilingualism and development throughout the lifespan

- Etc.

  

Important dates: 

- 1 February 2019: Abstract submission via EasyChair: https://easychair.org/conferences/?conf=jpc2019

- 15 March 2019: Notification to authors

- 1 April - 1 May 2019: Early-bird registration

- 1 May 2019: Final version of abstracts

- 14-16 May 2019: Conference


We look forward to welcoming you to Mons next May!
On behalf of the organizing committee,
Véronique Delvaux

 
Prof. Véronique Delvaux, PhD
FNRS Research Associate at UMONS
Lecturer at UMONS & ULB
Service de Métrologie et Sciences du Langage
Office -1.7, Place du Parc, 18, 7000 Mons
+3265373140
Back  Top

3-3-11(2019-05-16) CfWorkshops JPC 2019, Mons, Belgium

 

Call for Workshops: JPC 2019

16 May 2019

Université de Mons, Belgium

 


Objectives

To encourage dialogue and exchanges between clinicians and researchers, the JPC 2019 organizing committee is launching a call for cross-disciplinary workshops focused on a particular theme. These workshops will be co-organized by at least one clinician and one researcher, in an open and participatory format, e.g. round table, debate, tutorial, demonstration session, seminars...

 

Four workshops will take place on the morning of Thursday 16 May 2019, the last day of the 8th Journées de Phonétique Clinique. Each workshop will have a slot of at most 3 hours. The organization of the workshops will be entrusted to the leaders of the accepted workshop proposals. They will be responsible for inviting speakers or issuing a call for papers, defining the scientific program, and communicating with the workshop participants. They must ensure that participants register for the workshop, whether or not they are registered for the JPC. The JPC organizers will handle the logistics of the workshops (room management, coffee breaks and publication of the abstracts in the conference book of abstracts). Invitations issued within these workshops will not be funded by the JPC 2019 organization.

 

Schedule

- Workshop date: Thursday 16 May 2019

- Proposal submission deadline: 18 January 2019

- Response from the JPC organizing committee: 25 January 2019

 

Proposal guidelines

Workshop proposals (2 pages maximum) should include the following elements:

- Title of the workshop

- Surname, first name, affiliation and e-mail address of the workshop leaders (at least one clinician and one researcher)

- Surname, first name and contact details of the contact person for communication with the organizers

- A brief description of the workshop theme

- The intended format (round table, debate, tutorial, demonstration session, seminars, etc.)

- Scientific committee

- Invited or expected participants (number, profile: teachers, researchers, clinicians, students, etc.)

 

Proposals should be sent in PDF format to the following address: JPC2019@umons.ac.be

They will be reviewed by the JPC 2019 program committee.

 

Publication in the book of abstracts

The requested format follows the JPC 2019 guidelines: an abstract of at most 400 words in French (excluding title, authors and bibliography).

 

Contact and information

JPC2019@umons.ac.be  

www.langage.be/JPC/indexjpc.html 

 

 

Back  Top


3-3-13(2019-06-04) 14th PAC conference (Phonologie de l’Anglais Contemporain / Phonology of Contemporary English), Aix-en-Provence, France

Call for papers: 14th PAC conference

(Phonologie de l’Anglais Contemporain / Phonology of Contemporary English)

 

PAC AIX 2019

Phonetic and phonological variation in contemporary English:

Xperience-Xperimentation

 

Laboratoire Parole et Langage, Aix-en-Provence, France

June 4-5 2019

 

Guest Speakers : Dominic Watt (U. York)

&

Emmanuel Ferragne (U. Paris Diderot)

 

We are pleased to announce the 2019 edition of the annual PAC conference, ‘Phonetic and phonological variation in contemporary English: Xperience / Xperimentation’, due to take place from Tuesday June 4 to Wednesday June 5, 2019, hosted by the Laboratoire Parole et Langage and Aix-Marseille University in Aix-en-Provence. We shall welcome as invited guest speakers Dominic Watt, from the University of York, and Emmanuel Ferragne, from Paris Diderot University. Both have worked on varieties of English and are currently working on forensic phonetics, among other topics.

 

The PAC programme (http://www.pacprogramme.net) gathers researchers interested in the study of variation in contemporary spoken English, adhering to a common protocol for data collection and annotation. The PAC conferences have been organized annually since 2000 and have welcomed researchers studying spoken English worldwide and from a wide variety of backgrounds.

 

The 2019 edition of the conference will focus on « experience/experimentation », in French « l’expérience » (which is polysemic). People working in the framework of the PAC programme are used to following a fieldwork approach, and the data collected within the framework of the PAC programme may easily be exploited by experimentalists as well. The idea is to open the conference to researchers working in a more experimental setting. We would like to make the link between the two domains, and our guest speakers will show that the two approaches may be complementary in the study of language. Papers based on either fieldwork or experimental methods, or combining the two, are welcome. A wide range of issues can be explored, matching the research axes of the PAC programme, such as, among others, studies of English in urban contexts, analyses of prosodic variation, studies of L2 English, or papers concerned with tools and annotation strategies.

The audience will consist of colleagues and students working on spoken English corpora and the presentations are all in English.

 

The deadline for sending a title with a one-page anonymous abstract (excluding references) is January 7, 2019.

 

Please visit the conference web site, where you can find a template for abstracts and upload your abstract submission: https://pacaix2019.sciencesconf.org/ (you will need to create a sciencesconf account if you don’t already have one).

 

Notification of acceptance will be sent by mid February 2019.

 

For any questions, you can contact us at the following address: pacaix2019@sciencesconf.org

 

Local Organising committee:

Julia Bongiorno

Stéphanie Desous

Sophie Herment

Joëlle Lavaud

Catherine Perrot

Claudia Pichon-Starke

Paul Sartre

Anne Tortel

Gabor Turcsan

Back  Top

3-3-14(2019-06-06) 22nd Rencontres Jeunes Chercheurs (RJC 2019 - ED 268), Paris, France

22nd Rencontres Jeunes Chercheurs (RJC 2019 - ED 268)

'Variation in linguistics: approaches, data, uses'

6th - 7th June 2019

University Sorbonne Nouvelle - Paris 3 (Maison de la Recherche) 

4, rue des Irlandais - 75005 PARIS



Dear Colleagues,

 

The «Langage et langues : description, théorisation, transmission» Doctoral School (ED 268, University Sorbonne Nouvelle) is glad to announce the Rencontres Jeunes Chercheurs (RJC 2019). The conference will be held from June 6th to June 7th, 2019 in Paris.

 

These Rencontres offer junior researchers preparing a Master's or a doctoral degree, as well as postdoctoral researchers, the opportunity to present their work in individual paper or poster sessions.

 

This year's theme tackles the issue of:

 

'Variation in linguistics: approaches, data, uses'

We encourage proposals concerned with this topic from any linguistic discipline. Everyone who is interested in presenting an individual paper or a poster is welcome to submit a 4,000-character abstract in English or French for double-blind review by January 21st, 2019 at 7pm (Paris time, UTC+1). Abstracts must be uploaded to the EasyChair platform at the following address: https://easychair.org/conferences/?conf=rjc2019

 

Individual papers will be allocated 20 minutes, and an additional 10 minutes for discussion.

The size of the posters is A0. Poster authors will be invited to give a short oral presentation of their work.

 

Agenda

Submission deadline: January 21st, 2019

Notification of acceptance: March 2019

Conference dates: June 6th - 7th, 2019

Conference location: Maison de la Recherche

Address: 4, rue des Irlandais - 75005 PARIS

Web site: http://www.univ-paris3.fr/rencontres-jeunes-chercheurs-301310.kjsp

 

 

Please find attached the call for papers and submission guidelines (in French and in English) for the Rencontres Jeunes Chercheurs. 

 

Please, circulate widely. Many thanks in advance.

Best regards,

---

The RJC Organizing Committee

rjc-ed268@univ-paris3.fr

Back  Top

3-3-15(2019-07-02) 2nd Call for Papers - ACM Intelligent Virtual Agents Conference - IVA 2019, Paris, France

2nd Call for Papers - ACM Intelligent Virtual Agents Conference - IVA 2019

2-5 July 2019, Paris, France

 

https://iva2019.sciencesconf.org

 

The 19th ACM International Conference on Intelligent Virtual Agents (IVA) will be held on July 2-5 2019 in Paris, France. The conference is organized by CNRS, Sorbonne University and Paris-Saclay University (France), and sponsored by ACM-SIGAI.

The IVA conference started in 1998 as a workshop on Intelligent Virtual Environments at the European Conference on Artificial Intelligence in Brighton, UK, which was followed by a similar one in 1999 in Salford, Manchester, UK. Then dedicated stand-alone IVA conferences took place in Madrid, Spain, in 2001, Irsee, Germany, in 2003, and Kos, Greece, in 2005. Since 2006 IVA has become a full-fledged annual international event, which was first held in Marina del Rey, California, then Paris, France, in 2007, Tokyo, Japan, in 2008, Amsterdam, The Netherlands, in 2009, Philadelphia, Pennsylvania, USA, in 2010, Reykjavik, Iceland, in 2011, Santa Cruz, USA, in 2012, Edinburgh, UK, in 2013, Boston, USA, in 2014, Delft, The Netherlands, 2015, Los Angeles, USA, 2016, Stockholm, Sweden, 2017. IVA 2018 was held in Sydney, Australia.

 

PAPER SUBMISSION

We invite submissions of full research papers on a broad range of topics, including but not limited to: theoretical foundations of virtual agents, agent modeling and evaluation, agents in games and simulations, and applications of virtual agents. Extended abstracts presenting late-breaking work are also welcome.

IVA 2019 is the 19th meeting of an interdisciplinary annual conference and the leading scientific forum for presenting research on modeling, developing and evaluating Intelligent Virtual Agents (IVAs) with a focus on communicative abilities and social behavior. IVAs are interactive digital characters that exhibit human-like qualities and can communicate with humans and each other using natural human modalities like facial expressions, speech and gesture. They are capable of real-time perception, cognition, emotion and action that allow them to participate in dynamic social environments. In addition to presentations on theoretical issues, the conference encourages the showcasing of working applications.

 

IVA 2019's special topic is 'Social Learning', that is, learning while interacting socially; agents can learn from humans and humans can learn from agents. Agents can take different roles such as tutors, peers, motivators and coaches in training and in serious games. They can act as job recruiters, virtual patients, and nurses, to name a few applications. With this topic in mind we are seeking closer engagement with industry and also with social psychologists.

 

For more information, please visit the IVA 2019 website:

https://iva2019.sciencesconf.org

 

The papers and extended abstracts will be published in the ACM digital library. All submissions will be reviewed via a double-blind review process.

 

IMPORTANT DATES (23h59 UTC/GMT)

Full papers

Submission Deadline: March 1, 2019

Notification of Acceptance: April 8, 2019

Camera Ready: April 22, 2019

Extended abstracts

Submission Deadline: March 1, 2019

Notification of Acceptance: April 8, 2019

Camera Ready: April 22, 2019

 

INVITED SPEAKERS

Beatrice de Gelder (Maastricht University)

Rachael Jack (Glasgow University)

Verena Rieser (Heriot-Watt University)

Pierre-Yves Oudeyer (INRIA - Bordeaux)

 

COMMITTEE

Conference Chairs

Catherine Pelachaud, CNRS-ISIR, Sorbonne University, France

Jean-Claude Martin, CNRS-LIMSI, University Paris Saclay, France


Program co-chairs

Gale Lucas, USC Institute for Creative Technologies, USA

Hendrik Buschmeier, Bielefeld University, Germany

Stefan Kopp, Bielefeld University, Germany

 

SCOPE AND LIST OF TOPICS

IVA invites submissions on a broad range of topics, including but not limited to:

 

List of Topics

Socio-emotional agent models: 

  • Cognition, machine learning and adaptation

  • Emotion, personality and cultural differences

  • Model of emotionally communicative behavior

  • Model of conversational behavior

  • Model of social skills

  • Machine learning for endowing virtual agents with social skills

 

Multimodal interaction:

  • Verbal and nonverbal coordination

  • Engagement

  • Interpersonal relation

  • Multi-party interaction

  • Model driven by theoretical foundations from psychology

  • Data driven model

 

Social agent architectures:

  • Design criteria and design methodologies

  • Real-time human-agent interaction

  • Incremental agent control

  • Real-time integrated system

 

Evaluation methods and studies:

  • Evaluation methodologies and user studies

  • Ethical considerations and societal impact

  • Applicable lessons from other fields (e.g. robotics)

  • Social agents as a means to study and model human behavior

 

Applications:

  • Social skills training

  • Virtual agents in games and simulations

  • Applications in education, health, games, art

 

Social learning:

  • Learning in social interaction

  • Social skills acquisition model

  • Learning in interaction with agents

WARNING: There is a conference called ICIVA 2019 that claims to be the 21st International Conference on Intelligent Virtual Agents, to be held in Bali in October 2019. This conference is not the official IVA and is run by an organization, the World Academy of Science, Engineering and Technology, that is unfortunately well known for its predatory publishing practices.

(https://en.wikipedia.org/wiki/World_Academy_of_Science,_Engineering_and_Technology)

Please note that no paper submitted to ICIVA 2019 in Bali will be published in the IVA 2019 proceedings.

Back  Top

3-3-16(2019-07-08) CfProjects eNTERFACE 2019, Bilkent University, Ankara, Turkey
Call for Projects | eNTERFACE 2019
Bilkent University, Ankara, Turkey, July 8th - August 2nd, 2019 
 
The Computer Engineering Department of Bilkent University invites project proposals for eNTERFACE'19, the 15th Summer Workshop on Multimodal Interfaces, to be held in Ankara, Turkey, from July 8th to August 2nd, 2019.

Following the success of the previous eNTERFACE workshops held in Mons (Belgium, 2005), Dubrovnik (Croatia, 2006), Istanbul (Turkey, 2007), Paris (France, 2008), Genova (Italy, 2009), Amsterdam (Netherlands, 2010), Plzen (Czech Republic, 2011), Metz (France, 2012), Lisbon (Portugal, 2013), Bilbao (Spain, 2014), Mons (Belgium, 2015), Twente (Netherlands, 2016), Porto (Portugal, 2017), and Louvain-la-Neuve (Belgium, 2018), eNTERFACE'19 aims at continuing and enhancing the tradition of collaborative, localized research and development work by gathering, in a single place, leading researchers in multimodal interfaces and students to work on specific projects for 4 complete weeks.

Procedure

Gather a (partial) team and apply with a short project proposal on multimodal interfaces. When the proposal is accepted, a call for participants will be circulated, and people will apply to become a part of your project. You can specify desired skills in this call. You will evaluate the applicants and decide on the final team. The participants will come to the workshop for one month. There are no registration fees, and we have arranged cheap accommodation for the participants. Each participant is responsible for his or her own travel and subsistence. At the end of the workshop, project groups will prepare and present reports that will be gathered in proceedings, and extended reports will be gathered in a journal special issue (typically, the Journal of Multimodal User Interfaces). You can check http://enterface.net/ for past editions, and find project reports, open code and data from past projects.

The eNTERFACE workshop is a great opportunity to bring together researchers working on an international project, for testing new ideas, for integrating modules, for collecting new datasets, and for meeting new people. There will be some invited lectures and many social activities during the workshop.

Topics
This year's special topics will be deep learning for behavior analysis and reinforcement learning. There will be seminars on those topics during the workshop.

Although not exhaustive, the submitted projects can cover one or several of the topics listed below:
- Art and Technology
- Affective Computing
- Assistive and Rehabilitation Technologies
- Assistive Technologies for Education and Social Inclusion
- Augmented Reality
- Conversational Embodied Agents
- Health Informatics
- Human Behavior Analysis
- Human Robot Interaction
- Interactive Playgrounds
- Innovative Musical Interfaces
- Interactive Systems for Artistic Applications
- Mixed Reality
- Multimodal Interaction, Signal Analysis and Synthesis
- Multimodal Spoken Dialog Systems
- Search in Multimedia and Multilingual Documents
- Serious Games
- Smart Spaces and Environments
- Social Signal Processing
- Tangible and Gesture Interfaces
- Teleoperation and Telerobotics
- Wearable Technology
- Virtual Reality

Important Dates
January 31st:  Reception of a 1-page Notification of Interest, with a summary of project goals, tentative work packages, and deliverables


February 15th:  End of Call for Projects: Reception of the complete Project proposal
February 20th:  Notification of acceptance to project leaders and call for participation
March 29th:  End of the call for participation
April 7th:  Notification of acceptance to participants
April 30th:  Finalizing team building
July 8th - August 2nd:  eNTERFACE Workshop

Proposals should be submitted to the organizers Hamdi Dibeklioğlu (dibeklioglu@cs.bilkent.edu.tr) and Elif Sürer (elifs@metu.edu.tr). They will be evaluated by the eNTERFACE Steering Committee with respect to their suitability to the workshop goals and format. Authors of the accepted proposals will then be invited to build their teams.
Back  Top

3-3-17(2019-07-21) The 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2019), Paris, France

 



ACM SIGIR 2019

The 42nd International ACM SIGIR Conference on

Research and Development in Information Retrieval

July 21-25, 2019, Paris, France

 

CALL FOR PAPERS

 

Call for Full Papers

The annual SIGIR conference is the major international forum for the presentation of new research results, and the demonstration of new systems and techniques, in the broad field of information retrieval (IR). The 42nd ACM SIGIR conference, to be held in Paris, France, welcomes contributions related to any aspect of information retrieval and access, including theories, foundations, algorithms, applications, evaluation, and analysis. The conference and program chairs invite those working in areas related to IR to submit original papers for review.

Important Dates (timezone: anywhere on earth)

Full paper abstract registration deadline: January 21, 2019
Full paper submission deadline: January 28, 2019
Full paper notifications: April 14, 2019

Committees

Program chairs

  • Yoelle Maarek, Amazon Research, Haifa, Israel
  • Jian-Yun Nie, University of Montreal, Canada
  • Falk Scholer, RMIT University, Melbourne, Australia

General chairs

  • Max Chevalier, CNRS &  Université Paul Sabatier, Toulouse, France
  • Eric Gaussier, CNRS & Université Joseph Fourier, Grenoble, France
  • Benjamin Piwowarski, CNRS, LIP6, Sorbonne Université, Paris, France

Contact

All questions about full paper submissions should be emailed to sigir2019-pcchairs AT easychair DOT org.

 

follow us on twitter : @sigir2019

follow us on our web site : http://sigir.org/sigir2019/

Back  Top

3-3-18(2019-07-21) The Apollo-11 speech challenge

HISTORY: On July 20, 1969 at 20:17 UTC, Earth witnessed one of the most challenging technological accomplishments by mankind to date, NASA's Apollo-11, with over 600M people witnessing both the landing and the first steps on the moon by Neil Armstrong and Buzz Aldrin. July 2019 marks the 50th anniversary of the historic Apollo-11 lunar landing and first steps. https://en.wikipedia.org/wiki/Apollo_11

NSF CRSS-UTDallas Project: With support from the US National Science Foundation (NSF-CISE), CRSS-UTDallas has spent the last six years developing a hardware/software solution to digitize and recover all 30-track analog tapes from Apollo-11 (plus Apollo-13 and other missions), as well as developing speech diarization technologies to advance speech technology for such data. A total of 19,000 hours of data, consisting of all NASA air-to-ground, mission control, and backroom support team discussions, was released this year (news releases from this NSF-sponsored project this year include NSF, NASA, BBC, AIP (Acoustical Society of America), NPR, many on-line news sites, and involvement in a planned CNN documentary to which this data is contributing). To date, this is the largest publicly available audio corpus of time-synchronized, team-based (~600 people) naturalistic communications to accomplish a real-world task.

ANNOUNCEMENT: This email is to announce the release of the FEARLESS STEPS CHALLENGE corpus, which is being shared for a proposed Special Session at ISCA INTERSPEECH-2019. The attached flyer details the 5 challenge tasks involved:

1. SAD: Speech Activity Detection

2. Speaker Diarization

3. SID: Speaker Identification

4. ASR: Automatic Speech Recognition

5. Sentiment Detection

 

This challenge corpus consists of 100 hours from 5 of the 30-track channels, spanning three phases of the Apollo-11 mission: (i) lift-off, (ii) landing, (iii) lunar walk. All data for this challenge will soon be available via a download option so that everyone can participate (this site has sample audio from the NSF-funded project: https://app.exploreapollo.org/ ). In addition, any lab/group wishing to have access to the entire 19,000 hours can do so without charge (this is public data, so it will be available via download, or for a small fee for a hard disk and shipping to your lab).

Diarization efforts in the past have concentrated on single-channel broadcast news, interviews, etc., which typically represent a single speaker or a small group discussing topics of interest. The FEARLESS STEPS CORPUS, by contrast, is fully time synchronized (with an IRIG time channel) across 30 channels, with loops containing anywhere from 3 to 33 speakers working collaboratively to solve challenging problems. CRSS-UTDallas has produced full diarization output (SAD, SID, DIAR/ASR) for the entire 19,000 hours of data, which is available with the corpus.
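
Purely as an illustration of what Task 1 (SAD) involves, and not as any official challenge baseline or part of the released system output, the minimal Python sketch below reads a waveform and emits speech segments as (start, end) times in seconds; the assumption of 16-bit mono WAV input, the frame sizes and the energy-threshold offset are all illustrative choices:

import wave
import numpy as np


def frame_log_energies(pcm: np.ndarray, rate: int, frame_ms: int = 25, hop_ms: int = 10):
    """Compute per-frame log energies and return them with the hop length in samples."""
    frame = int(rate * frame_ms / 1000)
    hop = int(rate * hop_ms / 1000)
    n_frames = max(0, 1 + (len(pcm) - frame) // hop)
    energies = np.array([
        np.log(np.mean(pcm[i * hop:i * hop + frame].astype(np.float64) ** 2) + 1e-10)
        for i in range(n_frames)
    ])
    return energies, hop


def simple_sad(wav_path: str, threshold_offset: float = 3.0):
    """Naive energy-threshold SAD: label frames above (median log energy + offset) as speech."""
    with wave.open(wav_path, "rb") as w:          # assumes 16-bit mono PCM WAV
        rate = w.getframerate()
        pcm = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)
    energies, hop = frame_log_energies(pcm, rate)
    is_speech = energies > (np.median(energies) + threshold_offset)

    segments, start = [], None                    # merge frame decisions into segments
    for i, speech in enumerate(is_speech):
        t = i * hop / rate
        if speech and start is None:
            start = t
        elif not speech and start is not None:
            segments.append((start, t))
            start = None
    if start is not None:
        segments.append((start, len(pcm) / rate))
    return segments                               # list of (start_sec, end_sec) speech regions

A real challenge entry would of course replace the energy threshold with a trained classifier and add smoothing, but the input/output contract of the task stays the same.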

REQUEST: We are proposing a Special Session at ISCA INTERSPEECH-2019. If you are interested in getting access to the FEARLESS STEPS CORPUS and potentially participating in the CHALLENGE, please reply to this email (John Hansen <john.hansen@utdallas.edu>). An expression of interest does not obligate you to submit; we are simply trying to collect a list of interested researchers for the data.

Many thanks for your interest!

CRSS-UTDallas Fearless Steps Team

Back  Top

3-3-19(2019-07-22) 3rd INTERNATIONAL SUMMER SCHOOL ON DEEP LEARNING, Warsaw, Poland

3rd INTERNATIONAL SUMMER SCHOOL ON DEEP LEARNING
 

DeepLearn 2019
 
Warsaw, Poland
 
July 22-26, 2019
 
Co-organized by:
 
Institute of Computer Science, Polish Academy of Sciences
 
IRDTA - Brussels/London
 
http://deeplearn2019.irdta.eu/
 
***************************************************************
 
SCOPE:
 
DeepLearn 2019 will be a research training event with a global scope aiming at updating participants about the most recent advances in the critical and fast developing area of deep learning. This is a branch of artificial intelligence covering a spectrum of current exciting machine learning research and industrial innovation that provides more efficient algorithms to deal with large-scale data in neurosciences, computer vision, speech recognition, language processing, human-computer interaction, drug discovery, biomedical informatics, healthcare, recommender systems, learning theory, robotics, games, etc. Renowned academics and industry pioneers will lecture and share their views with the audience.
 
Most deep learning subareas will be covered, and the main challenges identified, through 2 keynote lectures, 24 four-and-a-half-hour courses, and 1 round table, which will tackle the most active and promising topics. The organizers are convinced that outstanding speakers will attract the brightest and most motivated students. Interaction will be a main component of the event.
 
An open session will give participants the opportunity to present their own work in progress in 5 minutes. Moreover, there will be two special sessions with industrial and recruitment profiles.
 
ADDRESSED TO:
 
Master's students, PhD students, postdocs, and industry practitioners will be typical profiles of participants. However, there are no formal pre-requisites for attendance in terms of academic degrees. Since there will be a variety of levels, specific knowledge background may be assumed for some of the courses. Overall, DeepLearn 2019 is addressed to students, researchers and practitioners who want to keep themselves updated about recent developments and future trends. All will surely find it fruitful to listen and discuss with major researchers, industry leaders and innovators.
 
STRUCTURE:
 
3 courses will run in parallel during the whole event. Participants will be able to freely choose the courses they wish to attend as well as to move from one to another.
 
VENUE:
 
DeepLearn 2019 will take place in Warsaw, whose historical Old Town was designated a UNESCO World Heritage Site. The venue will be:
 
tba
 
KEYNOTE SPEAKERS:
 
tba
 
PROFESSORS AND COURSES: (to be completed)
 
Pierre Baldi (University of California, Irvine), [intermediate/advanced] Deep Learning: Theory, Algorithms, and Applications to the Natural Sciences
 
Christopher Bishop (Microsoft Research Cambridge), [introductory] Introduction to the Key Concepts and Techniques of Machine Learning
 
Aaron Courville (University of Montréal), [introductory/intermediate] Deep Generative Models
 
Sergei V. Gleyzer (University of Florida), [introductory/intermediate] Feature Extraction, End-end Deep Learning and Applications to Very Large Scientific Data: Rare Signal Extraction, Uncertainty Estimation and Realtime Machine Learning Applications in Software and Hardware
 
Tomas Mikolov (Facebook), [introductory] Using Neural Networks for Modeling and Representing Natural Languages (with Armand Joulin)
 
Hermann Ney (RWTH Aachen University), [intermediate/advanced] Speech Recognition and Machine Translation: From Statistical Decision Theory to Machine Learning and Deep Neural Networks
 
Navraj Pannu (GoDaddy), [introductory/intermediate] Deep Learning and Maximum Likelihood in Structural Biology
 
Jose C. Principe (University of Florida), [intermediate/advanced] Cognitive Architectures for Object Recognition in Video
 
Björn Schuller (Imperial College London), [introductory/intermediate] Deep Learning for Intelligent Signal Processing
 
Alex Smola (Amazon), tba
 
Ponnuthurai N Suganthan (Nanyang Technological University), [introductory/intermediate] Learning Algorithms for Classification, Forecasting and Visual Tracking
 
Johan Suykens (KU Leuven), [introductory/intermediate] Deep Learning, Neural Networks and Kernel Machines
 
Alexey Svyatkovskiy (Princeton University), [introductory/intermediate] From Natural Language Processing to Machine Learning on Source Code
 
Gaël Varoquaux (INRIA), [intermediate] Representation Learning in Limited Data Settings
 
OPEN SESSION:
 
An open session will collect 5-minute voluntary presentations of work in progress by participants. They should submit a half-page abstract containing title, authors, and summary of the research to david@irdta.eu by July 14, 2019.
 
INDUSTRIAL SESSION:
 
A session will be devoted to 10-minute demonstrations of practical applications of deep learning in industry. Companies interested in contributing are welcome to submit a 1-page abstract containing the program of the demonstration and the logistics needed. At least one of the people participating in the demonstration must register for the event. Expressions of interest have to be submitted to david@irdta.eu by July 14, 2019.
 
EMPLOYER SESSION:
 
Firms searching for personnel well skilled in deep learning will have a space reserved for one-to-one contacts. It is recommended to produce a 1-page .pdf leaflet with a brief description of the company and the profiles looked for, to be circulated among the participants prior to the event. At least one of the people in charge of the search must register for the event. Expressions of interest have to be submitted to david@irdta.eu by July 14, 2019.
 
ORGANIZING COMMITTEE:
 
Łukasz Kobyliński (Warsaw, co-chair)
Sara Morales (Brussels)
Manuel J. Parra-Royón (Granada)
David Silva (London, co-chair)
 
REGISTRATION:
 
It has to be done at
 
http://deeplearn2019.irdta.eu/registration/
 
The selection of up to 8 courses requested in the registration template is only tentative and non-binding. For the sake of organization, it will be helpful to have an estimation of the respective demand for each course. During the event, participants will be free to attend the courses they wish.
 
Since the capacity of the venue is limited, registration requests will be processed on a first come first served basis. The registration period will be closed and the on-line registration facility disabled when the capacity of the venue is exhausted. It is highly recommended to register prior to the event.
 
FEES:
 
Fees comprise access to all courses and lunches. There are several early registration deadlines. Fees depend on the registration deadline.
 
ACCOMMODATION:
 
Suggestions for accommodation will be available in due time.
 
CERTIFICATE:
 
A certificate of successful participation in the event will be delivered indicating the number of hours of lectures.
 
QUESTIONS AND FURTHER INFORMATION:
 
david@irdta.eu
 
ACKNOWLEDGMENTS:
 
Institute of Computer Science, Polish Academy of Sciences
 
Institute for Research Development, Training and Advice (IRDTA) - Brussels/London

Back  Top

3-3-20(2019-08-04) International Congress of Phonetic Sciences, Melbourne, Australia
 

Don't miss your opportunity to be a part of ICPhS 2019!


Call for papers

Authors will be invited to submit papers in December 2018 on original, unpublished research in the phonetic sciences. Papers related to the Congress themes are especially welcome, but we also welcome papers related to any of the scientific areas listed below. The submission deadline will be 4 December 2018.

 

Call for special sessions are now open

The organisers of the International Congress of Phonetic Sciences invite proposals for special sessions covering emerging topics, challenges, interdisciplinary research, or subjects that could foster useful debate in the phonetic sciences.

The ICPhS themes are 'Endangered Languages, and Major Language Varieties'. Special sessions related to these themes are especially welcome, but we are interested in proposals related to any of the scientific areas covered in the Congress. The submission deadline will be 30 April 2018.
 

 

Satellite meetings and workshops

There are opportunities for holding satellite meetings as well as workshops associated with ICPhS 2019. We invite those interested in arranging a satellite event to contact the organising committee now.

 
 

Meet our keynote speakers

The organising committee is pleased to announce the keynote speakers who will be presenting at the ICPhS 2019 Congress:

  • Professor Amalia Arvaniti
  • Professor Jonas Beskow
  • Professor Nicholas Evans
  • Professor Bryan Gick
  • Professor Lucie Menard
 
 

Scientific areas

The scientific committee have put together a list of scientific areas for the 2019 ICPhS program, based on previous editions and current developments within phonetics.

Please click on the button below to see the full list.

 
 

Stay in the loop!

If you would like to stay up to date with the Congress and ensure you don't miss out on any milestones, let us know by clicking the button below.

 
 


JOIN US IN MELBOURNE

Located on the south-east coast of Australia, Melbourne has been voted The World's Most Liveable City on a number of occasions.

Melbourne is a thriving and cosmopolitan city with a unique balance of graceful old buildings and stunning new architecture surrounded by parks and gardens.

Find out more about Melbourne here.

 


CONGRESS KEY DATES

Call for special sessions proposals
Now open!
Deadline for proposals
30 April 2018
Deadline for on-line full paper submission
4 December 2018
Registration opens
Late 2018
Author notification deadline
15 February 2019
Congress Dates
4-10 August 2019

Back  Top

3-3-21(2019-08-05) ICPHS 2019 SPECIAL SESSION on Computational Approaches for Documenting and Analyzing Oral Languages, Melbourne, Australia

Presentation

http://lig-getalp.imag.fr/icphs-2019-special-session/

The special session Computational Approaches for Documenting and Analyzing Oral Languages welcomes submissions presenting innovative speech data collection methods and/or assistance for linguists and communities of speakers: methods and tools that facilitate the collection, transcription and translation of primary language data. 'Oral languages' is understood here as referring to spoken vernacular languages which depend on oral transmission, including endangered languages and (typically low-prestige) regional varieties of major languages.

The special session intends to provide up-to-date information to an audience of phoneticians about developments in machine learning that make it increasingly feasible to automate segmentation, alignment or labelling of audio recordings, even in less-documented languages. A methodological goal is to help establish the field of Computational Language Documentation and contribute to its close association with the phonetic sciences. Computational Language Documentation needs to build on the insights gained through phonetic research; conversely, research in phonetics stands to gain much from the availability of abundant and reliable data on a wider range of languages.
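
As a purely illustrative aside (not part of the call), the short Python sketch below shows the kind of automatic pre-segmentation such tools can provide, assuming librosa is installed; the file name 'fieldwork.wav' is a hypothetical mono field recording:

    import librosa

    # Load a (hypothetical) field recording and resample it to 16 kHz mono.
    y, sr = librosa.load('fieldwork.wav', sr=16000)

    # Energy-based segmentation: split wherever the signal stays 30 dB below its peak.
    intervals = librosa.effects.split(y, top_db=30)

    # Print candidate speech chunks that a linguist could then transcribe or align.
    for start, end in intervals:
        print('segment: %.2f s - %.2f s' % (start / sr, end / sr))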

Papers will be submitted directly to the conference by December 4th and will then be evaluated according to the standard ICPhS review process [see here]. Accepted papers will be allocated either to this special session or to a general session. When submitting, you can specify whether you want to be considered for this special session.
 
 
Organizers

Laurent Besacier - LIG UGA (France)
Alexis Michaud - LACITO CNRS (France)
Martine Adda-Decker - LPP CNRS (France)
Gilles Adda - LIMSI CNRS (France)
Steven Bird - CDU (Australia)
Graham Neubig - CMU (USA)
François Pellegrino - DDL CNRS (France)
Sakriani Sakti - NAIST (Japan)
Mark Van de Velde - LLACAN CNRS (France)

Endorsement

This special session is endorsed by SIGUL (Joint ELRA and ISCA Special Interest Group on Under-resourced Languages)

Back  Top

3-3-22(2019-08-20) CfP 21st International Conference on Speech and Computer (SPECOM-2019), Istambul, Turkey

*********************************************************

SPECOM-2019 - CALL FOR PAPERS

*********************************************************

 

21st International Conference on Speech and Computer (SPECOM-2019)

Venue: Istanbul, Turkey, August 20-25, 2019

Web: http://www.specom.nw.ru

 

ORGANIZERS

The conference is organized by Bogazici University (BU, Istanbul, Turkey) in cooperation with St. Petersburg Institute for Informatics and Automation of the Russian Academy of Science (SPIIRAS, St. Petersburg, Russia) and Moscow State Linguistic University (MSLU, Moscow, Russia).

 

SPECOM-2019 CO-CHAIRS

Albert Ali Salah - Bogazici University, Turkey / Utrecht University, the Netherlands

Alexey Karpov - SPIIRAS, Russia

Rodmonga Potapova - MSLU, Russia

 

INVITED SPEAKERS

Hynek Hermansky - Johns Hopkins University, USA - 'If You Can’t Beat Them, Join Them'

Odette Scharenborg - Delft University of Technology, the Netherlands - 'The representation of speech in the human and artificial brain'

Vanessa Evers - University of Twente, the Netherlands - 'Socially intelligent robotics'

 

CONFERENCE TOPICS

The SPECOM conference is dedicated to issues of speech technology, human-machine interaction, machine learning and signal processing, in particular:

Affective computing

Applications for human-computer interaction

Audio-visual speech processing

Automatic language identification

Computational paralinguistics

Corpus linguistics and linguistic processing

Deep learning for sound and speech processing

Forensic speech investigations and security systems

Multichannel signal processing

Multimedia processing

Multimodal analysis and synthesis

Signal processing and feature extraction

Speaker identification and diarization

Speaker verification systems

Speech and language resources

Speech analytics and audio mining

Speech dereverberation

Speech driving systems in robotics

Speech enhancement

Speech perception and speech disorders

Speech recognition and understanding

Speech translation automatic systems

Spoken dialogue systems

Spoken language processing

Text-to-speech and Speech-to-text systems

Virtual and augmented reality

 

SATELLITE EVENT

4th International Conference on Interactive Collaborative Robotics ICR-2019: http://www.specom.nw.ru/icr2019

 

OFFICIAL LANGUAGE

The official language of the event is English. However, papers on processing of languages other than English are strongly encouraged.

 

FORMAT OF THE CONFERENCE

The conference program will include presentation of invited talks, oral presentations, and poster/demonstration sessions.

 

SUBMISSION OF PAPERS

Authors are invited to submit a full paper not exceeding 10 pages formatted in the LNCS style. Those accepted will be presented either orally or as posters. The decision on the presentation format will be based upon the recommendation of several independent reviewers. The authors are asked to submit their papers using the on-line submission system: https://easychair.org/conferences/?conf=specom2019

Papers submitted to SPECOM-2019 must not be under review by any other conference or publication during the SPECOM review cycle, and must not be previously published or accepted for publication elsewhere.

 

PROCEEDINGS

SPECOM Proceedings will be published by Springer as a book in the Lecture Notes in Artificial Intelligence (LNAI/LNCS) series listed in all major citation databases such as Web of Science, Scopus, DBLP, etc. SPECOM Proceedings are included in the list of forthcoming proceedings for August 2019.

 

IMPORTANT DATES

April 15, 2019 ............ Submission of full papers

May 15, 2019 ............ Notification of acceptance

June 01, 2019 ............ Camera-ready papers and early registration

Aug. 20-25, 2019 ......... Conference dates

 

VENUE

The conference will be organized at Bogazici University, South Campus, Albert Long Hall.

 

CONTACTS

All correspondence regarding the conference should be addressed to:

SPECOM-2019 Secretariat:

E-mails: specom@iias.spb.su; salah@boun.edu.tr

SPECOM-2019 web-site: http://www.specom.nw.ru

Back  Top

3-3-23(2019-08-24) 2019 Jelinek Summer Workshop on Speech and Language Technology , Montreal, Canada

2019 Jelinek Summer Workshop on Speech and Language Technology


We are pleased to invite one-page research proposals for a workshop on Machine Learning for Speech and Language Technology at ÉTS (École de Technologie Supérieure) in Montreal, Canada, June 24 to August 2, 2019 (tentative).

CALL FOR PROPOSALS Deadline: Monday, November 5th, 2018.
 
One-page proposals are invited for the annual Frederick Jelinek Memorial Workshop in Speech and Language Technology. Proposals should aim to advance the state of the art in any of the various fields of Human Language Technology (HLT) or related areas of Machine Intelligence, including Computer Vision and Healthcare. Proposals may address emerging topics or long-standing problems. Areas of interest in 2019 include but are not limited to:

 * SPEECH TECHNOLOGY: Any aspect of information extraction from speech signals; techniques that generalize in spite of very limited amounts of training data and/or which are robust to input signal variations; techniques for processing of speech in harsh environments, etc.

 * NATURAL LANGUAGE PROCESSING: Knowledge discovery from text; new approaches to traditional problems such as syntactic/semantic/pragmatic analysis, machine translation, cross-language information retrieval, summarization, etc.; domain adaptation; integrated language and social analysis; etc.

 * MULTIMODAL HLT: Joint models of text or speech with sensory data; grounded language learning; applications such as visual question-answering, video summarization, sign language technology, multimedia retrieval, analysis of printed or handwritten text.

 * DIALOG AND LANGUAGE UNDERSTANDING: Understanding human-to-human or human-to-computer conversation; dialog management; naturalness of dialog (e.g. sentiment analysis).

 * LANGUAGE AND HEALTHCARE: Information extraction from electronic health records; speech and language technology in health monitoring; healthcare delivery in hospitals or the home, public health, etc.

These workshops are a continuation of the Johns Hopkins University CLSP summer workshop series, and will be hosted by various partner universities on a rotating basis. The research topics selected for investigation by teams in past workshops should serve as good examples for prospective proposers: http://www.clsp.jhu.edu/workshops/.

An independent panel of experts will screen all received proposals for suitability. Results of this screening will be communicated by November 9th, 2018. Authors passing this initial screening will be invited to an interactive peer-review meeting in Baltimore on December 7-9th, 2018. Proposals will be revised at this meeting to address any outstanding concerns or new ideas. Two or three research topics and the teams to tackle them will be selected at this meeting for the 2019 workshop.

We attempt to bring the best researchers to the workshop to collaboratively pursue research on the selected topics. Each topic brings together a diverse team of researchers and students. Authors of successful proposals typically lead these teams. Other senior participants come from academia, industry and government. Graduate student participants familiar with the field are selected in accordance with their demonstrated performance. Undergraduate participants, selected through a national search, are rising star seniors: new to the field and showing outstanding academic promise.

If you are interested in participating in the 2019 Summer Workshop, we ask that you submit a one-page research proposal for consideration, detailing the problem to be addressed. If a topic in your area of interest is chosen as one of the topics to be pursued next summer, we expect you to be available to participate in the six-week workshop. We are not asking for an ironclad commitment at this juncture, just a good faith commitment that if a project in your area of interest is chosen, you will actively pursue it. We in turn will make a good faith effort to accommodate any personal/logistical needs to make your six-week participation possible.
 
Proposals must be submitted to jsalt2019-planning@jhu.edu by 23:59 EDT on Monday, 11/05/2018. 

Back  Top

3-3-24(2019-09-02) 27th European Signal Processing Conference (EUSIPCO 2019), La Coruña, Spain

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
                       EUSIPCO 2019
          27th European Signal Processing Conference
                     A Coruña, Spain
                   September 2-6, 2019
                   www.eusipco2019.org
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

IMPORTANT DATES

- Satellite Workshop Proposals - February 4, 2019
- Full Paper Submission - February 18, 2019
- Notification of Acceptance - May 17, 2019
- Final Manuscript Submission - May 31, 2019

The 2019 European Signal Processing Conference (EUSIPCO) will be held in the charming
city of A Coruña, Spain, from September 2 to September 6, 2019. This flagship conference
of the European Association for Signal Processing (EURASIP) will feature a comprehensive
technical program addressing all the latest developments in research and technology for
signal processing. EUSIPCO 2019 will feature world-class speakers, oral and poster
sessions, plenaries, exhibitions, demonstrations, tutorials, and satellite workshops, and
is expected to attract many leading researchers and industry figures from all over the
world.


TECHNICAL SCOPE

We invite the submission of original, unpublished technical papers on topics including
but not limited to:

- Audio and acoustic signal processing
- Speech and language processing
- Image and video processing
- Multimedia signal processing
- Signal processing theory and methods
- Sensor array and multichannel signal processing
- Signal processing for communications
- Radar and sonar signal processing
- Signal processing over graphs and networks
- Nonlinear signal processing
- Statistical signal processing
- Compressed sensing and sparse modelling
- Optimization methods
- Machine learning
- Bio-medical image and signal processing
- Signal processing for computer vision and robotics
- Computational imaging /spectral imaging
- Information forensics and security
- Signal processing for power systems
- Signal processing for education
- Bioinformatics and genomics
- Signal processing for big data
- Signal processing for the internet of things
- Design/implementation of signal processing systems


ORGANIZING COMMITTEE

- General Co-Chairs: Mónica F. Bugallo, Stony Brook University, USA; Luis Castedo,
University of A Coruña, Spain
- Technical Program Chairs: Maria Sabrina Greco, University of Pisa, Italy; Marius
Pesavento, University of Darmstadt, Germany
- Publications Co-Chairs: Andrea Ferrari, University of Nice Sophia Antipolis, France;
Luca Martino, University Carlos III of Madrid, Spain
- Financial Chair: Ignacio Santamaría, University of Cantabria, Spain
- Special Sessions Co-Chairs: Markus Rupp, Vienna University of Technology, Austria;
Danilo Mandic, Imperial College London, UK
- Tutorials Co-Chairs: Aleksandar Dogandžić, Iowa State University, USA; Mario A.T.
Figueiredo, University of Lisboa, Portugal
- Satellite Workshops Chair: Wolfgang Utschick, Technical University of Munich, Germany
- Students Activities Co-Chairs: Pau Closas, Northeastern University, USA; Jordi
Vilà-Valls, University of Toulouse / ISAE-SUPAERO, France
- Industrial Program Chair: Víctor Elvira, IMT Lille Douai, France
- Publicity Chair: Javier Vía, University of Cantabria, Spain
- International Liaisons: Ke Guan, Beijing Jiaotong University, China; Henry Argüello,
Industrial University of Santander, Colombia
- Local Chair: Roberto López-Valcarce, atlanTTic, University of Vigo, Spain


_______________________________________________
Announcements mailing list
https://lists.eurasip.org/mailman/listinfo/announcements

Back  Top

3-3-25(2019-09-10) CfP The 22nd conference on SPEECH, TEXT and DIALOGUE (TSD 2019), Ljubljana, Slovenia
**************************************************************************
                     TSD 2019 - FIRST CALL FOR PAPERS
**************************************************************************

               The twenty-second International Conference on
                   TEXT, SPEECH and DIALOGUE (TSD 2019)
                            Ljubljana, Slovenia
                           September 10-13, 2019
                       http://www.tsdconference.org


TSD HIGHLIGHTS

* Invited speakers: Denis Jouvet (Loria, Nancy, France), and more to come.
* TSD is traditionally published by Springer-Verlag and regularly listed in
  all major citation databases: Thomson Reuters Conference Proceedings
  Citation Index, DBLP, SCOPUS, EI, INSPEC, COMPENDEX, etc.
* TSD offers a high-standard transparent review process - double blind,
  final reviewers' discussion.
* TSD is going to take place in the beautiful centre of Ljubljana, the
  capital of Slovenia.
* The conference is organized in cooperation with the Faculty of Electrical
  Engineering, University of Ljubljana, Slovenia.
* TSD provides an all-service package (conference access and material, all
  meals, one social event, etc.) for an easily affordable fee.


PRELIMINARY DATES

March 31, 2019 ............... Deadline for submission of contributions
May 10, 2019 ................. Notification of acceptance or rejection
May 31, 2019 ................. Deadline for submission of camera-ready papers
September 10-13, 2019 ........ TSD2019 conference date

The proceedings will be provided on flash drives in the form of navigable
content. Printed books will be available for an extra fee.


TSD SERIES

The TSD series has evolved as a prime forum for interaction between
researchers in both spoken and written language processing from all over
the world. Proceedings of the TSD conference form a book published by
Springer-Verlag in their Lecture Notes in Artificial Intelligence (LNAI)
series. The TSD proceedings are regularly indexed by Thomson Reuters
Conference Proceedings Citation Index. LNAI series are listed in all major
citation databases such as DBLP, SCOPUS, EI, INSPEC, or COMPENDEX.


TOPICS

Topics of the 22nd conference will include (but are not limited to):

    Speech Recognition (multilingual, continuous, emotional speech,
    handicapped speaker, out-of-vocabulary words, alternative way of
    feature extraction, new models for acoustic and language modeling).

    Corpora and Language Resources (monolingual, multilingual, text, and
    spoken corpora, large web corpora, disambiguation, specialized
    lexicons, dictionaries).

    Speech and Spoken Language Generation (multilingual, high fidelity
    speech synthesis, computer singing).

    Tagging, Classification and Parsing of Text and Speech (multilingual
    processing, sentiment analysis, credibility analysis, automatic text
    labeling, summarization, authorship attribution).

    Semantic Processing of Text and Speech (information extraction,
    information retrieval, data mining, semantic web, knowledge
    representation, inference, ontologies, sense disambiguation, plagiarism
    detection).

    Integrating Applications of Text and Speech Processing (machine
    translation, natural language understanding, question-answering
    strategies, assistive technologies).

    Automatic Dialogue Systems (self-learning, multilingual,
    question-answering systems, dialogue strategies, prosody in dialogues).

    Multimodal Techniques and Modeling (video processing, facial animation,
    visual speech synthesis, user modeling, emotion and personality
    modeling).


PROGRAMME COMMITTEE

All programme committee members are listed on the conference web pages
https://www.kiv.zcu.cz/tsd2019/index.php?page=committees


OFFICIAL LANGUAGE

The official language of the event is English; however, papers on issues
related to text and speech processing in languages other than English are
strongly encouraged.


LOCATION

Ljubljana, the Slovenian capital - a city, whose name means `The beloved',
is a great place to visit, although you will not find world renowned
attractions here. Nevertheless, it has history, tradition, style, arts
& culture, an atmosphere that is both Central European and Mediterranean;
many also add the adjectives multilingual and hospitable. Being close to
many of the major sights and attractions of Slovenia, Ljubljana can also be
your starting point to discover the country's diversity.

Ljubljana is situated about halfway between Vienna and Venice. Its
character and appearance have been shaped by diverse cultural influences
and historical events. While in winter it is remarkable for its dreamy
Central European character, it is the relaxed Mediterranean feel that
stands out during summer.

Ljubljana is a picturesque city full of romantic views, with a medieval
castle towering over its historical city centre and a calm river spanned by
a series of beautiful bridges running right through it. It's a city with
a medieval heart, a city of the Baroque and Art Nouveau, with an old castle
resting above it like a sleeping beauty.

In Ljubljana eastern and western cultures met; and the Italian concept of
art combined with the sculptural aesthetics of Central European cathedrals.
The city owes its present appearance partly to Italian baroque and partly
to Art Nouveau, which is the style of the numerous buildings erected
immediately after the earthquake of 1895.

The central point of interest in Ljubljana is the Ljubljana Castle,
watching over the city from the centrally located castle hill. The
beginnings of the medieval castle go back to the 9th century, although the
castle building is first mentioned only in 1144. It gained its present
image after the earthquake of 1511 and following further renovations at the
beginning of the 17th century. At present, a funicular connects the Old
Town to the castle hill, adding an even more convenient access alternative
to the tourist train.

Ljubljana lies at the centre of Slovenia. In the morning you can visit the
stunningly beautiful Lake Bled, Lake Bohinj or Soca Valley in the high
mountainous region of the Alps, and in the evening enjoy the sunset in one
of the charming little towns on the Adriatic coast.
It only takes minutes to reach the peaceful and unspoiled countryside of
the city's green surrounding areas, which offer endless opportunities for
hiking, cycling, fishing and horse riding.

We are very excited that the TSD conference leaves the Czech Republic for the
first time in its 22-year history, and that TSD2019 is going to take place in
such a wonderful location as Ljubljana.


ABOUT CONFERENCE

The conference is organized by the Faculty of Applied Sciences, University
of West Bohemia, Pilsen, the Faculty of Informatics, Masaryk University,
Brno, and the Faculty of Electrical Engineering, University of Ljubljana.


VENUE

    Faculty of Electrical Engineering - University of Ljubljana
    Trzaska cesta 25
    SI-1000 Ljubljana


CONTACT

The preferred way of contacting the conference organizing committee is
writing an e-mail to:

    Ms Lucie Tauchenova, TSD2019 Conference Secretary
    E-mail: tsd2019@tsdconference.org
    Phone: +420 702 994 699

All paper correspondence regarding the conference should be addressed to:

    TSD2019 - NTIS P2
    Fakulta aplikovanych ved
    Zapadoceska univerzita v Plzni
    Univerzitni 8
    CZ-306 14 Plzen
    Czech Republic

    Fax: +420 377 632 402 - Please, mark the faxed material with large
    capitals 'TSD' on top.

TSD2019 conference web site: http://www.tsdconference.org/
Back  Top

3-3-26(2019-09-12) Third International Conference on Natural Language and Speech Processing (ICNLSP 2019), University of Trento, Italy.

ICNLSP 2019, the third edition of the International Conference on Natural Language and Speech Processing, will be held on September 12th-13th, 2019 at the University of Trento, Italy.


ICNLSP 2015 and ICNLSP 2018 are indexed in DBLP, and were published by Elsevier and in IEEE Xplore, respectively.
 

ICNLSP aims to attract contributions related to natural language and speech processing, in basic theory as well as applications. Regular and poster sessions will be organized, in addition to keynotes presented by senior international researchers.
This year, a workshop on NLP solutions for under-resourced languages will be held with ICNLSP.

Authors are invited to present their work relevant to the topics of the conference.

The topics of ICNLSP 2019 include, but are not limited to, the following:

Signal processing, acoustic modeling
Architecture of speech recognition system
Deep learning for speech recognition
Analysis of speech
Paralinguistics in Speech and Language
Pathological speech and language
Speech coding
Speech comprehension
Summarization
Speech Translation
Speech synthesis
Speaker and language identification
Phonetics, phonology and prosody
Cognition and natural language processing
Text categorization
Sentiment analysis and opinion mining
Computational Social Web
Arabic dialects processing
Under-resourced languages: tools and corpora
New language models
Arabic OCR
Lexical semantics and knowledge representation
Requirements engineering and NLP
NLP tools for software requirements and engineering
Knowledge fundamentals
Knowledge management systems
Information extraction
Data mining and information retrieval
Machine translation


Submission

Papers must be submitted via the online paper submission system Easychair.

https://easychair.org/conferences/?conf=icnlsp2019

Each submitted paper will be reviewed by three program committee members.


 

Workshop

The workshop on NLP Solutions for Under-Resourced Languages (NSURL 2019) will be held with ICNLSP 2019.


Important dates

Submission deadline: 30 April 2019

Notification of acceptance: 15 June 2019

Camera-ready paper due: 10 July 2019

Conference dates: 12, 13 September 2019


Chairs:

Dr. Mourad Abbas

Dr. Abed Alhakim Freihat


Contact:

icnlsp2019@easychair.org

Back  Top

3-3-27(2019-09-15) Zero Resource Speech Challenge 2019: TTS without T

Zero Resource Speech Challenge 2019: TTS without T

 
Dear Colleague,
 
We have the pleasure of announcing the new iteration of the Zero Resource Speech Challenge, which has been submitted to Interspeech 2019. Its aim is to build a speech synthesizer without any text or phonetic labels (hence Text to Speech without T). We take inspiration from young infants, who learn to talk before they learn to read or write. Here, the task is to discover a pseudo-text (a sub-word symbolic representation internal to the machine) from raw speech, without any labels, and to use these discovered units to resynthesize new utterances in a target voice.
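
As a toy illustration of the unit-discovery half of the task (a sketch only, not a challenge baseline), one can cluster acoustic frames into a small discrete inventory; the Python code below assumes librosa and scikit-learn are installed and uses a hypothetical input file 'utterance.wav':

    import librosa
    import numpy as np
    from sklearn.cluster import KMeans

    # Extract frame-level MFCC features from a (hypothetical) raw speech file.
    y, sr = librosa.load('utterance.wav', sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T   # shape: (frames, 13)

    # 'Discover' a small symbolic inventory by clustering the frames.
    kmeans = KMeans(n_clusters=50, n_init=10, random_state=0).fit(mfcc)
    units = kmeans.predict(mfcc)                            # one discrete unit per frame

    # Collapse consecutive repeats into a pseudo-text for the utterance;
    # a real system would feed such units to a synthesizer in the target voice.
    pseudo_text = [int(u) for i, u in enumerate(units) if i == 0 or u != units[i - 1]]
    print(pseudo_text)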
 
The Challenge is now open (deadline March 15, 2019). Details and registration at http://www.zerospeech.com/2019.

 

The organizers
Dunbar, E., Algayres, R., Benjumea, J., Karadayi, J., Cao, X-N., Bernard, M., Ondel, L., Besacier, L., Sakti, S., & Dupoux, E.
 
Note: The Challenge is a continuation of the 'sub-word unit discovery' task of previous ZeroSpeech challenges, and is open to everyone (including participants concentrating solely on sub-word unit discovery or solely on synthesis, as well as participants building complete end-to-end systems).
Back  Top

3-3-28(2019-09-15) ASVspoof 2019 CHALLENGE

*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*

ASVspoof 2019 CHALLENGE:
Future horizons in spoofed/fake audio detection
http://www.asvspoof.org/ 
=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=*=

Can you distinguish computer-generated or replayed speech from authentic/bona fide speech? Are you able to design algorithms to detect spoofs/fakes automatically?

Are you concerned with the security of voice-driven interfaces?

Are you searching for new challenges in machine learning and signal processing?

 

Join ASVspoof 2019, the effort to develop next-generation countermeasures for the automatic detection of spoofed/fake audio. Combining the forces of leading research institutes and industry, ASVspoof 2019 encompasses two separate sub-challenges in logical and physical access control, and provides a common database of the most advanced spoofing attacks to date. The aim is to study both the limits and the opportunities of spoofing countermeasures in the context of automatic speaker verification and fake audio detection.


CHALLENGE TASK

Given a short audio clip, determine whether it represents authentic/bona fide human speech, or a spoof/fake (replay, synthesized speech or converted voice). You will be provided with a large database of labelled training and development data and will develop machine learning and signal processing countermeasures to distinguish automatically between the two. Countermeasure performance will be evaluated jointly with an automatic speaker verification (ASV) system provided by the organisers.
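
For concreteness, a minimal countermeasure in the spirit of this task (a sketch under simplifying assumptions, not the official ASVspoof baseline) can fit one Gaussian mixture model per class and score each trial with a log-likelihood ratio; the feature arrays below are random placeholders standing in for features extracted from the real labelled training data:

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Placeholder frame-level features (n_frames x n_dims); in practice these
    # would be extracted from the labelled bona fide and spoofed training audio.
    feats_bonafide = np.random.randn(2000, 20)
    feats_spoof = np.random.randn(2000, 20) + 0.5

    # One GMM per class, in the style of classic spoofing countermeasures.
    gmm_bona = GaussianMixture(n_components=8, covariance_type='diag', random_state=0).fit(feats_bonafide)
    gmm_spoof = GaussianMixture(n_components=8, covariance_type='diag', random_state=0).fit(feats_spoof)

    def countermeasure_score(feats_trial):
        # Average per-frame log-likelihood ratio; higher means 'more bona fide'.
        return gmm_bona.score(feats_trial) - gmm_spoof.score(feats_trial)

    print(countermeasure_score(np.random.randn(300, 20)))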

 

BACKGROUND:
The ASVspoof 2019 challenge follows on from two previous ASVspoof challenges, held in 2015 and 2017. The 2015 edition focused on spoofed speech generated with text-to-speech (TTS) and voice conversion (VC) technologies. The 2017 edition focused on replay spoofing. The 2019 edition is the first to address all three forms of attack and the latest, cutting-edge spoofing attack technology.

 

ADVANCES:

Today's state-of-the-art TTS and VC technologies produce speech signals that are all but perceptually indistinguishable from bona fide speech. The LOGICAL ACCESS sub-challenge aims to determine whether these advances in TTS and VC pose a greater threat to the reliability of automatic speaker verification and spoofing countermeasure technologies. The PHYSICAL ACCESS sub-challenge builds upon the 2017 edition with a far more controlled evaluation setup, which extends the focus of ASVspoof to fake audio detection in, e.g., the manipulation of voice-driven interfaces (smart speakers).

 

METRICS:

The 2019 edition also adopts a new metric, the tandem detection cost function (t-DCF). Adoption of the t-DCF metric aligns ASVspoof more closely with the field of ASV. The challenge nonetheless focuses on the development of standalone spoofing countermeasures; participation in ASVspoof 2019 does NOT require any expertise in ASV. The equal error rate (EER) used in previous editions remains a secondary metric, supporting the wider implications of ASVspoof for fake audio detection.
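
For reference, the secondary EER metric can be computed directly from countermeasure scores; in the sketch below the scores and labels are made-up toy values (label 1 = bona fide, 0 = spoof), and the function returns the operating point where the false acceptance and false rejection rates are equal:

    import numpy as np

    def compute_eer(scores, labels):
        scores = np.asarray(scores, dtype=float)
        labels = np.asarray(labels, dtype=int)
        far, frr = [], []
        for t in np.sort(np.unique(scores)):
            accept = scores >= t                       # accept trial as bona fide
            far.append(np.mean(accept[labels == 0]))   # spoof trials wrongly accepted
            frr.append(np.mean(~accept[labels == 1]))  # bona fide trials wrongly rejected
        far, frr = np.array(far), np.array(frr)
        i = np.argmin(np.abs(far - frr))               # point where the two rates cross
        return (far[i] + frr[i]) / 2

    # Toy example: perfectly separated scores give an EER of 0.0.
    print(compute_eer([0.9, 0.8, 0.3, 0.1], [1, 1, 0, 0]))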

SCHEDULE:

Training and development data release: 19th December 2018

Evaluation data release: 15th February 2019

Deadline to submit evaluation scores: 22nd February 2019

Organisers return results to participants: 15th March 2019

INTERSPEECH paper submission deadline: 29th March 2019

 

REGISTRATION:

Registration should be performed once only for each participating entity, by sending an email to registration@asvspoof.org with 'ASVspoof 2019 registration' as the subject line. The mail body should include: (i) the name of the team; (ii) the name of the contact person; (iii) their country; (iv) their status (academic/non-academic); and (v) the challenge scenario(s) for which they wish to participate (indicative only). Data download links will be communicated to registered contact persons only.

 

MAILING LIST:

Subscribe to the general mailing list by sending an e-mail with the subject line 'subscribe asvspoof2019' to sympa@asvspoof.org. To post messages to the mailing list itself, send e-mails to asvspoof2019@asvspoof.org.

 

ORGANIZERS*:

Junichi Yamagishi, NII, Japan & Univ. of Edinburgh, UK

Massimiliano Todisco, EURECOM, France

Md Sahidullah, Inria, France

Héctor Delgado, EURECOM, France

Xin Wang, National Institute of Informatics, Japan

Nicholas Evans, EURECOM, France

Tomi Kinnunen, University of Eastern Finland, Finland

Kong Aik Lee, NEC, JAPAN

Ville Vestman, University of Eastern Finland, Finland

* Equal contribution

 

CONTRIBUTORS:

University of Edinburgh, UK; Nara Institute of Science and Technology, Japan, University of Science and Technology of China, China; iFlytek Research, China; Saarland University / DFKI GmbH, Germany; Trinity College Dublin, Ireland; NTT Communication Science Laboratories, Japan; HOYA, Japan; Google LLC (Text-to-Speech team, Google Brain team, Deepmind); University of Avignon, France; Aalto University, Finland; University of Eastern Finland, Finland; EURECOM, France.


FURTHER INFORMATION:

info@asvspoof.org

Back  Top

3-3-29(2019-09-15) The VOICES from a Distance Challenge 2019, Graz, Austria
The VOiCES from a Distance Challenge 2019
 
SRI International and Lab 41 are organizing a speaker and speech recognition challenge for Interspeech 2019, focused especially on distant/far-field speech: 'The VOiCES from a Distance Challenge 2019'. This challenge is based on the recently collected VOiCES data, recorded in real reverberant and noisy environments (https://voices18.github.io/). The data were collected in multiple rooms, with several background distractors (music, TV, babble) and microphone types. Evaluation findings and papers will form part of a special session hosted at Interspeech 2019. Participating teams will get early access to the VOiCES phase 2 data, which will form the evaluation set for the challenge.
 
 
Both speaker recognition and automatic speech recognition (ASR) tasks will have two tracks:
(i) Fixed System - Training data is limited to specific datasets
(ii) Open System - Participants can use any datasets they have access to (private or public)
 
 
Challenge Timeline:
 
January 15, 2019 Release of the evaluation plan and development sets
February 25, 2019 Evaluation data available
March 4, 2019 System output submission deadline (11:59 PM PST)
March 11, 2019 Release of evaluation results 
March 15, 2019 System description submission and release of VOiCES phase 2 data and evaluation keys for the participating teams 
March 29, 2019 Regular paper submission deadline for Interspeech 2019
 
More information about the challenge including evaluation plan, evaluation data, and registration link can be found at: https://voices18.github.io/Interspeech2019_SpecialSession/
 
For more information, please reach out to voices_poc@sri.com 
 
Organizers, 
Mahesh Kumar Nandwana, Julien van Hout, Mitchell McLaren, Colleen Richey, Aaron Lawson (SRI International)
Maria Alejandra Barrios (Lab41) 
Back  Top

3-3-30(2019-10-21) Second International Workshop on Multimedia Content Analysis in Sports, Nice, France

Call for Papers

-------------------

Second International Workshop on Multimedia Content Analysis in Sports @ ACM Multimedia, October 21-25, 2019, Nice, France

 

We'd like to invite you to submit your paper proposals for the 2nd International Workshop on Multimedia Content Analysis in Sports to be held in Nice, France together with ACM Multimedia 2019. The ambition of this workshop is to bring together researchers and practitioners from different disciplines to share ideas on current multimedia/multimodal content analysis research in sports. We welcome multimodal-based research contributions as well as best-practice contributions focusing on the following (and similar, but not limited to) topics:

 

– annotation and indexing

– athlete and object tracking

– activity recognition, classification and evaluation

– event detection and indexing

– performance assessment

– injury analysis and prevention

– data driven analysis in sports

– graphical augmentation and visualization in sports

– automated training assistance

– camera pose and motion tracking

– brave new ideas / extraordinary multimodal solutions

 

Submissions can be of varying length, from 4 to 8 pages, plus additional pages for references. There is no distinction between long and short papers; the authors may themselves decide on the appropriate length of the paper.

 

Please refer to the workshop website for further information:

http://multimedia-computing.de/mmsports2019/

 

IMPORTANT DATES

Submission Due:                           July 8, 2019

Acceptance Notification:             August 5, 2019

Camera Ready Submission:        August 19, 2019

Workshop Date:                           TBA; either Oct 21 or Oct 25, 2019

 

____________________________________________

Prof. Dr. Rainer Lienhart

Multimedia Computing & Computer Vision

Institut für Informatik, Universität Augsburg

Informatik Building N, Room # 1013

Universitätsstr. 6 a, 86159 Augsburg, Germany

email: Rainer.Lienhart@informatik.uni-augsburg.de

phone: +49 (821) 598-5703 cell: +49 (163) 960 5367

Skype: skype@videoanalysis.org, Threema or FaceTime

____________________________________________

Back  Top

3-3-31(2019-X-X) Dialog System Technology Challenge 7 (DSTC7)

Dialog System Technology Challenge 7 (DSTC7)
Call for Participation: Data distribution has been started
Website: http://workshop.colips.org/dstc7/index.html

========================================

Background
-----------------
The DSTC shared tasks have provided common testbeds for the dialog
research community since 2013.

Since its sixth edition, it has been rebranded as 'Dialog System
Technology Challenge' to cover a wider variety of dialog-related problems.

For this year's challenge, we opened the call for track proposals and
selected the following three parallel tracks through peer review:

- Sentence Selection Track
- Sentence Generation Track
- Audio Visual Scene-aware dialog (AVSD) Track

Participation is welcomed from any research team (academic, corporate,
non-profit, government).

Important Dates
------------------------
- Jun 1, 2018: Training data is released
- Sep 10, 2018: Test data is released
- Sep 24, 2018: Entry submission deadline
- Oct or Nov 2018: Paper submission deadline
- Spring 2019: DSTC7 special session or workshop (venue: TBD)

DSTC7 Organizing Committee
--------------------------------------------
- Koichiro Yoshino - Nara Institute of Science and Technology (NAIST), Japan
- Chiori Hori - Mitsubishi Electric Research Laboratories (MERL), USA
- Julien Perez - Naver Labs Europe, France
- Luis Fernando D'Haro - Institute for Infocomm Research (I2R), Singapore

DSTC7 Track Organizers
-------------------------------------
Sentence Selection Track:
- Lazaros Polymenakos - IBM Research, USA
- Chulaka Gunasekara - IBM Research, USA
- Walter S. Lasecki - University of Michigan, USA
- Jonathan Kummerfeld - University of Michigan, USA

Sentence Generation Track:
- Michel Galley - Microsoft Research AI&R, USA
- Chris Brockett - Microsoft Research AI&R, USA
- Jianfeng Gao - Microsoft Research AI&R, USA
- Bill Dolan - Microsoft Research AI&R, USA

Audio Visual Scene-aware dialog (AVSD) Track:
- Chiori Hori - Mitsubishi Electric Research Laboratories (MERL), USA
- Tim K. Marks - Mitsubishi Electric Research Laboratories (MERL), USA
- Devi Parikh - Georgia Tech, USA
- Dhruv Batra - Georgia Tech, USA

DSTC Steering Committee
---------------------------------------
- Jason Williams - Microsoft Research (MSR), USA
- Rafael E. Banchs - Institute for Infocomm Research (I2R), Singapore
- Seokhwan Kim - Adobe Research, USA
- Matthew Henderson - PolyAI, Singapore
- Verena Rieser - Heriot-Watt University, UK

Contact Information
---------------------------------------
Join the DSTC mailing list to get the latest updates about DSTC7:

- To join the mailing list: send an email to
listserv@lists.research.microsoft.com and put 'subscribe DSTC' in the
body of the message (without the quotes).
- To post a message: send your message to dstc@lists.research.microsoft.com.

For specific enquiries about DSTC7:
- Please feel free to contact any of the Organizing Committee members
directly.


Back  Top

3-3-32(2020-05-11) The 12th edition of the Language Resources and Evaluation Conference (LREC 2020) , Marseille, France

ELRA is very pleased to announce that the 12th edition of the Language Resources and
Evaluation Conference, LREC 2020,  will take place in Marseille (France) on May 11-16,
2020.
More information will be published on the conference website (online soon).

Best wishes,
Helene Mazo
on behalf of the LREC 2020 Programme Committee

www.lrec-conf.org
Follow us on Twitter: @LREC2020

Back  Top

3-3-33(2020-05-xx) 2020 Speech Prosody conference, Tokyo, Japan

Dear SProSIG Members,

 

We are pleased to announce that the 2020 Speech Prosody conference will be
held in Tokyo, tentatively in late May or early June.

Also, there are two upcoming special sessions relating to prosody at
ICPhS 2019, both with a submission deadline of December 4:
'Interacting Channels of Speech - Tune and Text'
(https://timo-roettger.weebly.com/icphs---tune-and-text.html) and
'Modeling Meaning-Bearing Configurations of Prosodic Features'
(http://www.cs.utep.edu/nigel/pconstructions/icphs-configs.html).

We'd also like to take this opportunity to introduce ourselves, the incoming
officers of SProSIG for 2018-2020: namely Martine Grice, Plinio Barbosa,
Hongwei Ding, Aoju Chen and myself. We look forward to serving the membership
and are eager to hear your ideas and suggestions.

Finally, the SProSIG mailing list is now hosted at the University of Texas at
El Paso. Subscription/unsubscription instructions are below. Mailings will
continue to be infrequent and focus on conference announcements and the like.
If you have such information to share, please contact any of us.

Hongwei Ding, Aoju Chen, Martine Grice, Plinio Barbosa, Nigel Ward
Speech Prosody Special Interest Group  www.sprosig.org

This mail was sent through the SProSIG mailing list, which is for
announcements of interest to the speech prosody research community.
Subscribe/unsubscribe at http://listserv.utep.edu/mailman/listinfo/sprosig

Nigel Ward, Professor of Computer Science, University of Texas at El Paso
CCSB 3.0408,  +1-915-747-6827
nigel@utep.edu    http://www.cs.utep.edu/nigel/

 

Back  Top


