ISCA - International Speech Communication Association



ISCApad #290

Saturday, August 06, 2022 by Chris Wellekens

3 Events
3-1 ISCA Events
3-1-1(2022-09-18) Call for Mentors for One-to-one Mentoring at Interspeech 2022

Call for Mentors for One-to-one Mentoring at Interspeech 2022
An initiative of the ISCA Mentoring Committee and ISCA-SAC


The ISCA Mentoring Committee and ISCA-SAC are happy to announce that one-to-one mentoring sessions will return to Interspeech this year. The ISCA Mentoring Committee, in collaboration with ISCA-SAC, is inviting registrations from senior and mid-career speech researchers to act as mentors for students and early-career researchers.

One-to-one mentoring was piloted in Brno last year with great success, and this year in Incheon there will be another great opportunity to connect with researchers from various backgrounds and experience levels and engage in meaningful conversations.

Once you have registered, we will introduce you to your mentee during Interspeech (online or in person) and let you arrange the best time and place for your conversation. This is intended to be a one-time session of approximately 40 minutes, although it is up to you and the participant whether you need more or less time for your discussion. Sessions can be conducted online or face-to-face, so delegates attending in person and virtually can take part.

If you feel that you would be able to give meaningful advice and answer questions from participants, we warmly invite you to register as a mentor via our registration form.

Please also share this call in your networks if you know someone who would make a good mentor.

If you have any questions, do get in touch (sac@isca-speech.org, catarina.t.botelho@tecnico.ulisboa.pt & judith.dineley@kcl.ac.uk).

We look forward to meeting you at Interspeech 2022!

Warm regards,
The ISCA Mentoring Committee & ISCA-SAC

Back  Top

3-1-2(2022-09-18) INTERSPEECH 2022 HUMAN AND HUMANIZING SPEECH TECHNOLOGY, Incheon Songdo Convensia, Korea.

 INTERSPEECH 2022
HUMAN AND HUMANIZING SPEECH TECHNOLOGY

September 18-22, 2022
Incheon Songdo Convensia, Korea


INTERSPEECH 2022 will be held in Incheon, Korea on September 18-22, 2022. INTERSPEECH is the world's largest and most comprehensive conference on the science and technology of spoken language processing. INTERSPEECH conferences emphasize interdisciplinary approaches addressing all aspects of speech science and technology, ranging from basic theories to advanced applications.

The theme of INTERSPEECH 2022 is 'Human and Humanizing Speech Technology'. Throughout human history, our ability to formulate thoughts and complex feelings, and to communicate them, has evolved essentially by talking to the people around us. However, as machines become ever more present in our daily lives, so grows our need for natural interaction with them. With the rapid progress of AI in speech and language applications over 5G network services provided worldwide, we are at the onset of realizing our vision of a full ecosystem of natural speech and language technology applications. The conference theme of 'Human and Humanizing Speech Technology' expresses the commitment of the scientific and industrial community to continue the effort in speech science toward humanizing spoken language technology, so that its impact becomes a game changer that goes beyond the current state of the art and ultimately benefits society as a whole.

Paper submission opened on January 21, 2022. Submit your papers by March 21, 2022 to be considered!



Call for Papers

INTERSPEECH 2022 seeks original, novel and innovative papers covering all aspects of speech science and technology, ranging from basic theories to advanced applications. Papers addressing scientific area topics related to the conference theme should be submitted electronically through the START V2 system. The working language of the conference is English, and papers must be written in English. The paper length is up to four pages in two columns; an additional page may be used for references only. Paper submissions must conform to the format defined in the paper preparation guidelines in the author's kit on the conference webpage. Submissions may also be accompanied by additional files such as multimedia files. Authors must declare that their contributions are original and have not been submitted elsewhere for publication. Contributed papers will undergo a rigorous peer-review process. Each paper will be evaluated on the basis of the following criteria: novelty and originality, technical correctness, clarity of presentation, key strengths, and quality of references.


Scientific Areas and Topics

INTERSPEECH 2022 embraces a broad range of science and technology in speech, language and communication areas, including, but not limited to, the following topics:

  • Speech Perception, Production and Acquisition
  • Phonetics, Phonology, and Prosody
  • Analysis of Paralinguistics in Speech and Language
  • Speaker and Language Identification
  • Analysis of Speech and Audio Signals
  • Speech Coding and Enhancement
  • Speech Synthesis and Spoken Language Generation
  • Speech Recognition - Signal Processing, Acoustic Modeling, Robustness, Adaptation
  • Speech Recognition - Architecture, Search, and Linguistic Components
  • Speech Recognition - Technologies and Systems for New Applications
  • Spoken Dialog Systems and Analysis of Conversation
  • Spoken Language Processing - Translation, Information Retrieval, Summarization, Resources and Evaluation
  • Speech, voice, and hearing disorders

 



Technical Program Committee Chairs

Kyogu Lee | Seoul National University, Korea (kglee@snu.ac.kr)
Lori Lamel | CNRS-LISN, France (lamel@limsi.fr)
Mark Hasegawa-Johnson | University of Illinois Urbana-Champaign, USA (jhasegaw@illinois.edu)
Karen Livescu | TTIC-University of Chicago, USA (klivescu@ttic.edu)
Okim Kang | Northern Arizona University, USA (okim.kang@nau.edu)


Paper Submission



Important Dates for INTERSPEECH 2022

Papers
Paper Submission Deadline: March 21, 2022
Paper Update Deadline: March 28, 2022
Paper Notification: June 08, 2022
Author Notification: June 13, 2022
Final Paper Upload: June 23, 2022


Visit the INTERSPEECH 2022 Website for More Information


Website: www.interspeech2022.org / E-mail: info@interspeech2022.org

Back  Top

3-1-3(2023-08-20) Interspeech 2023, Dublin, Ireland

ISCA has reached the decision to hold INTERSPEECH 2023 in Dublin, Ireland (August 20-24, 2023).

Back  Top

3-1-4(2024-07-02) 12th Speech Prosody Conference @Leiden, The Netherlands

Dear Speech Prosody SIG Members,

 

Professor Barbosa and I are very pleased to announce that the 12th Speech Prosody Conference will take place in Leiden, the Netherlands, July 2-5, 2024, and will be organized by Professors Yiya Chen, Amalia Arvaniti, and Aoju Chen.  (Of the 303 votes cast, 225 were for Leiden, 64 for Shanghai, and 14 indicated no preference.) 

 

Also, I'd like to remind everyone that nominations for SProSIG officers for 2022-2024 are still being accepted this week, using the form at http://sprosig.org/about.html, to be sent to Professor Keikichi Hirose. If you are considering nominating someone, including yourself, feel free to contact me or any current officer to discuss what's involved and what help is most needed.

 

Nigel Ward, SProSIG Chair

Professor of Computer Science, University of Texas at El Paso

CCSB 3.0408,  +1-915-747-6827

nigel@utep.edu    https://www.cs.utep.edu/nigel/   

 

 

Back  Top

3-1-5(2024-09-01) Interspeech 2024, Jerusalem, Israel.

The ISCA conference committee has decided that Interspeech 2024 will be held in

Jerusalem, Israel, from

September 1 to September 5.

Back  Top

3-1-6 9th Students meet Experts at Interspeech 2022

9th Students meet Experts at Interspeech 2022

Date: Wednesday, September 21st, from 16:00 to 17:00 KST (Korea Standard Time)

Location: Incheon, South Korea and online (tentative)

Panel of Experts: To be confirmed

 

After successful editions in Lyon (2013), Singapore (2014), San Francisco (2016), Stockholm (2017), Hyderabad (2018), Graz (2019), virtually in both Shanghai (2020) and Brno (2021), we are excited to announce that the Students Meet Experts event is now coming to Interspeech 2022 in Incheon, South Korea. We will have a panel discussion with experts from academia and industry in a hybrid format.

 

We encourage you to submit questions before the event. A selection of the submitted questions will be answered by the panel of experts. Please keep in mind that the experts and the audience come from different fields, so field-specific and technical questions are less likely to be presented to the panel. To submit your questions and/or register for this event, we ask you to fill in the form, which will be announced soon.

 

Registration and question form: https://forms.gle/yuQh4Zq1wxkAoRHb8

 

Contact:

Thomas Rolland: sac@isca-speech.org

Back  Top


3-1-8 ISCA INTERNATIONAL VIRTUAL SEMINARS

 

Now is the time of year when seminar programmes get fixed up, so please direct the attention of whoever organises your seminars to the ISCA INTERNATIONAL VIRTUAL SEMINARS scheme (introduction below). There is now a good choice of speakers; see

 

https://www.isca-speech.org/iscaweb/index.php/distinguished-lecturers/online-seminars

ISCA INTERNATIONAL VIRTUAL SEMINARS

A seminar programme is an important part of the life of a research lab, especially for its research students, but it is difficult for scientists to travel to give talks at the moment. However, presentations may be given online and, paradoxically, it is thus possible for labs to engage international speakers whom they would not normally be able to afford.

ISCA has set up a pool of speakers prepared to give on-line talks. In this way we can enhance the experience of students working in our field, often in difficult conditions. To find details of the speakers,

  • visit isca-speech.org
  • Click Distinguished Lecturers in the left panel
  • Online Seminars then appears beneath Distinguished Lecturers: click that.

Speakers may pre-record their talks if they wish, but they don't have to. It is up to the host lab to contact speakers and make the arrangements. Talks can be state-of-the-art, or tutorials.

If you make use of this scheme and arrange a seminar, please send brief details (lab, speaker, date) to education@isca-speech.org

If you wish to join the scheme as a speaker, all we need is a title, a short abstract, a one-paragraph biography and contact details. Please send them to education@isca-speech.org


PS. The online seminar scheme is now up and running, with 7 speakers so far:

 

Jean-Luc Schwartz, Roger Moore, Martin Cooke, Sakriani Sakti, Thomas Hueber, John Hansen and Karen Livescu.



Back  Top

3-1-9 Speech Prosody courses

Dear Speech Prosody SIG Members,

We would like to draw your attention to three upcoming short courses from the Luso-Brazilian Association of Speech Sciences:

- Prosody & Rhythm: applications to teaching rhythm,
  Donna Erickson (Haskins), March 16, 19, 23 and 26

- Prosody, variation and contact,
  Barbara Gili Fivela (University of Salento, Italy), April 19, 21, 23, 26 and 28

- Rhythmic analysis of languages: main challenges,
  Marisa Cruz (University of Lisbon), June 2, 3, 4, 7, 8 and 10

For details:
  http://www.letras.ufmg.br/padrao_cms/index.php?web=lbass&lang=2&page=3670&menu=&tipo=1
 
 
 
Plinio Barbosa and Nigel Ward

Back  Top

3-2 ISCA Supported Events
3-2-1(2022-09-07) CfP Special sessions of SIGDIAL, Edinburgh, UK
The Special Interest Group on Discourse and Dialogue (SIGDIAL) organizers welcome the submission of
special session proposals on any topic of interest to the discourse and dialogue communities. Topics of
interest include, but are not limited to: Role of Discourse in NLP Applications, Explainable AI, Evaluation,
Annotation, End-to-end Systems, Vision and Language, and Human-Robot Interaction.
 
A SIGDIAL special session is the length of a regular session at the conference, and may be organized as a
poster session, a panel session, a poster session with panel discussion, or an oral presentation session.
Special sessions may, at the discretion of the SIGDIAL organizers, be held as parallel sessions. The papers
submitted to special sessions are handled by the special session organizers, but for the submitted papers
to be in the SIGDIAL proceedings, they have to undergo the same review process as regular papers.
The reviewers for the special session papers will be taken from the SIGDIAL program committee itself,
taking into account the suggestions of the session organizers, and the program chairs will make acceptance
decisions. In other words, special session organizers decide what appears in the session, while the program
chairs decide what appears in the proceedings and the rest of the conference program.
 
Submissions
Those wishing to organize a special session should prepare a two-page proposal containing: a summary of
the topic of the special session; a list of organizers and sponsors; a list of people who may submit and
participate in the session; and a requested format (poster/panel/oral session).
 
These proposals should be sent to conference@sigdial.org by the special session proposal deadline.
Special session proposals will be reviewed jointly by the general chair and program co‐chairs.
 
Links
Those wishing to propose a special session may want to look at some of the sessions organized at recent
SIGDIAL meetings.
 
Important Dates
Mar 12, 2022: Special Session Proposal Deadline
Mar 26, 2022: Special Session Notification
 

 

---
Back  Top

3-2-2(2022-09-07) The 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL 2022), Edinburgh, UK (update)
The 23rd Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL 2022) will be held as a hybrid conference at Heriot-Watt University in Edinburgh and online between September 7-9, 2022.
 
The SIGDIAL venue provides a regular forum for the presentation of cutting edge research in dialogue and discourse to both academic and industry researchers. Continuing a series of 22 successful previous meetings, this conference spans the research interest areas of discourse and dialogue. The conference is sponsored by the SIGdial organization, which serves as the Special Interest Group in discourse and dialogue for both ACL and ISCA.
 
Topics of Interest
 
We welcome formal, corpus-based, implementation, experimental, or analytical work on discourse and dialogue including, but not restricted to, the following themes:
 
  • Discourse Processing: Rhetorical and coherence relations, discourse parsing and discourse connectives. Reference resolution. Event representation and causality in narrative. Argument mining. Quality and style in text. Cross-lingual discourse analysis. Discourse issues in applications such as machine translation, text summarization, essay grading, question answering and information retrieval.
  • Dialogue Systems: Open domain, task oriented dialogue, and chat systems. Knowledge graphs and dialogue. Dialogue state tracking and policy learning. Social and emotional intelligence. Dialogue issues in virtual reality and human-robot interaction. Entrainment, alignment and priming. Generation for dialogue. Style, voice, and personality. Spoken, multi-modal, embedded, situated, and text/web based dialogue systems, their components, evaluation and applications.
  • Corpora, Tools and Methodology: Corpus-based and experimental work on discourse and dialogue, including supporting topics such as annotation tools and schemes, crowdsourcing, evaluation methodology and corpora.
  • Pragmatic and/or Semantic Modeling: Pragmatics or semantics of discourse and dialogue (i.e., beyond a single sentence), e.g., rational speech act, conversation acts, intentions, conversational implicature, presuppositions.
  • Applications of Dialogue and Discourse Processing Technology
 
Submissions
 
The program committee welcomes the submission of long papers, short papers, and demo descriptions. Papers submitted as long papers may be accepted as long papers for oral presentation or long papers for poster presentation. Accepted short papers will be presented as posters.
 
  • Long paper submissions must describe substantial, original, completed and unpublished work. Wherever appropriate, concrete evaluation and analysis should be included. Long papers must be no longer than 8 pages, including title, text, figures and tables. An unlimited number of pages is allowed for references. Two additional pages are allowed for appendices containing sample discourses/dialogues and algorithms, and an extra page is allowed in the final version to address reviewers’ comments.
  • Short paper submissions must describe original and unpublished work. Please note that a short paper is not a shortened long paper. Instead short papers should have a point that can be made in a few pages, such as a small, focused contribution; a negative result; or an interesting application nugget. Short papers should be no longer than 4 pages including title, text, figures and tables. An unlimited number of pages is allowed for references. One additional page is allowed for sample discourses/dialogues and algorithms, and an extra page is allowed in the final version to address reviewers’ comments.
  • Demo descriptions should be no longer than four pages including title, text, examples, figures, tables and references. A separate one-page document should be provided to the program co-chairs for demo descriptions, specifying furniture and equipment needed for the demo.
 
Authors are encouraged to also submit additional accompanying materials, such as corpora (or corpus examples), demo code, videos and sound files.
 
Multiple Submissions
 
SIGDIAL 2022 cannot accept work for publication or presentation that will be (or has been) published elsewhere, or that has been or will be submitted to other meetings or publications whose review periods overlap with that of SIGDIAL. Any questions regarding submissions can be sent to sigdial2022pcs@googlegroups.com
 
Blind Review
 
Building on previous years’ move to anonymous long and short paper submissions, SIGDIAL 2022 will follow the ACL policies for preserving the integrity of double blind review (see author guidelines). Unlike long and short papers, demo descriptions will not be anonymous. Demo descriptions should include the authors’ names and affiliations, and self-references are allowed.
 
Submission Format
 
All long, short, and demonstration submissions must follow the two-column ACL format, which is available as an Overleaf template and also downloadable directly (LaTeX and Word).
 
Submissions must conform to the official ACL style guidelines, which are contained in these templates. Submissions must be electronic, in PDF format.
 
Submission Link and Deadline
 
SIGDIAL will accept regular submissions through the softconf system, as well as commitment of already reviewed papers through the ACL Rolling Review (ARR) system.
 
Regular submission:
 
Authors have to fill in the submission form in the START system and upload an initial pdf of their papers before May 11, 2022 (23:59 GMT-11). ***The title, authors, and abstract cannot be changed after this date.*** The final PDF needs to be uploaded by May 19, 2022 (23:59 GMT-11). Details will be posted at the conference website.
 
Conference Website: https://2022.sigdial.org/ 
 
For special session long and short papers please select the session for “Submission Type”.
 
Commitment via ACL Rolling Review (ARR):
 
Please refer to the ARR Call for Papers for detailed information about submission guidelines to ARR. The commitment deadline for SIGDIAL 2022 (deadline for authors to submit their reviewed papers, reviews, and meta-review to SIGDIAL 2022) is June 18, 2022. Note that the paper needs to be fully reviewed by ARR in order to make a commitment, thus the latest date for ARR submission will be April 15, 2022.
 
Mentoring
 
Acceptable submissions that require language (English) or organizational assistance will be flagged for mentoring, and accepted with a recommendation to revise with the help of a mentor. An experienced mentor who has previously published in the SIGDIAL venue will then help the authors of these flagged papers prepare their submissions for publication.
 
Best Paper Awards
 
In order to recognize significant advancements in dialogue/discourse science and technology, SIGDIAL 2022 will include best paper awards. All papers at the conference are eligible for the best paper awards. A selection committee consisting of prominent researchers in the fields of interest will select the recipients of the awards.
Back  Top

3-2-3(2023-01-07) SLT-CODE Hackathon Announcement , Doha, Qatar

SLT-CODE Hackathon Announcement

 

Have you ever asked yourself how your smartphone recognizes what you say and who you are?

 

Have you ever thought about how machines recognize different languages?

 

If so, join us for a two-day speech and language technology hackathon. We will answer these questions and build fantastic systems, with the guidance of top language and speech scientists, in a collaborative environment.

 

The two-day speech and language technology hackathon will take place during the IEEE Spoken Language Technology (SLT) Workshop in Doha, Qatar, on January 7th and 8th, 2023. This year's Hackathon will be inspiring, momentous, and fun. The goal is to build a diverse community of people who want to explore and envision how machines understand the world's spoken languages.

 

During the Hackathon, you will be exposed to (but not limited to) speech and language toolkits like ESPnet, SpeechBrain, K2/Kaldi, Hugging Face, and TorchAudio, as well as commercial APIs like Amazon Lex, and you will get hands-on experience with this technology.

 

At the end of the Hackathon, every team will share their findings with the rest of the participants. Selected projects will have the opportunity to be presented at the SLT workshop.

 

The Hackathon will be at the Qatar Computing Research Institute (QCRI) in Doha, Qatar (GMT+3). In-person participation is preferred; however, remote participation is possible by joining a team with at least one person being local.

 

More information on how to apply and important dates are available at our website https://slt2022.org/hackathon.php

 

Interested? Apply here: https://forms.gle/a2droYbD4qset8ii9 The deadline for registration is September 30th, 2022.

 

If you have immediate questions, don't hesitate to contact our hackathon chairs directly at hackathon.slt2022@gmail.com.

Back  Top

3-2-4(2023-01-09) SLT2022, Doha, Qatar

Languages of the World, Doha, Qatar

9th to 12th January, 2023

 



CALL FOR PAPERS IS ALREADY OPEN

The 2022 IEEE Spoken Language Technology Workshop (SLT 2022) will be held on 9-12 January 2023 in Doha, Qatar. SLT 2022 will be the first speech conference to be held in the Middle East and the first speech conference to be held in an Arabic-speaking nation.

The SLT Workshop is a flagship event of IEEE Speech and Language Processing Technical Committee. The workshop is held every two years and has a tradition of bringing together researchers from academia and industry in an intimate and collegial setting to discuss problems of common interest in automatic speech recognition and understanding.

 

More information:

https://slt2022.org

 

We invite papers in all areas of spoken language processing, with emphasis placed on the following topics:

  • Automatic speech recognition
  • Conversational/multispeaker ASR
  • Far-field speech processing
  • Speaker and language recognition
  • Spoken language understanding
  • Spoken dialog systems
  • Low resource/multilingual language processing
  • Spoken document retrieval
  • Speech-to-speech translation
  • Text-to-speech systems
  • Speech summarization
  • New applications of automatic speech recognition
  • Audio-visual/multimodal speech processing
  • Emotion recognition from speech



 

SLT 2022 will also feature a Speech Hackathon to provide a hands-on element for students and young professionals.

 

 

 

Important dates

Paper submission: July 15, 2022
Paper Update: July 21, 2022
Rebuttal period: August 26-31, 2022
Paper Notification: September 30, 2022
Early Registration Period: October 2022
Speech Hackathon: January 8-9, 2023
Arabic Speech Meeting: January 13, 2023

 

 

General Chairs

Ahmed Ali (QCRI)
Bhuvana Ramabhadran (Google)

Technical Chairs

Shinji Watanabe (Carnegie Mellon University)
Mona Diab (Facebook)
Sanjeev Khudanpur (Johns Hopkins University)
Julia Hirschberg (Columbia University)
Murat Saraclar (Bogazici University)
Marc Delcroix (NTT Communication Science Laboratories)

Regional Publicity Chairs

Sebastian Möller (TU Berlin)
Tomoki Toda (Nagoya University)

Finance Chairs

Jan Trmal (Johns Hopkins University)
Juan Rafael Orozco Arroyave (UdeA, Colombia)

Sponsorship Chairs

Murat Akbacak (Apple)
Eman Fituri (QCRI)
Jimmy Kunzmann (Amazon)

SLTC Liaison

Kyu Jeong Han (ASAPP)

Publication Chairs

Alberto Abad Gareta (INESC-ID/IST)
Erfan Loweimi (King's College London)

Invited Speaker Chairs

Andrew Rosenberg (Google)
Nancy F. Chen (Institute for Infocomm Research (I2R))

Challenge & Demonstration Chairs

Imed Zitouni (Google)
Jon Barker (University of Sheffield)
Seokhwan Kim (Amazon)
Peter Bell (University of Edinburgh)

Speech Hackathon Organizing Committee

Thomas Schaaf (3M | M*Modal)
Gianni Di Caro (Carnegie Mellon University - Qatar)
Shinji Watanabe - ESPnet (Carnegie Mellon University)
Paola Garcia - Kaldi/K2 (Johns Hopkins University)
Mirco Ravanelli - SpeechBrain (Université de Montréal)
Alessandra Cervone (Amazon)
Mus'ab Husaini (QCRI)

Advisory Board

Jim Glass (MIT)
Kemal Oflazer (Carnegie Mellon University - Qatar)
Helen Meng (The Chinese University of Hong Kong)
Haizhou Li (National University of Singapore)

Local Arrangements Chairs

Shammur Chowdhury (QCRI)
Houda Bouamor (Carnegie Mellon University - Qatar)

Student Coordinator

Berrak Sisman (Singapore University of Technology and Design)

Back  Top

3-3 Other Events
3-3-1(2022-08-17) 13th Nordic Prosody Conference, Sonderborg, Denmark

13th Nordic Prosody Conference

-------------------------------

Sonderborg, Denmark

17-19 August 2022

Topic: Applied and Multimodal Prosody Research



The 13th edition of the Nordic Prosody (NP) conference series is proudly hosted by the Centre of Industrial Electronics (CIE) and the CIE Acoustics Lab at the University of Southern Denmark on the Alsion science campus, Sonderborg, Denmark. The conference will be held 17-19 August 2022.



Website: https://event.sdu.dk/13rdnordicprosody/main



The University of Southern Denmark (SDU) is both the third-largest and the third-oldest Danish university. Since the introduction of the ranking systems in 2012, the University of Southern Denmark has consistently been ranked as one of the top 50 young universities in the world by both the Times Higher Education World University Rankings and the QS World University Rankings. The SDU is also among the top 20 universities in Scandinavia.



Nordic Prosody conferences take place every 4 years. The first one was in Lund in 1978, organized by Eva Gårding, Gösta Bruce and Robert Bannert. The 12th Nordic Prosody was in 2016 in Trondheim, Norway. The conference series focuses on the forms and functions of prosodic patterns in Nordic languages and in languages spoken all around the Baltic Sea coastline. Contributions on all the various aspects of phonetics, phonology, and speech typology are welcome. Papers presenting new corpora, methods, or devices can be submitted as well. We also encourage researchers from neighboring disciplines like (second-language) pedagogy, acoustics, human-machine interaction, and voice pathology to submit contributions to the conference.



Keynote Speakers

- David House (KTH Stockholm, Sweden) & Gilbert Ambrazaitis (Linnaeus University, Sweden): The multimodal nature of prominence

- Wim van Dommelen (Norwegian University of Science and Technology, Trondheim, Norway): Interactions of segmental and prosodic parameters

- Nicolai Pharao (Copenhagen University, Denmark): Processing prosody – recognizing speakers and recognizing words



Scientific Areas (not exhaustive)

  • Phonology and phonetics of prosody

  • Production and perception of prosody

  • Acquisition, learning and teaching of prosody

  • Assessment of prosody and measures to evaluate prosodic skills

  • Non-native aspects in the production and perception of prosody

  • Socio-phonetic aspects of prosody

  • Prosodic variation in continuous speech

  • Speech processing of and for prosodic patterns

  • Prosody in and for talking machines and robots

  • Psychological and neural mechanisms of prosody

  • Pathologies and therapies related to prosody

  • Resources related to prosody: Speech corpora, annotation systems, tools & methods

  • Cross-linguistic and cross-cultural aspects of prosody

  • Multimodal signals related to prosody

  • Applied prosody



Conference proceedings will be published in a peer-reviewed volume by Sciendo/de Gruyter.


Important dates:
05 June 2022 Abstract submission deadline (through EasyChair)

01 July 2022 Notification of acceptance
31 July 2022 Early bird registration deadline
17-19 August 2022 13th Nordic Prosody Conference, Sonderborg, Denmark

01 November 2022 Full-paper submission deadline

Registrations are made through the conference website under “Sign up”. Abstracts should be submitted via the following EasyChair link: https://easychair.org/conferences/?conf=np13 . Please find the formatting guidelines and templates for both the abstract and the full paper below or on the “Download” subpage. Please note that the post-conference full-paper submission is not made through EasyChair. To submit your full paper, please use this Sciendo link: https://sciendo.com/book/9788366675728



We wish all of you a good start to the new lecture term.

The NP13 organizing committee

Back  Top

3-3-2(2022-08-24) 11th International Workshop on Haptic and Audio Interaction Design (HAID 2022) , hybrid mode, Queen Mary University, London, UK



We are pleased to announce the 11th International Workshop on Haptic and Audio Interaction Design (HAID 2022) that will be held in hybrid mode on 24–26 August 2022 at Queen Mary University of London in London, UK.

 

https://haid2022.qmul.ac.uk/

 

For questions please contact us at haid2022@qmul.ac.uk

 

To keep in touch and up to date on news related to the HAID community, please join our Google group (https://groups.google.com/g/haid-community) and follow us on twitter (https://twitter.com/HAID_conference).

 

 

===== Call for papers & demos =====

 

We invite submissions reporting on completed research and live demos at the intersection of haptics, audio, and human-computer interaction. We also welcome papers focusing on one of these fields with applications to the others.

 

We particularly welcome contributions, both theoretical and empirical, in the following areas: 

- Design of audio and haptic feedback for health & wellbeing

- Musical haptics & augmented instruments

 

Contributions in the following areas are also welcome:

- Novel haptic and auditory interfaces

- Perception & evaluation of multimodal and cross-sensory interactions

- Design principles for haptic and auditory interfaces

- Design of audio and haptic feedback for entertainment and creative applications 

- Affective and semiotic roles of haptics and audio in interaction

- Leveraging auditory-tactile correspondences in interaction design

 

Important dates

Papers - submission: 29 April 2022 (11:59 PM AoE)

Papers - acceptance: 30 May 2022 (11:59 PM AoE)

Papers - camera ready: 10 June 2022 (11:59 PM AoE)

Demos - submission: 10 June 2022 (11:59 PM AoE)

Demos - acceptance: 30 June 2022 (11:59 PM AoE)

 

More information is available here:

https://haid2022.qmul.ac.uk/submissions/papers-and-demos/ 

 

 

===== Call for work in progress =====

 

HAID 2022 seeks work in progress submissions, which describe recently completed work or highly relevant results of work in progress in all areas related to haptics, audio, and interaction design. 

 

We particularly encourage work in progress submissions from “newcomers”—master's students or early-stage PhD students without a supervisor who is part of the HAID community, especially from underrepresented groups.

 

Important dates:

Submission deadline: 10 June 2022 (11:59 PM AoE)

Acceptance notification: 30 June 2022 (11:59 PM AoE)

 

More information is available here:

https://haid2022.qmul.ac.uk/submissions/work-in-progress/ 

 

 

===== Call for workshops =====

 

We also invite proposals for workshops to be held during the 1st day of the conference. These proposals may take the form of theoretical or hands-on tutorials on specific HAID topics or forums for discussion and development.

 

Important dates:

Proposal submission: 8 April 2022 (11:59 PM AoE)

Acceptance notification: 15 April 2022 (11:59 PM AoE)

 

More information is available here:

https://haid2022.qmul.ac.uk/submissions/workshops/ 

Back  Top

3-3-3(2022-09-05) CfP Twenty-fifth International Conference on TEXT, SPEECH and DIALOGUE (TSD 2022), Brno, Czech Republic
TSD 2022 - SECOND CALL FOR PAPERS
         *********************************************************

Twenty-fifth International Conference on TEXT, SPEECH and DIALOGUE (TSD 2022)
              Brno, Czech Republic, 5-9 September 2022
                    http://www.tsdconference.org/


THE SUBMISSION DEADLINE has been EXTENDED to:

    April 22 2022 ............ Submission of full papers


KEYNOTE SPEAKERS

    Eneko Agirre, Universidad del País Vasco, Spain
    Anna Rogers, University of Copenhagen, Denmark

The conference is organized by the Faculty of Informatics, Masaryk
University, Brno, and the Faculty of Applied Sciences, University of
West Bohemia, Pilsen.  The conference is supported by International
Speech Communication Association.

Venue: Brno, Czech Republic


SUBMISSION OF PAPERS

Authors are invited to submit a full paper not exceeding 12 pages
formatted in the LNCS style (including references). Those accepted
will be presented either orally or as posters. The decision about the
presentation format will be based on the recommendation of the
reviewers.  Proceedings papers do not distinguish the presentation format.
The authors are asked to submit their papers using the on-line form
accessible from the conference website.

Papers submitted to TSD 2022 must not be under review by any other
conference or publication during the TSD review cycle, and must not be
previously published or accepted for publication elsewhere.

As reviewing will be blind, the paper should not include the authors'
names and affiliations. Furthermore, self-references that reveal the
author's identity, e.g., 'We previously showed (Smith, 1991) ...',
should be avoided. Instead, use citations such as 'Smith previously
showed (Smith, 1991) ...'.  Papers that do not conform to the
requirements above are subject to be rejected without review.

The authors are strongly encouraged to write their papers in TeX or
LaTeX formats. These formats are necessary for the final versions of
the papers that will be published in the Springer Lecture Notes.

The paper format for review has to be in the PDF format with all
required fonts included. Upon notification of acceptance, presenters
will receive further information on submitting their camera-ready and
electronic sources (for detailed instructions on the final paper
format see https://www.tsdconference.org/tsd2022/paper_instr.html).

Authors are also invited to present actual projects, developed
software or interesting material relevant to the topics of the
conference.  The presenters of demonstrations should provide an
abstract not exceeding one page. The demonstration abstracts will not
appear in the conference proceedings.


TSD SERIES

TSD series evolved as a prime forum for interaction between researchers in
both spoken and written language processing from all over the world.
Proceedings of TSD form a book published by Springer-Verlag in their
Lecture Notes in Artificial Intelligence (LNAI) series. TSD Proceedings
are regularly indexed by Thomson Reuters Conference Proceedings Citation
Index/Web of Science.  Moreover, LNAI series are listed in all major
citation databases such as DBLP, SCOPUS, EI, INSPEC or COMPENDEX.


CALL for SATELLITE WORKSHOP PROPOSALS
https://www.tsdconference.org/tsd2022/conf_workshop_proposals.html

The TSD 2022 conference will be accompanied by one-day satellite workshops
or project meetings with organizational support by the TSD organizing
committee. The organizing committee can arrange for a meeting room at the
conference venue and prepare a workshop proceedings as a book with ISBN by
a local publisher. The workshop papers that will pass also the standard TSD
review process will appear in the Springer proceedings.  Each workshop is
a subject to proposal that should be sent via the proposal submission form
or discussed via the contact e-mail tsd2022@tsdconference.org ahead of the
respective deadline.


TOPICS

Topics of the conference will include (but are not limited to):

    Corpora and Language Resources (monolingual, multilingual,
    text and spoken corpora, large web corpora, disambiguation,
    specialized lexicons, dictionaries)

    Speech Recognition (multilingual, continuous, emotional
    speech, handicapped speaker, out-of-vocabulary words,
    alternative way of feature extraction, new models for
    acoustic and language modelling)

    Tagging, Classification and Parsing of Text and Speech
    (morphological and syntactic analysis, synthesis and
    disambiguation, multilingual processing, sentiment analysis,
    credibility analysis, automatic text labeling, summarization,
    authorship attribution)

    Speech and Spoken Language Generation (multilingual, high
    fidelity speech synthesis, computer singing)

    Semantic Processing of Text and Speech (information
    extraction, information retrieval, data mining, semantic web,
    knowledge representation, inference, ontologies, sense
    disambiguation, plagiarism detection, fake news detection)

    Integrating Applications of Text and Speech Processing
    (machine translation, natural language understanding,
    question-answering strategies, assistive technologies)

    Automatic Dialogue Systems (self-learning, multilingual,
    question-answering systems, dialogue strategies, prosody in
    dialogues)

    Multimodal Techniques and Modelling (video processing, facial
    animation, visual speech synthesis, user modelling, emotions
    and personality modelling)

Papers on processing of languages other than English are strongly
encouraged.


PROGRAM COMMITTEE

    Elmar Noeth, Germany (general chair)
    Rodrigo Agerri, Spain
    Eneko Agirre, Spain
    Vladimir Benko, Slovakia
    Archna Bhatia, USA
    Jan Cernocky, Czech Republic
    Simon Dobrisek, Slovenia
    Kamil Ekstein, Czech Republic
    Karina Evgrafova, Russia
    Yevhen Fedorov, Ukraine
    Volker Fischer, Germany
    Darja Fiser, Slovenia
    Lucie Flek, Germany
    Bjorn Gamback, Norway
    Radovan Garabik, Slovakia
    Alexander Gelbukh, Mexico
    Louise Guthrie, USA
    Tino Haderlein, Germany
    Jan Hajic, Czech Republic
    Eva Hajicova, Czech Republic
    Yannis Haralambous, France
    Hynek Hermansky, USA
    Jaroslava Hlavacova, Czech Republic
    Ales Horak, Czech Republic
    Eduard Hovy, USA
    Denis Jouvet, France
    Maria Khokhlova, Russia
    Aidar Khusainov, Russia
    Daniil Kocharov, Russia
    Miloslav Konopik, Czech Republic
    Ivan Kopecek, Czech Republic
    Valia Kordoni, Germany
    Evgeny Kotelnikov, Russia
    Pavel Kral, Czech Republic
    Siegfried Kunzmann, USA
    Nikola Ljubesic, Croatia
    Natalija Loukachevitch, Russia
    Bernardo Magnini, Italy
    Oleksandr Marchenko, Ukraine
    Vaclav Matousek, Czech Republic
    Roman Moucek, Czech Republic
    Agnieszka Mykowiecka, Poland
    Hermann Ney, Germany
    Joakim Nivre, Sweden
    Juan Rafael Orozco-Arroyave, Colombia
    Karel Pala, Czech Republic
    Maciej Piasecki, Poland
    Josef Psutka, Czech Republic
    James Pustejovsky, USA
    German Rigau, Spain
    Paolo Rosso, Spain
    Leon Rothkrantz, The Netherlands
    Anna Rumshisky, USA
    Milan Rusko, Slovakia
    Pavel Rychly, Czechia
    Mykola Sazhok, Ukraine
    Odette Scharenborg, The Netherlands
    Pavel Skrelin, Russia
    Pavel Smrz, Czech Republic
    Petr Sojka, Czech Republic
    Georg Stemmer, Germany
    Marko Robnik Sikonja, Slovenia
    Marko Tadic, Croatia
    Jan Trmal, Czechia
    Tamas Varadi, Hungary
    Zygmunt Vetulani, Poland
    Aleksander Wawer, Poland
    Pascal Wiggers, The Netherlands
    Marcin Wolinski, Poland
    Alina Wroblewska, Poland
    Victor Zakharov, Russia
    Jerneja Zganec Gros, Slovenia


FORMAT OF THE CONFERENCE

The conference program will include presentation of invited papers,
oral presentations, and poster/demonstration sessions. Papers will be
presented in plenary or topic-oriented sessions. Hopefully, after two
COVID years, the conference can be held on-site again.

Social events including a trip in the vicinity of Brno will allow
for additional informal interactions.


IMPORTANT DATES

April 22 2022 ............ Submission of full papers
June 5 2022 .............. Notification of acceptance
June 15 2022 ............. Final papers (camera ready) and registration
August 8 2022 ............ Submission of demonstration abstracts
August 15 2022 ........... Notification of acceptance for
                           demonstrations sent to the authors
September 5-9 2022 ...... Conference date

Submission of abstracts serves only to help organize the review
process - please submit your abstract as soon as possible.
For the actual review, a full paper submission is necessary.

The accepted conference contributions will be published in Springer
proceedings that will be made available to participants at the time
of the conference.


OFFICIAL LANGUAGE

The official language of the conference is English.


ACCOMMODATION

The organizing committee will arrange discounts on accommodation in
the 4-star hotel at the conference venue. The current prices of the
accommodation will be available at the conference website.


ADDRESS

All correspondence regarding the conference should be
addressed to
   
    Ales Horak, TSD 2022
    Faculty of Informatics, Masaryk University
    Botanicka 68a, 602 00 Brno, Czech Republic
    phone: +420-5-49 49 18 63
    fax: +420-5-49 49 18 20
    email: tsd2022@tsdconference.org

The official TSD 2022 homepage is: http://www.tsdconference.org/


LOCATION

Brno is the second largest city in the Czech Republic with a
population of almost 400,000 and is the country's judicial and
trade-fair center. Brno is the capital of South Moravia, which is
located in the south-east part of the Czech Republic and is known
for a wide range of cultural, natural, and technical sights.
South Moravia is a traditional wine region. Brno had been a Royal
City since 1347 and with its six universities it forms a cultural
center of the region.

Brno can be reached easily by direct flights from London, and by
trains or buses from Vienna (150 km) or Prague (230 km).

For the participants with some extra time, nearby places may
also be of interest.  Local ones include: Brno Castle now called
Spilberk, Veveri Castle, the Old and New City Halls, the
Augustine Monastery with St. Thomas Church and crypt of Moravian
Margraves, Church of St.  James, Cathedral of St. Peter & Paul,
Cartesian Monastery in Kralovo Pole, the famous Villa Tugendhat
designed by Mies van der Rohe along with other important
buildings of between-war Czech architecture.

For those willing to venture out of Brno, Moravian Karst with
Macocha Chasm and Punkva caves, battlefield of the Battle of
three emperors (Napoleon, Russian Alexander and Austrian Franz
- Battle by Austerlitz), Chateau of Slavkov (Austerlitz),
Pernstejn Castle, Buchlov Castle, Lednice Chateau, Buchlovice
Chateau, Letovice Chateau, Mikulov with one of the largest Jewish
cemeteries in Central Europe, Telc - a town on the UNESCO
heritage list, and many others are all within easy reach.
Back  Top

3-3-4(2022-09-06) CfP Voices in and out of Place: Misplaced, Displaced, Replaced and Interlaced Voices (online conference)
Call for Conference Papers and Creative Practice:
Voices in and out of Place: Misplaced, Displaced, Replaced and Interlaced Voices
6-7 September 2022
EXTENDED Submissions Deadline: 13 May 2022
The International Centre for Music Studies at Newcastle University (ICMuS) is hosting the second biennial on-line Vicarious Vocalities, Simulated Songs conference in collaboration with the Centre for Interdisciplinary Voice Studies, now celebrating its tenth year. The theme of this year’s conference is “Voices in and out of Place: Misplaced, Displaced, Replaced and Interlaced Voices”, and is intended to cover both new and longstanding questions around the location or place of the voice with regard to the body AND equally perennial debates around the voice in relation to geographical and temporal place and space.
We are seeking contributions from scholars and creative practitioners engaging with the following questions:
Where/when is the voice? Where/when does it come from and where and when does it go? Where and when does it “belong”? And what happens to it once it leaves the vocal apparatus and travels free?
Through a cross-disciplinary, cross-cultural, and cross-practice approach, this edition of Vicarious Vocalities, Simulated Songs will consider how the voice travels within and without the boundaries of human bodies, societies, geographies, temporalities, and technologies. What are some of the ways we can attempt to map such migrations? What routes can be traced when we consider voices in and out of place?
We talk of finding one’s voice, losing one’s voice; of voices usurped, silenced, or extinct. As listeners, we inhabit songs and other vocalities and form relationships with voices, ‘moving in’ with them, living with them, ageing with them, perhaps leaving them. Such concerns have been amplified, as it were, by the growth of mechanical and digital reproduction and its attendant ability to carry voices out of place and out of time. Nevertheless, the fascination with vocal relocations has a long, multi-cultural history.
Keynote Speaker:
Naomi André
Professor, Department of Afroamerican and African Studies
University of Michigan
Respondent:
Katherine Meizel
Professor, College of Musical Arts
Bowling Green State University
We welcome contributions of all types, including:
  • 20-minute papers
  • panels
  • works in progress
  • artistic works or performances
  • digital posters; etc.
Possible areas of inquiry/creative practice might include, but are by no means limited to:
  • Immigrations and displacements: the voices of home, the voice of the other, nostalgic re-imaginings
  • Vocal appropriations: sampling, plunderphonics, musique concrète, lipsynching
  • Voicing nature: field recordings, anthropomorphic representations, extinctions
  • ‘Thrown’ voices: ventriloquism in practice and culture, the power of the “unseen, offscreen” voice (the acousmatic voice)
  • Matching the “ideal voice” to the “ideal body”: vocal mismatches and television reveals; overdubbing, ghost singing, playback singing across cultures; pop culture recording practices
  • Machine voices: artificial intelligence, “robot” voices, autotune and digital manipulations
  • Vocal fractures: aphasia, dementia, voice rehabilitation, interventions, simulations
  • Voices in deterrence and torture: The “Guantanamo Playlist”, crowd control, public policing
  • Distant encounters and raising the dead: telephony, recording, vocal dis/re-embodiment, spirit mediums
  • God-speak: scripture, glossolalia, the shift of sacred singing styles voicing the secular
  • Vocal overcrowding: social media, “silent” or “vocal” majorities, podcasting
  • Case studies: particular performers, performances, or techniques
Submissions Format:
  • Please send a 300-word abstract describing your paper or creative project along with a 100-word biographical statement to Drs Merrie Snell and Richard Elliott at vicariousvocalities@gmail.com by 13 May 2022.
  • We will aim to respond to submissions by the beginning of June 2022.
  • We encourage submissions from underrepresented individuals, communities, and identities.
Conference Format: Following previous editions of Vicarious Vocalities, Simulated Songs, the conference will take place online. It will run for two days—6 and 7 September 2022—in plenary mode, with a keynote and an evening event showcasing creative work (audio-visual works, performances, remixes, etc.). A website is also planned to showcase digital posters and creative work.
Vicarious Vocalities, Simulated Songs is organised in collaboration with the Centre for Interdisciplinary Voice Studies. The Centre promotes cross-disciplinary and cross-cultural research and discussion through symposia, publications (including the Journal of Interdisciplinary Voice Studies), performances, and other activities including the Vicarious Vocalities on-line biennial conference, the first of which was hosted by University of Portsmouth in September 2020. The Centre was founded by Ben Macpherson (University of Portsmouth, UK) and Konstantinos Thomaidis (University of Exeter, UK) in 2012.
For further information on the Centre and its activities, see: https://interdisciplinaryvoicestudies.wordpress.com/category/uncategorized/
You can view the Journal of Interdisciplinary Voice Studies here: https://www.intellectbooks.com/journal-of-interdisciplinary-voice-studies


Back  Top

3-3-5(2022-09-06) TSD 2022 - CALL FOR DEMONSTRATIONS and PARTICIPATION

*********************************************************
    TSD 2022 - CALL FOR DEMONSTRATIONS and PARTICIPATION
*********************************************************

Twenty-fifth International Conference on TEXT, SPEECH and DIALOGUE (TSD 2022)
              Brno, Czech Republic, 6-9 September 2022
                    http://www.tsdconference.org/


SUBMISSION OF DEMONSTRATION ABSTRACTS

Authors are invited to present current projects, developed software and
hardware, or other interesting material relevant to the topics of the
conference. The authors of the demonstrations should provide an
abstract not exceeding one page as plain text. The submission must be
made using the online form available on the conference web pages.

The accepted demonstrations will be presented during a special
Demonstration Session (see the Demo Instructions at
www.tsdconference.org).  Demonstrators can present their contribution
on their own notebook, with an Internet connection provided by the
organisers, or the organisers can prepare a PC with multimedia
support for them.

The demonstration abstracts will not appear in the Proceedings of TSD
2022; they will be published electronically on the conference website.


IMPORTANT DATES

August 8 2022 ............ Submission of demonstration abstracts
August 15 2022 ........... Notification of acceptance for
                           workshop papers and demonstrations
                           sent to the authors
September 6-9 2022 ....... Conference dates


KEYNOTE SPEAKERS

    Eneko Agirre, Universidad del País Vasco, Spain
    Anna Rogers, University of Copenhagen, Denmark

The conference is organized by the Faculty of Informatics, Masaryk
University, Brno, and the Faculty of Applied Sciences, University of
West Bohemia, Pilsen.  The conference is supported by the International
Speech Communication Association.

Venue: Brno, Czech Republic


TUTORIAL

The conference program will be supplemented with a hands-on tutorial

    Speech recognition on the edge
    Prof. Daniel Hromada; Hyungjoong Kim

    Keywords: Automatic Speech Recognition; Speech Command Classification;
        DeepSpeech; RaspberryPi; NVIDIA Jetson; Python; Linux; Websockets

    During this workshop, participants will be introduced to diverse ways
    in which speech-to-text (STT) inference can be realized on non-cloud,
    local (i.e. edge-computing) architectures.  Participants will acquire
    knowledge and competence concerning the intricacies and nuances of
    running two different types of ASR systems (DeepSpeech and Random
    Forests) on three different hardware architectures: RaspberryPi Zero
    (armv6), RaspberryPi 4 (armv7 without CUDA) and NVIDIA Jetson Xavier
    (armv8 / aarch64 with CUDA).  Thus, the hands-on workshop participants
    will experience the transformation of all three hardware platforms
    into low-cost local STT inference engines.
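
    As a purely illustrative example of this kind of on-device inference
    (not part of the tutorial materials), the Python sketch below
    transcribes a short recording locally with the Mozilla DeepSpeech
    package; the model, scorer and audio file names are placeholder
    assumptions.

        # Minimal sketch of local (edge) speech-to-text inference with Mozilla DeepSpeech.
        # All file names are placeholders; audio is expected as 16 kHz, 16-bit mono PCM.
        import wave
        import numpy as np
        import deepspeech

        model = deepspeech.Model("deepspeech-0.9.3-models.pbmm")      # acoustic model
        model.enableExternalScorer("deepspeech-0.9.3-models.scorer")  # optional language model

        with wave.open("audio.wav", "rb") as w:
            audio = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)

        print(model.stt(audio))  # transcription runs entirely on the local device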


TSD SERIES

The TSD series has evolved into a prime forum for interaction between
researchers in both spoken and written language processing from all over
the world.  Proceedings of TSD form a book published by Springer-Verlag
in their Lecture Notes in Artificial Intelligence (LNAI) series.  TSD
Proceedings are regularly indexed by the Thomson Reuters Conference
Proceedings Citation Index/Web of Science.  Moreover, the LNAI series is
listed in all major citation databases such as DBLP, SCOPUS, EI, INSPEC
and COMPENDEX.


TOPICS

Topics of the conference will include (but are not limited to):

    Corpora and Language Resources (monolingual, multilingual,
    text and spoken corpora, large web corpora, disambiguation,
    specialized lexicons, dictionaries)

    Speech Recognition (multilingual, continuous, emotional
    speech, handicapped speaker, out-of-vocabulary words,
    alternative way of feature extraction, new models for
    acoustic and language modelling)

    Tagging, Classification and Parsing of Text and Speech
    (morphological and syntactic analysis, synthesis and
    disambiguation, multilingual processing, sentiment analysis,
    credibility analysis, automatic text labeling, summarization,
    authorship attribution)

    Speech and Spoken Language Generation (multilingual, high
    fidelity speech synthesis, computer singing)

    Semantic Processing of Text and Speech (information
    extraction, information retrieval, data mining, semantic web,
    knowledge representation, inference, ontologies, sense
    disambiguation, plagiarism detection, fake news detection)

    Integrating Applications of Text and Speech Processing
    (machine translation, natural language understanding,
    question-answering strategies, assistive technologies)

    Automatic Dialogue Systems (self-learning, multilingual,
    question-answering systems, dialogue strategies, prosody in
    dialogues)

    Multimodal Techniques and Modelling (video processing, facial
    animation, visual speech synthesis, user modelling, emotions
    and personality modelling)

Papers on processing of languages other than English are strongly
encouraged.


PROGRAM COMMITTEE

    Elmar Noeth, Germany (general chair)
    Rodrigo Agerri, Spain
    Eneko Agirre, Spain
    Vladimir Benko, Slovakia
    Archna Bhatia, USA
    Jan Cernocky, Czech Republic
    Simon Dobrisek, Slovenia
    Kamil Ekstein, Czech Republic
    Karina Evgrafova, Russia
    Yevhen Fedorov, Ukraine
    Volker Fischer, Germany
    Darja Fiser, Slovenia
    Lucie Flek, Germany
    Bjorn Gamback, Norway
    Radovan Garabik, Slovakia
    Alexander Gelbukh, Mexico
    Louise Guthrie, USA
    Tino Haderlein, Germany
    Jan Hajic, Czech Republic
    Eva Hajicova, Czech Republic
    Yannis Haralambous, France
    Hynek Hermansky, USA
    Jaroslava Hlavacova, Czech Republic
    Ales Horak, Czech Republic
    Eduard Hovy, USA
    Denis Jouvet, France
    Maria Khokhlova, Russia
    Aidar Khusainov, Russia
    Daniil Kocharov, Russia
    Miloslav Konopik, Czech Republic
    Ivan Kopecek, Czech Republic
    Valia Kordoni, Germany
    Evgeny Kotelnikov, Russia
    Pavel Kral, Czech Republic
    Siegfried Kunzmann, USA
    Nikola Ljubesic, Croatia
    Natalija Loukachevitch, Russia
    Bernardo Magnini, Italy
    Oleksandr Marchenko, Ukraine
    Vaclav Matousek, Czech Republic
    Roman Moucek, Czech Republic
    Agnieszka Mykowiecka, Poland
    Hermann Ney, Germany
    Joakim Nivre, Sweden
    Juan Rafael Orozco-Arroyave, Colombia
    Karel Pala, Czech Republic
    Maciej Piasecki, Poland
    Josef Psutka, Czech Republic
    James Pustejovsky, USA
    German Rigau, Spain
    Paolo Rosso, Spain
    Leon Rothkrantz, The Netherlands
    Anna Rumshisky, USA
    Milan Rusko, Slovakia
    Pavel Rychly, Czechia
    Mykola Sazhok, Ukraine
    Odette Scharenborg, The Netherlands
    Pavel Skrelin, Russia
    Pavel Smrz, Czech Republic
    Petr Sojka, Czech Republic
    Georg Stemmer, Germany
    Marko Robnik Sikonja, Slovenia
    Marko Tadic, Croatia
    Jan Trmal, Czechia
    Tamas Varadi, Hungary
    Zygmunt Vetulani, Poland
    Aleksander Wawer, Poland
    Pascal Wiggers, The Netherlands
    Marcin Wolinski, Poland
    Alina Wroblewska, Poland
    Victor Zakharov, Russia
    Jerneja Zganec Gros, Slovenia


FORMAT OF THE CONFERENCE

The conference program will include presentations of invited papers,
oral presentations, and poster/demonstration sessions. Papers will be
presented in plenary or topic-oriented sessions. After two COVID years,
the conference is planned to be held on-site again.

Social events including a trip in the vicinity of Brno will allow
for additional informal interactions.


OFFICIAL LANGUAGE

The official language of the conference is English.


ACCOMMODATION

The organizing committee has arranged discounts on accommodation in
the 4-star hotel at the conference venue. The current prices of the
accommodation are available at the conference website.


ADDRESS

All correspondence regarding the conference should be
addressed to
   
    Ales Horak, TSD 2022
    Faculty of Informatics, Masaryk University
    Botanicka 68a, 602 00 Brno, Czech Republic
    phone: +420-5-49 49 18 63
    fax: +420-5-49 49 18 20
    email: tsd2022@tsdconference.org

The official TSD 2022 homepage is: http://www.tsdconference.org/


LOCATION

Brno is the second largest city in the Czech Republic with a
population of almost 400,000 and is the country's judicial and
trade-fair center. Brno is the capital of South Moravia, which is
located in the south-east part of the Czech Republic and is known
for a wide range of cultural, natural, and technical sights.
South Moravia is a traditional wine region. Brno has been a Royal
City since 1347 and with its six universities it forms a cultural
center of the region.

Brno can be reached easily by direct flights from London, and by
trains or buses from Vienna (150 km) or Prague (230 km).

For participants with some extra time, nearby places may also be
of interest.  Local ones include: Brno Castle, now called
Spilberk, Veveri Castle, the Old and New City Halls, the
Augustine Monastery with St. Thomas Church and the crypt of the
Moravian Margraves, the Church of St. James, the Cathedral of
St. Peter & Paul, the Carthusian Monastery in Kralovo Pole, and
the famous Villa Tugendhat designed by Mies van der Rohe, along
with other important buildings of interwar Czech architecture.

For those willing to venture out of Brno, the Moravian Karst with
the Macocha Chasm and Punkva Caves, the battlefield of the Battle
of the Three Emperors (Napoleon, Tsar Alexander of Russia and
Emperor Franz of Austria - the Battle of Austerlitz), the Chateau
of Slavkov (Austerlitz), Pernstejn Castle, Buchlov Castle, Lednice
Chateau, Buchlovice Chateau, Letovice Chateau, Mikulov with one of
the largest Jewish cemeteries in Central Europe, Telc - a town on
the UNESCO heritage list - and many others are all within easy
reach.

Back  Top

3-3-6(2022-09-18) CfP Special session on Inclusive and Fair Speech Technologies at Interspeech 2022, Incheon, Korea

Call for papers:
Inclusive and Fair Speech Technologies
Special Session at Interspeech 2022
https://sites.google.com/view/fair-speech-interspeech22/

September 18 - 22, 2022
Incheon, South Korea

______________

Automatic speech recognition systems have dramatically improved over the past decade thanks to the advances brought by deep learning and the effort on large-scale data collection. For some groups of people, however, speech technology works less well, maybe because their speech patterns differ significantly from the standard dialect (e.g., because of regional accent), because of intra-group heterogeneity (e.g., speakers of regional African American dialects; second-language learners; and other demographic aspects such as age, gender, or race), or because the speech pattern of each individual in the group exhibits a large variability (e.g., people with severe disabilities).

The goal of this special session is (1) to discuss these biases and propose methods for making speech technologies more useful to heterogeneous populations and (2) to increase academic and industry collaborations to reach these goals.

Such methods include:
  • analysis of performance biases among different social/linguistic groups in speech technology,
  • new methods to mitigate these differences,
  • new approaches for data collection, curation and coding,
  • new algorithmic training criteria,
  • new methods for envisioning speech technology task descriptions and design criteria.
Moreover, the special session aims to foster cross-disciplinary collaboration between fairness and personalization research, which has the potential to improve both customer experiences and algorithmic fairness. The special session will bring together experts from both fields to advance the cross-disciplinary study of fairness and personalization, e.g., fairness-aware personalization.

______________

Important Dates:
Paper submission deadline: March 21, 2022, 23:59, AoE.
Paper update deadline: March 28, 2022, 23:59, AoE.
Interspeech conference dates: September 18 to 22, 2022.

______________

Author Guidelines:

Papers have to be submitted following the same schedule and procedure as the main conference, and will undergo the same review process.
Submit your papers here: www.softconf.com/m/interspeech2022 and select the 'Submission Topic' 14.5 to include your work in this session.

 ______________

Organizers:

Laurent Besacier, Naver Labs Europe, France
Keith Burghardt, USC Information Sciences Institute, USA
Alice Coucke, Sonos Inc., France
Mark Allan Hasegawa-Johnson, University of Illinois, USA
Peng Liu, Amazon Alexa, USA
Anirudh Mani, Amazon Alexa, USA
Mahadeva Prasanna, IIT Dharwad, India
Priyankoo Sarmah, IIT Guwahati, India
Odette Scharenborg, Delft University of Technology, the Netherlands
Tao Zhang, Amazon Alexa, USA 
 
-- 
Alice Coucke
Head of Machine Learning Research | Sonos Voice Experience
Back  Top

3-3-8(2022-09-18) CfP Special session on Trustworthy Speech Processing at Interspeech 22

 

We're organizing a special session on Trustworthy Speech Processing at Interspeech 22, inviting papers exploring topics from trustworthy machine learning (such as privacy, fairness, bias mitigation, etc.) within the realm of speech processing. Can you please include this CFP in your next newsletter, and forward to any relevant lists if possible?

 

Best,

Organizing team:

Anil Ramakrishna, Amazon Inc.

Shrikanth Narayanan, University of Southern California

Rahul Gupta, Amazon Inc.

Isabel Trancoso, University of Lisbon

Rita Singh, Carnegie Mellon University

 

======================================================================

Call for papers:

Trustworthy Speech Processing (TSP)

Special Session at Interspeech 22

trustworthyspeechprocessing.github.io

September 18 - 22, 2022

Incheon, South Korea

 

Given the ubiquity of Machine Learning (ML) systems and their relevance in daily life, it is important to ensure private and safe handling of data alongside equity in human experience. These considerations have gained considerable interest in recent times under the umbrella of Trustworthy ML. Speech processing in particular presents a unique set of challenges, given the rich information carried in linguistic and paralinguistic content, including speaker traits, interaction and state characteristics. This special session on Trustworthy Speech Processing (TSP) was created to bring together new and experienced researchers working on trustworthy ML and speech processing.

 

We invite novel and relevant submissions from both academic and industrial research groups showcasing theoretical and empirical advancements in TSP. Topics of interest cover a variety of papers centered on speech processing, including (but not limited to):

 

* Differential privacy

* Federated learning

* Ethics in speech processing

* Model interpretability

* Quantifying & mitigating bias in speech processing

* New datasets, frameworks and benchmarks for TSP

* Discovery and defense against emerging privacy attacks

* Trustworthy ML in applications of speech processing like ASR

 

======================================================================

Important Dates:

Paper submission deadline: March 21, 2022, 23:59, Anywhere on Earth.

Paper update deadline: March 28, 2022, 23:59, Anywhere on Earth.

Author notification: June 13, 2022.

Interspeech conference dates: September 18 to 22, 2022.

 

======================================================================

Author Guidelines:

Submissions for TSP will follow the same schedule and procedure as the main conference. Submit your papers here: www.softconf.com/m/interspeech2022 (select option #14.13 as the submission topic).

 

Back  Top

3-3-9(2022-09-18) CfP Spoofing-Aware Speaker Verification Challenge, Incheon, Korea
We are thrilled to announce the Spoofing-Aware Speaker Verification Challenge. While spoofing countermeasures, promoted within the sphere of the ASVspoof challenge series, can help to protect the reliability of automatic speaker verification (ASV) in the face of spoofing, they have been developed as independent subsystems for a fixed ASV subsystem. Better performance can be expected when countermeasures and ASV subsystems are both optimised to operate in tandem.
 
The first Spoofing-Aware Speaker Verification (SASV) 2022 challenge aims to encourage the development of original solutions involving, but not limited to:
 
- back-end fusion of pre-trained automatic speaker verification and pre-trained audio spoofing countermeasure subsystems;
- integrated spoofing-aware automatic speaker verification systems that have the capacity to reject both non-target and spoofed trials.
 
We warmly invite the submission of general contributions in this direction. The Interspeech 2022 Spoofing-Aware Automatic Speaker Verification special session also incorporates a challenge, SASV 2022. Participants are encouraged to evaluate their solutions using the SASV benchmarking framework, which comprises a common database, protocol, and evaluation metric. Further details and resources can be found on the SASV challenge website.
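
As a purely illustrative sketch of the back-end fusion idea mentioned above (combining scores from pre-trained ASV and countermeasure subsystems), the snippet below fuses two subsystem scores with a simple weighted sum; the weights, threshold and variable names are arbitrary assumptions for illustration and are not the challenge baseline or an official recipe.

    # Illustrative score-level fusion of an automatic speaker verification (ASV)
    # score and a spoofing countermeasure (CM) score for a single trial.
    # The weighted-sum rule and threshold are arbitrary assumptions for this sketch.
    def sasv_score(asv_score: float, cm_score: float, w: float = 0.5) -> float:
        """Fused score; higher supports accepting the trial as a bona fide target."""
        return w * asv_score + (1.0 - w) * cm_score

    def accept(asv_score: float, cm_score: float, threshold: float = 0.0) -> bool:
        """Accept only if the fused score reaches the decision threshold."""
        return sasv_score(asv_score, cm_score) >= threshold

    # A target-like ASV score combined with a spoof-like CM score is rejected.
    print(accept(asv_score=2.1, cm_score=-3.5))  # False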
 
 
Schedule:
- January 19, 2022: Release of the evaluation plan
- March 10, 2022: Results submission
- March 14, 2022: Release of participant ranks
- March 21, 2022: INTERSPEECH Paper submission deadline
- March 28, 2022: INTERSPEECH Paper update deadline
- June 13, 2022: INTERSPEECH Author notification
- September 18-22, 2022: SASV challenge special session at INTERSPEECH
 
To participate, please register your interest at https://forms.gle/htoVnog34kvs3as56
 
For further information, please contact us at sasv.challenge@gmail.com.
 
We are looking forward to hearing from you.
 
Kind regards,
 
The SASV Challenge 2022 Organisers
Back  Top

3-3-10(2022-09-23) 2nd Symposium on Security and Privacy in Speech Communication joint with 2nd Challenge Workshop (INTERSPEECH 2022 satellite event)

  CALL FOR PAPERS

=========================================

  • 2nd Symposium on Security and Privacy in Speech Communication joint with 2nd VoicePrivacy Challenge Workshop (INTERSPEECH 2022 satellite event)
  • https://symposium2022.spsc-sig.org/

  • May 18 – Long paper submission deadline 
  • June 15 – VoicePrivacy Challenge and short paper submission deadline
  • September 23-24 – Workshop

=========================================

The second edition of the Symposium on Security & Privacy in Speech Communication (SPSC), this year combined with the 2nd VoicePrivacy Challenge workshop, focuses on speech and voice as the media through which we express ourselves. As speech communication can be used to command virtual assistants, to transport emotion or to identify oneself, the symposium tries to answer the question of how we can strengthen security and privacy for speech representation types in user-centric human/machine interaction. Interdisciplinary exchange is therefore in high demand, and the symposium aims to bring together researchers and practitioners across multiple disciplines, including signal processing, cryptography, security, human-computer interaction, law, and anthropology. The SPSC Symposium addresses interdisciplinary topics.

For more details, see https://symposium2022.spsc-sig.org/home/_cfp/CFP_SPSC-Symposium-2022.pdf

=== Important dates

 

  • May 18 – Long paper submission deadline 
  • June 15 – VoicePrivacy Challenge paper submission deadline 
  • June 15 – Short paper submission deadline 
  • July 1 – Author notification
  • July 31 – VoicePrivacy Challenge results and system description submission deadline
  • September 5 – Final paper submission
  • September 23-24 – SPSC Symposium at the Incheon National University, Korea

===  Topics of interest

Topics regarding the technical perspective include:

  • Privacy-preserving speech communication
    • speech recognition and spoken language processing
    • speech perception, production and acquisition
    • speech synthesis and spoken language generation
    • speech coding and enhancement
    • speaker and language identification
    • phonetics, phonology and prosody
    • paralinguistics in speech and language
  • Cybersecurity
    • privacy engineering and secure computation
    • network security and adversarial robustness
    • mobile security
    • cryptography
    • biometrics
  • Machine learning
    • federated learning
    • disentangled representations
    • differential privacy
  • Natural language processing
    • web as corpus, resources and evaluation
    • tagging, summarization, syntax and parsing
    • question answering, discourse and pragmatics
    • machine translation and document analysis
    • linguistic theories and psycholinguistics
    • inference of semantics and information extraction

Topics regarding the humanities’ view include:

  • Human-computer interfaces (speech as medium)
    • usable security and privacy
    • ubiquitous computing
    • pervasive computing and communication
    • cognitive science
  • Ethics and law
    • privacy and data protection
    • media and communication
    • identity management
    • electronic mobile commerce
    • data in digital media
  • Digital humanities
    • acceptance and trust studies
    • user experience research on practice
    • co-development across disciplines
    • data-citizenship
    • situated ethics
    • STS perspectives

We welcome contributions on related topics, as well as progress reports, project disseminations, theoretical discussions, and “work in progress”. There is also a dedicated PhD track. In addition, participants from academia, industry, and public institutions, as well as interested students, are welcome to attend the conference without having to make their own contribution. All accepted submissions will appear in the conference proceedings published in the ISCA Archive. The workshop will take place mainly in person at Incheon National University (Korea), with additional support for participants who wish to join virtually.

===  Submission

Papers intended for the SPSC Symposium should be up to eight pages of text. The length should be chosen appropriately to present the topic to an interdisciplinary community. Paper submissions must conform to the format defined in the paper preparation guidelines and as detailed in the author’s kit. Papers must be submitted via the online paper submission system. The working language of the conference is English, and papers must be written in English.

===  Reviews

At least three single-blind reviews will be provided, and we aim to obtain feedback from interdisciplinary experts for each submission. The review criteria applied to regular papers will be adapted for VoicePrivacy Challenge papers to be more in keeping with systems descriptions and results.

Back  Top

3-3-11(2022-09-23) Voice Privacy Challenge, Incheon, South Korea

VoicePrivacy 2022 Challenge
http://www.voiceprivacychallenge.org

  • Challenge paper submission deadline: 15 June 2022
  • Results and paper description submission deadline: 31 July 2022
  • ISCA workshop (Incheon, Korea in conjunction with INTERSPEECH 2022): 23-24 September 2022

---------------------------------------------------------------------------------------------------------------

 Dear colleagues,

registration for the VoicePrivacy 2022 Challenge continues!

The task is to develop a voice anonymization system for speech data which conceals the speaker’s voice identity while protecting linguistic content, paralinguistic attributes, intelligibility and naturalness.
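
For readers new to the task, the toy sketch below applies a single global pitch shift to an utterance. This is only a naive illustration of signal-level voice modification (using the librosa and soundfile packages; file names and the shift amount are placeholder assumptions) and is in no way a baseline or recommended anonymization system for the challenge.

    # Toy illustration only: a global pitch shift crudely alters speaker characteristics
    # while leaving the linguistic content intact. Real anonymization systems are far
    # more sophisticated; file names below are placeholders.
    import librosa
    import soundfile as sf

    y, sr = librosa.load("input.wav", sr=16000)                 # load utterance at 16 kHz
    y_anon = librosa.effects.pitch_shift(y, sr=sr, n_steps=-4)  # shift pitch down 4 semitones
    sf.write("output_anonymized.wav", y_anon, sr)               # write the modified speech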

The VoicePrivacy 2022 Challenge Evaluation Plan: https://www.voiceprivacychallenge.org/vp2020/docs/VoicePrivacy_2022_Eval_Plan_v1.0.pdf

VoicePrivacy 2022 is the second edition, which will culminate in a joint workshop held in Incheon, Korea in conjunction with INTERSPEECH 2022 and in cooperation with the ISCA Symposium on Security and Privacy in Speech Communication.

Registration and subscription: see the 'Participate' page on the VoicePrivacy 2022 Challenge website.

Back  Top

3-3-12(2022-10-10) 5th International Workshop on Multimedia Content Analysis in Sports (MMSports'22) @ ACM Multimedia, Lisbon, Portugal

Call for Papers

-------------------

5th International Workshop on Multimedia Content Analysis in Sports (MMSports'22) @ ACM Multimedia, October 10-14, 2022, Lisbon, Portugal

 

We'd like to invite you to submit your paper proposals for the 5th International Workshop on Multimedia Content Analysis in Sports to be held in Lisbon, Portugal together with ACM Multimedia 2022. The ambition of this workshop is to bring together researchers and practitioners from different disciplines to share ideas on current multimedia/multimodal content analysis research in sports. We welcome multimodal-based research contributions as well as best-practice contributions focusing on the following (and similar, but not limited to) topics:

 

- annotation and indexing in sports 

- tracking people/ athlete and objects in sports

- activity recognition, classification, and evaluation in sports

- event detection and indexing in sports

- performance assessment in sports

- injury analysis and prevention in sports

- data driven analysis in sports

- graphical augmentation and visualization in sports

- automated training assistance in sports

- camera pose and motion tracking in sports

- brave new ideas / extraordinary multimodal solutions in sports

- personal virtual (home) trainers/coaches in sports

- datasets in sports 

 

Submissions can be of varying length, from 4 to 8 pages, plus additional pages for references. There is no distinction between long and short papers, but the authors may themselves decide on the appropriate length of their paper. All papers will undergo the same review process and review period.

 

Please refer to the workshop website for further information: 

http://mmsports.multimedia-computing.de/mmsports2022/index.html

 

IMPORTANT DATES

Submission Due:                            July 4, 2022 

Acceptance Notification:             July 29, 2022

Camera Ready Submission:         August 21, 2022 

Workshop Date:                            TBA; either Oct 10 or Oct 14, 2022

 

 

Challenges

--------------

This year, MMSports proposes a competition where participants will compete on state-of-the-art problems applied to real-world, sport-specific data. The competition consists of 4 individual challenges, each of which is sponsored by SportRadar with a US$1,000.00 prize. Each challenge comes with a toolkit describing the task, the dataset and the metrics on which participants will be evaluated.

The challenges are hosted on EvalAI where participants will submit the prediction of their model on an evaluation set for which labels are kept secret. Leaderboards will display the ranking for each challenge.

More information can be found at http://mmsports.multimedia-computing.de/mmsports2022/challenge.html

 

ACM MMSports’22 Chairs: Thomas Moeslund, Rainer Lienhart and Hideo Saito

 

 

Back  Top

3-3-13(2022-10-12) French Cross-Domain Dialect Identification (FDI) task @VarDial2022, Gyeongju, South Korea
We are organizing the French Cross-Domain Dialect Identification (FDI) task @VarDial2022.
 

In the 2022 French Dialect Identification (FDI) shared task, participants have to train a model on news samples collected from a set of publication sources and evaluate it on news samples collected from a different set of publication sources. Not only are the sources different, but so are the topics. Participants therefore have to build a model for a cross-domain, 4-way classification-by-dialect task, in which a classification model is required to discriminate between the French (FH), Swiss (CH), Belgian (BE) and Canadian (CA) dialects across different news samples. The corpus is divided into training, validation and test splits, such that the publication sources and topics are distinct across splits. The training set contains 358,787 samples, the development set is composed of 18,002 samples, and another set of 36,733 samples is kept for the final evaluation.
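
As a minimal illustration of one way to approach such a 4-way text classification task, the sketch below trains character n-gram TF-IDF features with logistic regression in scikit-learn; the file names and the tab-separated "text<TAB>label" layout are assumptions for this sketch, not the official data format or the task baseline.

    # Illustrative dialect classifier: char n-gram TF-IDF + logistic regression.
    # File names and the tab-separated "text<TAB>label" layout are assumptions.
    import csv
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import f1_score
    from sklearn.pipeline import make_pipeline

    def load(path):
        with open(path, encoding="utf-8") as f:
            rows = [r for r in csv.reader(f, delimiter="\t") if len(r) >= 2]
        return [r[0] for r in rows], [r[1] for r in rows]

    train_x, train_y = load("train.tsv")
    dev_x, dev_y = load("dev.tsv")

    clf = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 4), min_df=2),
        LogisticRegression(max_iter=1000),
    )
    clf.fit(train_x, train_y)
    print("macro-F1 on dev:", f1_score(dev_y, clf.predict(dev_x), average="macro"))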

Important Dates:
- Training set release: May 20, 2022
- Test set release: June 30, 2022
- Submissions due: July 6, 2022

Link: https://sites.google.com/view/vardial-2022/shared-tasks#h.mj5vivaubw8r
 
We invite you to participate!
Have a nice day.
Back  Top

3-3-14(2022-10-17) Cf Posters papers, ISMAR 2022, Singapore

CALL FOR POSTER PAPERS
https://ismar2022.org/call-for-posters/

OVERVIEW
ISMAR 2022, the premier conference for Augmented Reality (AR) and Mixed Reality (MR), will be held on October 17-21, 2022. Note that ISMAR offers three distinct calls for journals, papers, and posters. This call is for submission to the conference poster track. See the ISMAR website for more information.

IMPORTANT DEADLINES
Poster Paper Submission Deadline: June 20th, 2022
Notification: August 15th, 2022
Camera-ready version: August 22nd, 2022
ISMAR is responding to the increasing commercial and research activities related to AR, MR and Virtual Reality (VR) by continuing the expansion of its scope over the past several years. ISMAR 2022 will cover the full range of technologies encompassed by the MR continuum, from interfaces in the real world to fully immersive experiences.

ISMAR invites research contributions that advance AR/VR/MR technologies, collectively referred to as eXtended Reality (XR) technologies, and are relevant to the community.

The poster session is one of the highlights of ISMAR, where the community engages in a discussion about the benefits and challenges of XR in other research and application domains.

SUBMISSION DETAILS
We welcome paper submissions from 2-6 pages, including the list of references. Poster papers will be reviewed on the basis of an extended abstract, which can contain smaller contributions, late breaking developments or in-progress work.

Please note that ISMAR further distinguishes between non-archival poster papers of 2 pages and archival poster papers of 3 or more pages. This means that authors of 2-page poster papers can resubmit longer versions of their accepted work, with additional details, at later ISMAR conferences.

  • All submissions will be accepted or rejected as poster papers.
  • All accepted papers will be archived in the IEEE Xplore digital library.
  • At least one of the authors must register for and attend the ISMAR 2022 conference to present the poster.
The Poster track is co-aligned with the Conference Paper track, which may accept some Conference Paper submissions as posters based on the merit of their contribution. Detailed submission and review guidelines are available on the conference website, in the Guidelines section.

Note that all paper submissions must be in English.

ISMAR 2022 Poster Chairs
poster_chairs@ismar2022.org

Back  Top

3-3-15(2022-11-07) 24th ACM International Conference on Multimodal Interaction (ICMI 2022), Bengaluru, India (updated)

*********************************************************************
ICMI 2022
24th ACM International Conference on Multimodal Interaction

https://icmi.acm.org/2022/
7-11 Nov 2022, Bengaluru, India
*********************************************************************

CALL FOR DEMONSTRATION AND EXHIBIT PAPERS

We invite you to submit your proposals for demonstrations and exhibits to be held during the 24th ACM International Conference on Multimodal Interaction (ICMI 2022), located in Bengaluru (Bangalore), India, November 7-11th, 2022. This year’s conference theme is “Intelligent and responsible Embodied Conversational Agents (ECAs) in the multi-lingual real world”.

 

The ICMI 2022 Demonstrations & Exhibits session is intended to provide a forum to showcase innovative implementations, systems and technologies demonstrating new ideas about interactive multimodal interfaces. It can also serve as a platform to introduce commercial products. Proposals may be of two types: demonstrations or exhibits. The main difference is that demonstrations include a 2-3 page paper in one column, which will be included in the ICMI main proceedings, while the exhibits only need to include a brief outline (no more than two pages in one column; not included in ICMI proceedings). We encourage both the submission of early research prototypes and interesting mature systems. In addition, authors of accepted regular research papers may be invited to participate in the demonstration sessions as well.

 
Demonstration Submission

Please submit a 2-3 page description of the demonstration in a single column format through the main ICMI conference management system (https://new.precisionconference.com/sigchi). Demonstration description(s) must be in PDF format, according to the ACM conference format, of no more than 3 pages in a single column format including references. For instructions and links to the templates, please see the Guidelines for Authors.

 

Demonstration proposals should include a description with photographs and/or screen captures of the demonstration. Demonstration submissions should be accompanied by a video of the proposed demo (no larger than 200MB), which can include a set of slides (no more than 10 slides) in PowerPoint format.

 

The demo and exhibit paper submissions are not anonymous. However, all ACM rules and guidelines related to paper submission should be followed (e.g. plagiarism, including self-plagiarism).

 

The demonstration submissions will be peer reviewed, according to the following criteria: suitability as a demo, scientific or engineering feasibility of the proposed demo system, application, or interactivity, alignment with the conference focus, potential to engage the audience, and overall quality and presentation of the written proposal. Authors are encouraged to address such criteria in their proposals, along with preparing the papers mindful of the quality and rigorous scientific expectations of an ACM publication.

 

The demo program will include the accepted proposals and may additionally include invited demos from among regular papers accepted for presentation at the conference. Please note that the accepted demos will be included in the ICMI main proceedings.

 
Exhibit Submission

Exhibit proposals should be submitted following the same guidelines, formatting, and due dates as for demonstration proposals. Exhibit proposals must be shorter in length (up to two pages), and are more suitable for showcasing mature systems. Like demos, submissions for exhibits should be accompanied by a video (no larger than 200MB), which can include a set of slides (no more than 10 slides) in PowerPoint format. Exhibits will not have a paper published in the ICMI 2022 proceedings.

 
Facilities

Once accepted, demonstrators and video presenters will be provided with a table, poster board, power outlet and wireless (shared) Internet. Demo and video presenters are expected to bring with them everything else needed for their demo and video presentations, such as hardware, laptops, sensors, PCs, etc. However, if you have special requests such as a larger space, special lighting conditions and so on, we will do our best to arrange them.

 

Important note for the authors: The official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks prior to the first day of the conference. The official publication date affects the deadline for any patent filings related to published work.

 
Attendance

At least one author of all accepted Demonstrations and Exhibits submissions must register for and attend the conference, including the conference demonstrations and exhibits session(s).

 
Important Dates

Submission of demo and exhibit proposals: July 26, 2022
Demo and exhibit notification of acceptance: August 1, 2022
Submission of demo final papers: August 15, 2022

 

For the latest information re: author guidelines, important dates, facilities, attendance requirements, etc., please see https://icmi.acm.org/2022/call-for-demonstrations-and-exhibits/.

 

For any further questions, contact the Demonstrations and Exhibits co-chairs: Dan Bohus and Ramanathan Subramanian (icmi2022-demo-chairs@acm.org).

 

 

 

Back  Top

3-3-16(2022-11-07) Doctoral Consortium at ICMI- Call for Contributions

Doctoral Consortium - Call for Contributions 

The goal of the ICMI Doctoral Consortium (DC) is to provide PhD students with an opportunity to present their work to a group of mentors and peers from a diverse set of academic and industrial institutions, to receive feedback on their doctoral research plan and progress, and to build a cohort of young researchers interested in designing and developing multimodal interfaces and interaction. We invite students from all PhD granting institutions who are in the process of forming or carrying out a plan for their PhD research in the area of designing and developing multimodal interfaces. 

Who should apply? 

While we encourage applications from students at any stage of doctoral training, the doctoral consortium will benefit most the students who are in the process of forming or developing their doctoral research. These students will have passed their qualifiers or have completed the majority of their coursework, will be planning or developing their dissertation research, and will not be very close to completing their dissertation research. Students from any PhD granting institution whose research falls within designing and developing multimodal interfaces and interaction are encouraged to apply. 

Why should you attend? 

The DC provides an opportunity to build a social network that includes the cohort of DC students, senior students, recent graduates, and senior mentors. Not only is this an opportunity to get feedback on research directions, it is also an opportunity to learn more about the process and to understand what comes next. We aim to connect you with a mentor who will give specific feedback on your research. We specifically aim to create an informal setting where students feel supported in their professional development. 

Submission Guidelines 

Graduate students pursuing a PhD degree in a field related to designing multimodal interfaces should submit the following materials: 

  1. Extended Abstract: Please describe your PhD research plan and progress as a seven-page paper in a single column format. The instructions and templates are on the following link: https://www.acm.org/publications/taps/word-template-workflow. Your extended abstract should follow the same outline, details, and format of the ICMI short papers. The submissions will not be anonymous. In particular, it should cover: 
    • The key research questions and motivation of your research; 
    • Background and related work that informs your research; 
    • A statement of hypotheses or a description of the scope of the technical problem; 
    • Your research plan, outlining stages of system development or series of studies; 
    • The research approach and methodology; 
    • Your results to date (if any) and a description of remaining work; 
    • A statement of research contributions to date (if any) and expected contributions of your PhD work; 
  2. Advisor Letter: A one-page letter of nomination from the student's PhD advisor. This letter is not a letter of support. Instead, it should focus on the student's PhD plan and how the Doctoral Consortium event might contribute to the student's PhD training and research. 
  3. CV: A two-page curriculum vitae of the student. 

All materials should be prepared in a single PDF format and submitted through the ICMI submission system. 


Important Dates 

Submission deadline: July 1, 2022
Notifications: July 29, 2022
Camera-ready: August 12, 2022

Review Process 

The Doctoral Consortium will follow a review process in which submissions will be evaluated by a number of factors including (1) the quality of the submission, (2) the expected benefits of the consortium for the student's PhD research, and (3) the student's contribution to the diversity of topics, backgrounds, and institutions, in order of importance. More particularly, the quality of the submission will be evaluated based on the potential contributions of the research to the field of multimodal interfaces and its impact on the field and beyond. Finally, we hope to achieve a diversity of research topics, disciplinary backgrounds, methodological approaches, and home institutions in this year's Doctoral Consortium cohort. We do not expect more than two students to be invited from each institution to represent a diverse sample. Women and other underrepresented groups are especially encouraged to apply.


Attendance 

All authors of accepted submissions are expected to attend the Doctoral Consortium and the main conference poster session. The attendees will present their work as a short talk or as a poster at the conference poster session. A detailed program for the Consortium and the participation guidelines will be available after the camera-ready deadline.


Questions? 

For more information and updates on the ICMI 2022 Doctoral Consortium, visit the Doctoral Consortium page of the main conference website (https://icmi.acm.org/2022/doctoral-consortium/). 

For further questions, contact the Doctoral Consortium co-chairs: 

  • Theodora Chaspari (chaspari@tamu.edu) 
  • Tanaya Guha (tanaya.guha@glasgow.ac.uk) 
 
Back  Top

3-3-17(2022-11-07) International Workshop on “Voice Assistant Systems in Team Interactions ‒ Implications, Best Practice, Applications, and Future Perspectives” VASTI 2022 @ICMI 2022

International Workshop on “Voice Assistant Systems in Team Interactions ‒ Implications, Best Practice, Applications, and Future Perspectives”
VASTI 2022

co-located with the ICMI 2022

https://vasti2022.mobileds.de/

Scope
The workshop encourages an interdisciplinary exchange among researchers focusing on multimodal interaction across a wide range of group research aspects, linguistic and acoustic perspectives, as well as dialogue management in relation to speech-based systems, e.g. voice assistants. The interdisciplinary collaboration between these research communities is currently rather loose. The workshop therefore aims to bridge the three research communities based on shared interests and to provide a platform for detailed discussions.

Generally, human beings are interactive and socially engaged, often communicating in either dyads or groups. During such interactions, each communication partner (human or technical) provides a variety of information, including general information/content as well as personal and relational information. These communication aspects are the focus of research on group interaction and multi-party interaction. In the social sciences, areas such as the interpersonal relationships of group members and the dynamics of group interaction, cohesion, and performance are investigated. These aspects are nowadays also considered in computer science and linguistics using automatic analyses. Unfortunately, these communities have started to collaborate only recently. In this sense, the workshop aims to strengthen these collaborations.

However, the advent of voice assistants and their increasingly wide distribution in particular provide an optimal testbed to combine the three communities and encourage interdisciplinary discussions highlighting contributions from each research perspective. This is especially true since, at a certain level of development, current voice assistance systems seem to set the expectation of human-like linguistic flexibility and complexity, which is disproportionate to the actual skills of the artificial agent. To enable future technical systems to act as conversational partners and behave naturally in group or dyadic multimodal interactions, it is necessary to combine knowledge and research approaches on the fundamental mechanisms of human speech perception and speech production from a cognitive, psycholinguistic point of view, as well as insights from interactional linguistics, discourse analysis and sociolinguistics, with phonetics, phonology and prosody in the context of spoken interaction with machines. This should be further combined with aspects of dialogue management and social signal processing to allow a holistic consideration of the users and user groups.


Topics

  • Voice Assistant Technology

  • Multimodal Interactions in Teams

  • Automatic Team Analyses
  • Multi-Party Interaction
  • Linguistics in Voice Assistance
  • Linguistics in Teams
  • Communication in Teams
  • Human Speech Perception
  • Multimodal Perception

Important dates:
Submission deadline: July 28, 2022
Notification of Acceptance: August 12, 2022
Camera ready: August 19, 2022
Workshop date: November 7, 2022

Submissions
Prospective authors are invited to submit full papers (8 pages: 7 pages
plus 1 reference page) and short papers (5 pages: 4 pages plus 1
reference page) following the ICMI 2022 LaTeX or Word templates, as
specified by ICMI 2021. All submissions should be anonymous. Accepted
papers will be published in the conference proceedings.

Venue
in conjunction with ICMI 2022 (intended to be onsite)

Organizers
Ronald Böck, University Magdeburg, Germany
Daniel Duran, Leibniz Zentrum für Angewandte Sprachwissenschaft, Germany
Ingo Siegert, University Magdeburg, Germany


Back  Top

3-3-18(2022-11-07) Late-breaking results @24th ACM International Conference on Multimodal Interaction (ICMI 2022), Bengaluru, India
CALL FOR LATE-BREAKING RESULTS
 
We invite you to submit your papers to the late-breaking results track of the 24th ACM International Conference on Multimodal Interaction (ICMI 2022), located in Bengaluru (Bangalore), India, November 7-11th, 2022. 

Based on the success of the LBR track at past ICMI conferences (2018-2021), the ACM International Conference on Multimodal Interaction (ICMI) 2022 continues soliciting submissions for the special venue titled Late-Breaking Results (LBR). The goal of the LBR venue is to provide a way for researchers to share emerging results at the conference. Accepted submissions will be presented in a poster session at the conference, and the extended abstract will be published in the new Adjunct Proceedings (Companion Volume) of the main ICMI Proceedings. Like similar venues at other conferences, the LBR venue is intended to allow sharing of ideas, getting formative feedback on early-stage work, and furthering collaborations among colleagues.
  • Highlights 
    • Submission deadline: August 12th, 2022
    • Notifications: September 9th, 2022
    • Camera-ready deadline: September 16th, 2022
    • Conference Dates: November 7-11, 2022
    • Submission format: Anonymized short paper (seven pages in a single column format, not including references), following the submission guidelines.
    • Selection process: Peer-Reviewed
    • Presentation format: Participation in the conference poster session
    • Proceedings: Included in Adjunct Proceedings and ACM Digital Library
    • LBR Co-chairs: Fabien Ringeval and Nikita Soni
Late-Breaking Results (LBR) submissions represent work such as preliminary results, provoking and current topics, novel experiences or interactions that may not have been fully validated yet, cutting-edge or emerging work that is still in exploratory stages, smaller-scale studies, or in general, work that has not yet reached a level of maturity expected for the full-length main track papers. However, LBR papers are still expected to bring a contribution to the ICMI community, commensurate with the preliminary, short, and quasi-informal nature of this track.

 

Accepted LBR papers will be presented as posters during the conference. This provides an opportunity for researchers to receive feedback on early-stage work, explore potential collaborations, and otherwise engage in exciting thought-provoking discussions about their work in an informal setting that is significantly less constrained than a paper presentation. The LBR (posters) track also offers those new to the ICMI community a chance to share their preliminary research as they become familiar with this field.
Late-Breaking Results papers appear in the Adjunct Proceedings (Companion Volume) of the ICMI Proceedings. Copyright is retained by the authors, and the material from these papers can be used as the basis for future publications as long as there are “significant” revisions from the original, as per the ACM and ACM SIGCHI policies.
Extended Abstract: An anonymized short paper (seven pages in a single column format, not including references). The instructions and templates are available at the following link: https://www.acm.org/publications/taps/word-template-workflow. The paper should be submitted in PDF format through the ICMI submission system in the “Late-Breaking Results” track. Due to the tight publication timeline, it is recommended that authors submit a very nearly finalized paper that is as close to camera-ready as possible, as there will be a very short timeframe for preparing the final camera-ready version and no deadline extensions can be granted.
Anonymization: Authors are instructed not to include author information in their submission. In order to help reviewers judge the relation of the LBR to prior work, authors should not remove or anonymize references to their own prior work. Instead, we recommend that authors obscure references to their own prior work by referring to it in the third person during submission. If desired, after acceptance, such references can be changed to first-person.
LBRs will be evaluated to the extent that they are presenting work still in progress, rather than complete work which is under-described in order to fit into the LBR format. The LBR track will undergo an external peer review process. Submissions will be evaluated by a number of factors including (1) the relevance of the work to ICMI, (2) the quality of the submission, and (3) the degree to which it “fits” the LBR track (e.g., in-progress results). More particularly, the quality of the submission will be evaluated based on the potential contributions of the research to the field of multimodal interfaces and its impact on the field and beyond. Authors should clearly justify how the proposed ideas can bring some measurable breakthroughs compared to the state-of-the-art of the field.
Similar rules for registration and attendance will apply to authors of LBR papers as to authors of regular papers. Further information will be made available later on the main page of the website.
For more information and updates on the ICMI 2022 Late-Breaking Results (LBR), visit the LBR page of the main conference website: https://icmi.acm.org/2022/index.php?id=cflbr.
For further questions, contact the LBR co-chairs (Fabien Ringeval and Nikita Soni) at icmi2022-latebreaking-chairs@acm.org

Back  Top

3-3-19(2022-11-14) IberSPEECH 2022, Granada, Spain

IberSPEECH’2022 will be held in Granada (Spain), from 14 to 16 November 2022. The IberSPEECH event –the sixth of its kind using this name– brings together the XII Jornadas en Tecnologías del Habla and the VIII Iberian SLTech Workshop events.

Following the tradition of previous editions, IberSPEECH’2022 will be a three-day event, planned to promote interaction and discussion. There will be a wide variety of activities: technical paper presentations, keynote lectures, presentations of projects, laboratory activities, recent PhD theses, entrepreneurship & discussion panels, and awards for the best theses and papers.

You can find all the information of this first call for papers at http://iberspeech2022.ugr.es/. More details will be available soon on this website.

 

Important Dates

Regular Papers

Submission opens: June 17th, 2022
Submission abstract deadline: July 8th, 2022
Submission full paper deadline: July 15th, 2022
Paper notifications sent: September 16th, 2022
Camera-ready paper due: September 25th, 2022

Special Sessions (Projects, Demos, PhD Theses & Entrepreneurship)

Proposals: October 7th, 2022
Full-Paper: October 14th, 2022

Albayzin Evaluations 2022

Registration deadline: September 4th, 2022.
Release of the evaluation data: September 5th, 2022.
Submission deadline (including system summary): October 16th, 2022.
Results distribution to the participants: October 24th, 2022.
Paper submission deadline: October 30th, 2022.
Iberspeech 2022 Albayzin Evaluations special session in Granada: November 15th, 2022.

Conference IBERSPEECH’2022

Registration opens: September 16th, 2022
Early Registration ends: October 18th, 2022
Conference Starts: Monday, November 14th, 2022
Conference Ends: Wednesday, November 16th, 2022

At least one author of each accepted paper must complete a full early registration for the conference.

 

Topics

The topics of interest regarding processing Iberian languages include, but are not limited to:

1. Speech technology and applications

1. Spoken language generation and synthesis

2. Speech and speaker recognition

3. Speaker diarization

4. Speech enhancement

5. Speech processing and acoustic event detection

6. Spoken language understanding

7. Spoken language interfaces and dialogue systems

8. Systems for information retrieval and information extraction from speech

9. Systems for speech translation

10. Applications for aged and handicapped persons

11. Applications for learning and education

12. Emotions recognition and synthesis

13. Language and dialect identification

14. Applications for learning and education

15. Speech technology and applications: other topics

 

2. Human speech production, perception, and communication

1. Linguistic, mathematical, and psychological models of language

2. Phonetics, phonology, and morphology

3. Pragmatics, discourse, semantics, syntax, and lexicon

4. Paralinguistic and nonlinguistic cues (e.g. emotion and expression)

5. Human speech production, perception, and communication: other topics

 

3. Natural language processing (NLP) and applications

1. Natural language generation and understanding

2. Retrieval and categorization of natural language documents

3. Summarization mono and multi-document

4. Extraction and annotation of entities, relations, and properties

5. Creation and processing of ontologies and vocabularies

6. Machine learning for natural language processing

7. Shallow and deep semantic analysis: textual entailment, anaphora resolution, paraphrasing

8. Multi-lingual processing for information retrieval and extraction

9. Natural language processing for information retrieval and extraction

10. Natural language processing (NLP) and applications: other topics

 

4. Speech, Language and Multimodality

1. Multimodal Interaction

2. Sign Language

3. Handwriting recognition

4. Speech, Language and Multimodality: other topics

 

5. Resources, standardization, and evaluation

1. Spoken language resources, annotation, and tools

2. Spoken language evaluation and standardization

3. NLP resources, annotation, tools

4. NLP evaluation and standardization

5. Multimodal resources, annotation and tools

6. Multimodal evaluation and standardization

7. Resources, standardization, and evaluation: other topics

 

Paper Submission

Regular papers must be written in English and submission will be online. Papers must be submitted in PDF following the Interspeech 2022 format. Paper length is 4 pages plus one additional page for references. There is no minimum length requirement for papers in the special sessions (project reviews and demos).

Upon acceptance, at least one author will be required to complete a full early registration and to present the paper at the conference.

 

Back  Top

3-3-20(2022-11-14) CfP SPECOM 2022, Gurugram, India (updated)

********************************************************************

SPECOM-2022 – FINAL CALL FOR PAPERS

********************************************************************

The conference has been relocated to India.


24th International Conference on Speech and Computer (SPECOM-2022)

November 14-16, 2022, KIIT Campus, Gurugram, India

Web: www.specom.co.in

ORGANIZER

The conference is organized by KIIT College of Engineering as a hybrid event, held both in Gurugram/New Delhi, India, and online.

CONFERENCE TOPICS

SPECOM attracts researchers, linguists and engineers working in the following areas of speech science, speech technology, natural language processing, and human-computer interaction:

Affective computing
Audio-visual speech processing
Corpus linguistics
Computational paralinguistics
Deep learning for audio processing
Feature extraction
Forensic speech investigations
Human-machine interaction
Language identification
Multichannel signal processing
Multimedia processing
Multimodal analysis and synthesis
Sign language processing
Speaker recognition
Speech and language resources
Speech analytics and audio mining
Speech and voice disorders
Speech-based applications
Speech driving systems in robotics
Speech enhancement
Speech perception
Speech recognition and understanding
Speech synthesis
Speech translation systems
Spoken dialogue systems
Spoken language processing
Text mining and sentiment analysis
Virtual and augmented reality
Voice assistants

OFFICIAL LANGUAGE

The official language of the event is English. However, papers on the processing of languages other than English are strongly encouraged.

FORMAT OF THE CONFERENCE

The conference program will include invited talks, oral presentations, and poster/demonstration sessions.

SUBMISSION OF PAPERS

Authors are invited to submit full papers of 8-14 pages formatted in the Springer LNCS style. Each paper will be reviewed by at least three independent reviewers (single-blind), and accepted papers will be presented either orally or as posters. Papers submitted to SPECOM must not be under review by any other conference or publication during the SPECOM review cycle, and must not have been previously published or accepted for publication elsewhere. Authors are asked to submit their papers via the online submission system: https://easychair.org/conferences/?conf=specom2022

PROCEEDINGS

The SPECOM proceedings will be published by Springer as a volume of the Lecture Notes in Artificial Intelligence (LNAI/LNCS) series, which is indexed in all major international citation databases.

IMPORTANT DATES (extended!)

August 16, 2022 .................. Submission of full papers
September 13, 2022 ........... Notification of acceptance
September 20, 2022 ........... Camera-ready papers
September 27, 2022 ........... Early registration
November 14-16, 2022 ....... Conference dates

GENERAL CHAIR/CO-CHAIR

Shyam S Agrawal - KIIT, Gurugram
Amita Dev - IGDTUW, Delhi

TECHNICAL CHAIR/CO-CHAIRS

S.R. Mahadeva Prasanna - IIT Dharwad
Alexey Karpov - SPC RAS
Rodmonga Potapova - MSLU
K. Samudravijaya - KL University

CONTACTS

All correspondence regarding the conference should be addressed to the SPECOM 2022 Secretariat.

E-mail: specomkiit@kiitworld.in
Web: www.specom.co.in
Back  Top

3-3-21(2022-12-13) CfP 18th Australasian International Conference on Speech Science and Technology (SST2022), Canberra, Australia
SST2022: CALL FOR PAPERS

The Australasian Speech Science and Technology Association is pleased to call for papers for the 18th Australasian International Conference on Speech Science and Technology (SST2022). SST is an international interdisciplinary conference designed to foster collaboration among speech scientists, engineers, psycholinguists, audiologists, linguists, speech/language pathologists and industrial partners.

- Location: Canberra, Australia (remote participation options will also be available)
- Dates: 13-16 December 2022
- Host Institution: Australian National University
- Deadline for tutorial and special session proposals: 8 April 2022
- Deadline for submissions: 17 June 2022
- Notification of acceptance: 31 August 2022
- Deadline for upload of revised submissions: 16 September 2022
- Website: www.sst2022.com

Submissions are invited in all areas of speech science and technology, including:

- Acoustic phonetics
- Analysis of paralinguistics in speech and language
- Applications of speech science and technology
- Audiology
- Computer assisted language learning
- Corpus management and speech tools
- First language acquisition
- Forensic phonetics
- Hearing and hearing impairment
- Languages of Australia and Asia-Pacific (phonetics/phonology)
- Low-resource languages
- Pedagogical technologies for speech
- Second language acquisition
- Sociophonetics
- Speech signal processing, analysis, modelling and enhancement
- Speech pathology
- Speech perception
- Speech production
- Speech prosody, emotional speech, voice quality
- Speech synthesis and speech recognition
- Spoken language processing, translation, information retrieval and summarization
- Speaker and language recognition
- Spoken dialog systems and analysis of conversation
- Voice mechanisms, source-filter interactions

We are inviting two categories of submission: 4-page papers (for oral or poster presentation, and publication in the proceedings), and 1-page detailed abstracts (for poster presentation only). Please follow the author instructions in preparing your submission.

We also invite proposals for tutorials, as 3-hour intensive instructional sessions to be held on the first day of the conference. In addition, we welcome proposals for special sessions, as thematic groupings of papers exploring specific topics or challenges. Interdisciplinary special sessions are particularly encouraged.

For any queries, please contact sst2022conf@gmail.com.
Back  Top

3-3-22(2023-01-04) SIVA workshop @ Waikoloa Beach Marriott Resort, Hawaii, USA.

CALL FOR PAPERS: SIVA'23
Workshop on Socially Interactive Human-like Virtual Agents
From expressive and context-aware multimodal generation of digital humans to understanding the social cognition of real humans

Submission (to be opened July 22, 2022):  https://cmt3.research.microsoft.com/SIVA2023
SIVA'23 workshop: January 4 or 5, 2023, Waikoloa, Hawaii, https://www.stms-lab.fr/agenda/siva/detail/
FG 2023 conference: January 4-8, 2023, Waikoloa, Hawaii, https://fg2023.ieee-biometrics.org/

OVERVIEW

Due to the rapid growth of virtual, augmented, and hybrid reality, together with spectacular advances in artificial intelligence, the ultra-realistic generation and animation of digital humans with human-like behaviors is becoming a major topic of interest. This complex endeavor requires modeling several elements of human behavior: the natural coordination of multimodal behaviors spanning text, speech, face, and body, and the contextualization of behavior in response to interlocutors of different cultures and motivations. The challenges are thus twofold: generating and animating coherent multimodal behaviors, and modeling the expressivity and contextualization of the virtual agent with respect to human behavior, as well as understanding and modeling how virtual agents adapt their behavior to increase human engagement. The aim of this workshop is to connect traditionally distinct communities (e.g., speech, vision, cognitive neuroscience, social psychology) to discuss the future of human interaction with human-like virtual agents. We expect contributions from the fields of signal processing, speech and vision, machine learning and artificial intelligence, perceptual studies, and cognitive science and neuroscience. Topics will range from multimodal generative modeling of virtual agent behaviors and speech-to-face and 2D/3D posture animation, to original research topics including style, expressivity, and context-aware animation of virtual agents. Moreover, controllable real-time virtual agent models can serve as state-of-the-art experimental stimuli and confederates in novel, groundbreaking experiments that advance our understanding of social cognition in humans. Finally, these virtual humans can be used to create virtual environments for medical purposes, including rehabilitation and training.

SCOPE

Topics of interest include but are not limited to:

+ Analysis of Multimodal Human-like Behavior
- Analysis and understanding of human multimodal behavior (speech, gesture, face)
- Creating datasets for the study and modeling of human multimodal behavior
- Coordination and synchronization of human multimodal behavior
- Analysis of style and expressivity in human multimodal behavior
- Cultural variability of social multimodal behavior

+ Modeling and Generation of Multimodal Human-like Behavior
- Multimodal generation of human-like behavior (speech, gesture, face)
- Face and gesture generation driven by text and speech
- Context-aware generation of multimodal human-like behavior
- Modeling of style and expressivity for the generation of multimodal behavior
- Modeling paralinguistic cues for multimodal behavior generation
- Few-shot or zero-shot transfer of style and expressivity
- Slightly-supervised adaptation of multimodal behavior to context

+ Psychology and Cognition of Multimodal Human-like Behavior
- Cognition of deep fakes and ultra-realistic digital manipulation of human-like behavior
- Social agents/robots as tools for capturing, measuring and understanding multimodal behavior (speech, gesture, face)
- Neuroscience and social cognition of real humans using virtual agents and physical robots

IMPORTANT DATES

Submission deadline: September 12, 2022
Notification of acceptance: October 15, 2022
Camera-ready deadline: October 31, 2022
Workshop: January 4 or 5, 2023

VENUE

The SIVA workshop is organized as a satellite workshop of the IEEE International Conference on Automatic Face and Gesture Recognition 2023. The workshop will be collocated with the FG 2023 and WACV 2023 conferences at the Waikoloa Beach Marriott Resort, Hawaii, USA.

ADDITIONAL INFORMATION AND SUBMISSION DETAILS

Submissions must be original and not published or submitted elsewhere. Short papers (3 pages excluding references) are intended for early-stage research in emerging fields. Long papers (6 to 8 pages excluding references) are intended for strongly original contributions, position papers, or survey papers. Manuscripts should be formatted according to the Word or LaTeX template provided on the workshop website. All submissions will be reviewed by three reviewers. The reviewing process will be single-blind. Authors will be asked to disclose possible conflicts of interest, such as collaboration within the previous two years, and care will be taken to avoid assigning reviewers from the same institution as the authors. Authors should submit their articles as a single PDF file via the submission website no later than September 12, 2022. Notification of acceptance will be sent by October 15, 2022, and the camera-ready version of the papers, revised according to the reviewers' comments, should be submitted by October 31, 2022. Accepted papers will be published in the proceedings of the FG 2023 conference. More information can be found on the SIVA website.

DIVERSITY, EQUALITY, AND INCLUSION

The workshop will be held in a hybrid format, both online and on site. This format is intended to accommodate travel restrictions and COVID sanitary precautions, to promote inclusion in the research community (travel costs are high, and online presentations will encourage contributions from geographical regions that would otherwise be excluded), and to limit the ecological footprint (e.g., CO2 emissions) of the event. The organizing committee is committed to considering equality, diversity, and inclusivity in its selection of invited speakers. This effort extends from the organizing committee and the invited speakers to the program committee.


ORGANIZING COMMITTEE
🌸 Nicolas Obin, STMS Lab (Ircam, CNRS, Sorbonne Université, ministère de la Culture)
🌸 Ryo Ishii, NTT Human Informatics Laboratories
🌸 Rachael E. Jack, University of Glasgow
🌸 Louis-Philippe Morency, Carnegie Mellon University
🌸 Catherine Pelachaud, CNRS - ISIR, Sorbonne Université
Back  Top

3-3-23(2023-01-04) Workshop on Socially Interactive Human-like Virtual Agents (SIVA'23), Waikoloa, Hawaii
CALL FOR PAPERS: SIVA'23
Workshop on Socially Interactive Human-like Virtual Agents
From expressive and context-aware multimodal generation of digital humans to understanding the social cognition of real humans

Submission (to be opened July 22, 2022):  https://cmt3.research.microsoft.com/SIVA2023
SIVA'23 workshop: January 4 or 5, 2023, Waikoloa, Hawaii, https://www.stms-lab.fr/agenda/siva/detail/
FG 2023 conference: January 4-8, 2023, Waikoloa, Hawaii, https://fg2023.ieee-biometrics.org/

OVERVIEW

Due to the rapid growth of virtual, augmented, and hybrid reality, together with spectacular advances in artificial intelligence, the ultra-realistic generation and animation of digital humans with human-like behaviors is becoming a major topic of interest. This complex endeavor requires modeling several elements of human behavior: the natural coordination of multimodal behaviors spanning text, speech, face, and body, and the contextualization of behavior in response to interlocutors of different cultures and motivations. The challenges are thus twofold: generating and animating coherent multimodal behaviors, and modeling the expressivity and contextualization of the virtual agent with respect to human behavior, as well as understanding and modeling how virtual agents adapt their behavior to increase human engagement. The aim of this workshop is to connect traditionally distinct communities (e.g., speech, vision, cognitive neuroscience, social psychology) to discuss the future of human interaction with human-like virtual agents. We expect contributions from the fields of signal processing, speech and vision, machine learning and artificial intelligence, perceptual studies, and cognitive science and neuroscience. Topics will range from multimodal generative modeling of virtual agent behaviors and speech-to-face and 2D/3D posture animation, to original research topics including style, expressivity, and context-aware animation of virtual agents. Moreover, controllable real-time virtual agent models can serve as state-of-the-art experimental stimuli and confederates in novel, groundbreaking experiments that advance our understanding of social cognition in humans. Finally, these virtual humans can be used to create virtual environments for medical purposes, including rehabilitation and training.

SCOPE

Topics of interest include but are not limited to:

+ Analysis of Multimodal Human-like Behavior
- Analysis and understanding of human multimodal behavior (speech, gesture, face)
- Creating datasets for the study and modeling of human multimodal behavior
- Coordination and synchronization of human multimodal behavior
- Analysis of style and expressivity in human multimodal behavior
- Cultural variability of social multimodal behavior

+ Modeling and Generation of Multimodal Human-like Behavior
- Multimodal generation of human-like behavior (speech, gesture, face)
- Face and gesture generation driven by text and speech
- Context-aware generation of multimodal human-like behavior
- Modeling of style and expressivity for the generation of multimodal behavior
- Modeling paralinguistic cues for multimodal behavior generation
- Few-shot or zero-shot transfer of style and expressivity
- Slightly-supervised adaptation of multimodal behavior to context

+ Psychology and Cognition of Multimodal Human-like Behavior
- Cognition of deep fakes and ultra-realistic digital manipulation of human-like behavior
- Social agents/robots as tools for capturing, measuring and understanding multimodal behavior (speech, gesture, face)
- Neuroscience and social cognition of real humans using virtual agents and physical robots

IMPORTANT DATES

Submission deadline: September 12, 2022
Notification of acceptance: October 15, 2022
Camera-ready deadline: October 31, 2022
Workshop: January 4 or 5, 2023

VENUE

The SIVA workshop is organized as a satellite workshop of the IEEE International Conference on Automatic Face and Gesture Recognition 2023. The workshop will be collocated with the FG 2023 and WACV 2023 conferences at the Waikoloa Beach Marriott Resort, Hawaii, USA.

ADDITIONAL INFORMATION AND SUBMISSION DETAILS

Submissions must be original and not published or submitted elsewhere. Short papers (3 pages excluding references) are intended for early-stage research in emerging fields. Long papers (6 to 8 pages excluding references) are intended for strongly original contributions, position papers, or survey papers. Manuscripts should be formatted according to the Word or LaTeX template provided on the workshop website. All submissions will be reviewed by three reviewers. The reviewing process will be single-blind. Authors will be asked to disclose possible conflicts of interest, such as collaboration within the previous two years, and care will be taken to avoid assigning reviewers from the same institution as the authors. Authors should submit their articles as a single PDF file via the submission website no later than September 12, 2022. Notification of acceptance will be sent by October 15, 2022, and the camera-ready version of the papers, revised according to the reviewers' comments, should be submitted by October 31, 2022. Accepted papers will be published in the proceedings of the FG 2023 conference. More information can be found on the SIVA website.

DIVERSITY, EQUALITY, AND INCLUSION

The workshop will be held in a hybrid format, both online and on site. This format is intended to accommodate travel restrictions and COVID sanitary precautions, to promote inclusion in the research community (travel costs are high, and online presentations will encourage contributions from geographical regions that would otherwise be excluded), and to limit the ecological footprint (e.g., CO2 emissions) of the event. The organizing committee is committed to considering equality, diversity, and inclusivity in its selection of invited speakers. This effort extends from the organizing committee and the invited speakers to the program committee.


ORGANIZING COMMITTEE
🌸 Nicolas Obin, STMS Lab (Ircam, CNRS, Sorbonne Université, ministère de la Culture)
🌸 Ryo Ishii, NTT Human Informatics Laboratories
🌸 Rachael E. Jack, University of Glasgow
🌸 Louis-Philippe Morency, Carnegie Mellon University
🌸 Catherine Pelachaud, CNRS - ISIR, Sorbonne Université
Back  Top

3-3-24(2023-01-09) Advanced Language Processing School (ALPS), Grenoble, France

FIRST CALL FOR PARTICIPATION

 

We are opening the registration for the third Advanced Language Processing School (ALPS), co-organized by University Grenoble Alpes and Naver Labs Europe.

*Target Audience*

This is a winter school covering advanced topics in NLP, and we are primarily targeting doctoral students and advanced (research) master's students. A few slots will also be reserved for academics and persons working in research-heavy positions in industry.

*Characteristics*

Advanced lectures by first-class researchers. A (virtual) atmosphere that fosters connections and interaction. A poster session for attendees to present their work, gather feedback and brainstorm future work ideas.

*Speakers*

The current list of speakers is: Kyunghyun Cho (New York University, USA); Yejin Choi (University of Washington and Allen Institute for AI, USA); Dirk Hovy (Bocconi University, Italy); Colin Raffel (University of North Carolina at Chapel Hill, Hugging Face, USA); Lucia Specia (Imperial College, UK); François Yvon (LISN/CNRS, France).

*Application*

To apply to this winter school, please follow the instructions at http://alps.imag.fr/index.php/application/. The deadline for applying is September 16th, and we will notify acceptance on October 3rd.

*Contact*

Website: http://alps.imag.fr/

E-mail: alps@univ-grenoble-alpes.fr

Back  Top

3-3-25(2023-06-12) 13th International Conference on Multimedia Retrieval, Thessaloniki, Greece

ICMR2023 – ACM International Conference on Multimedia Retrieval

Back  Top

3-3-26(2023-07-15) MLDM 2023 : 18th International Conference on Machine Learning and Data Mining, New York,NY, USA

MLDM 2023 : 18th International Conference on Machine Learning and Data Mining
http://www.mldm.de
 
When: Jul 16, 2023 - Jul 21, 2023
Where: New York, USA
Submission Deadline: Jan 15, 2023
Notification Due: Mar 18, 2023
Final Version Due: Apr 5, 2023
Categories: machine learning, data mining, pattern recognition, classification
 
Call For Papers
MLDM 2023
18th International Conference on Machine Learning and Data Mining
July 15 - 19, 2023, New York, USA

The Aim of the Conference
The aim of the conference is to bring together researchers from all over the world who work on machine learning and data mining, in order to discuss the current state of research and to direct further developments. Basic research papers as well as application papers are welcome.

Chair
Petra Perner Institute of Computer Vision and Applied Computer Sciences IBaI, Germany

Program Committee
Piotr Artiemjew University of Warmia and Mazury in Olsztyn, Poland
Sung-Hyuk Cha Pace University, USA
Ming-Ching Chang University of Albany, USA
Mark J. Embrechts Rensselaer Polytechnic Institute and CardioMag Imaging, Inc, USA
Robert Haralick City University of New York, USA
Adam Krzyzak Concordia University, Canada
Chengjun Liu New Jersey Institute of Technology, USA
Krzysztof Pancerz University of Rzeszow, Poland
Dan Simovici University of Massachusetts Boston, USA
Agnieszka Wosiak Lodz University of Technology, Poland
more to be announced...


Topics of the conference

Paper submissions should be related but not limited to any of the following topics:

Association Rules
Audio Mining
Automatic Semantic Annotation of Media Content
Bayesian Models and Methods
Capability Indices
Case-Based Reasoning and Associative Memory
Case-Based Reasoning and Learning
Classification & Prediction
Classification and Interpretation of Images, Text, Video
Classification and Model Estimation
Clustering
Cognition and Computer Vision
Conceptional Learning
Conceptional Learning and Clustering
Content-Based Image Retrieval
Control Charts
Decision Trees
Design of Experiment
Desirabilities
Deviation and Novelty Detection
Feature Grouping, Discretization, Selection and Transformation
Feature Learning
Frequent Pattern Mining

Back  Top


