ISCA - International Speech
Communication Association



ISCApad #292

Wednesday, October 05, 2022 by Chris Wellekens

3 Events
3-1 ISCA Events
3-1-1(2023-08-20) Call for Special Sessions/Challenges Interspeech 2023 Dublin, Ireland

Call for Special Sessions/Challenges

We are delighted to announce the launch of the Call for Special Sessions/Challenges for INTERSPEECH 2023 in Dublin, Ireland in August 2023.

 

Submissions are encouraged covering interdisciplinary topics and/or important new emerging areas of interest related to the main conference topics. Submissions related to the special focus of the conference’s theme, Inclusive Spoken Language Science and Technology, are particularly welcome. Apart from supporting a particular theme, special sessions may also have a different format from a regular session.

 

Check out https://www.interspeech2023.org/special-sessions-challenges/ for more information, including how to submit your proposal.

 

Important Dates

Proposals of special sessions/challenges due       9th November 2022

Notification of pre-selection       14th December 2022

Final list of special sessions         17th May 2023

For all updates on INTERSPEECH 2023, refer to our website at https://www.interspeech2023.org/


3-1-2(2023-08-20) INTERSPEECH 2023 First Call for Papers

INTERSPEECH 2023 First Call for Papers

 

INTERSPEECH is the world’s largest and most comprehensive conference on the science and technology of spoken language processing. INTERSPEECH conferences emphasise interdisciplinary approaches addressing all aspects of speech science and technology, ranging from basic theories to advanced applications.

 

INTERSPEECH 2023 will take place in Dublin, Ireland, from August 20-24th 2023 and will feature oral and poster sessions, plenary talks by internationally renowned experts, tutorials, special sessions and challenges, show & tell, exhibits, and satellite events.

 

The theme of INTERSPEECH 2023 is Inclusive Spoken Language Science and Technology – Breaking Down Barriers. Whilst it is not a requirement to address this theme, we encourage submissions that: report performance metric distributions in addition to averages; break down results by demographic; employ diverse data; evaluate with diverse target users; report barriers that could prevent other researchers adopting a technique, or users from benefitting. This is not an exhaustive list, and authors are encouraged to discuss the implications of the conference theme for their own work.
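One of the theme's suggestions, reporting metric distributions and per-demographic breakdowns rather than a single average, can be sketched in a few lines. This is an illustrative example only, not part of the call; the error values and group names below are invented for demonstration.

```python
# Sketch: report an error metric broken down by (hypothetical) speaker group,
# in addition to the overall average. All numbers here are invented.
from collections import defaultdict
from statistics import mean, stdev

# (utterance error rate, speaker group) pairs -- fabricated example results
results = [
    (0.12, "group_a"), (0.15, "group_a"), (0.11, "group_a"),
    (0.22, "group_b"), (0.27, "group_b"), (0.24, "group_b"),
]

# Collect per-group error lists
by_group = defaultdict(list)
for err, group in results:
    by_group[group].append(err)

# The overall mean hides the gap between groups that the breakdown exposes
overall = mean(err for err, _ in results)
print(f"overall mean error: {overall:.3f}")
for group, errs in sorted(by_group.items()):
    print(f"{group}: mean={mean(errs):.3f} sd={stdev(errs):.3f} n={len(errs)}")
```

In this fabricated example the overall mean (0.185) masks a substantially higher error rate for one group, which is exactly the kind of disparity the theme asks authors to surface.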

 

Papers are especially welcome from authors who identify as being under-represented in the speech science and technology community, whether that is because of geographical location, economic status, race, age, gender, sexual orientation or any other characteristic.

Paper Submission

INTERSPEECH 2023 seeks original and innovative papers covering all aspects of speech science and technology. The working language of the conference is English, so papers must be written in English. The paper length is up to four pages in two columns with an additional page for references only. Submitted papers must conform to the format defined in the author’s kit provided on the conference website (https://www.interspeech2023.org/), and may optionally be accompanied by multimedia files. Authors must declare that their contributions are original and that they have not submitted their papers elsewhere for publication. Papers must be submitted electronically and will be evaluated through rigorous peer review on the basis of novelty and originality, technical correctness, clarity of presentation, key strengths, and quality of references. The Technical Programme Committee will decide which papers to include in the conference programme using peer review as the primary criterion, with secondary criteria of addressing the conference theme, and diversity across the programme as a whole.

Scientific areas and topics

 

INTERSPEECH 2023 embraces a broad range of science and technology in speech, language and communication, including – but not limited to – the following topics:

 

  • Speech Perception, Production and Acquisition

  • Phonetics, Phonology, and Prosody

  • Paralinguistics in Speech and Language

  • Analysis of Conversation

  • Speech, Voice, and Hearing Disorders

  • Speaker and Language Identification

  • Speech and Audio Signal Analysis

  • Speech Coding and Enhancement

  • Speech Synthesis

  • Spoken Language Generation

  • Automatic Speech Recognition

  • Spoken Dialogue and Conversational AI Systems

  • Spoken Language Translation, Information Retrieval, Summarization

  • Technologies and Systems for New Applications

  • Resources and Evaluation

Technical Program Committee Chairs

Simon King - University of Edinburgh, UK

Kate Knill - University of Cambridge, UK

Petra Wagner - University of Bielefeld, Germany

Contact

For all queries relating to this call for papers, please email:

tpc-chairs@interspeech2023.org

Important dates

Paper Submission Deadline March 1st, 2023

Paper Update Deadline March 8th, 2023

Paper Acceptance Notification May 17th, 2023

Final Paper Upload and Paper Presenter Registration Deadline June 1st, 2023


3-1-3(2023-08-20) Interspeech 2023, Dublin, Ireland

ISCA has reached the decision to hold INTERSPEECH 2023 in Dublin, Ireland (Aug. 20-24, 2023).


3-1-4(2024-07-02) 12th Speech Prosody Conference @Leiden, The Netherlands

Dear Speech Prosody SIG Members,

 

Professor Barbosa and I are very pleased to announce that the 12th Speech Prosody Conference will take place in Leiden, the Netherlands, July 2-5, 2024, and will be organized by Professors Yiya Chen, Amalia Arvaniti, and Aoju Chen.  (Of the 303 votes cast, 225 were for Leiden, 64 for Shanghai, and 14 indicated no preference.) 

 

Also, I'd like to remind everyone that nominations for SProSIG officers for 2022-2024 are still being accepted this week; please use the form at http://sprosig.org/about.html and send it to Professor Keikichi Hirose. If you are considering nominating someone, including yourself, feel free to contact me or any current officer to discuss what's involved and where help is most needed.

 

Nigel Ward, SProSIG Chair

Professor of Computer Science, University of Texas at El Paso

CCSB 3.0408,  +1-915-747-6827

nigel@utep.edu    https://www.cs.utep.edu/nigel/   

 

 


3-1-5(2024-09-01) Interspeech 2024, Jerusalem, Israel.

The ISCA conference committee has decided that Interspeech 2024 will be held in

Jerusalem, Israel, from September 1 to September 5.


3-1-6ISCA INTERNATIONAL VIRTUAL SEMINARS

 

Now is the time of year when seminar programmes get fixed up, so please direct the attention of whoever organises your seminars to the ISCA INTERNATIONAL VIRTUAL SEMINARS scheme (introduction below). There is now a good choice of speakers: see

 

https://www.isca-speech.org/iscaweb/index.php/distinguished-lecturers/online-seminars

ISCA INTERNATIONAL VIRTUAL SEMINARS

A seminar programme is an important part of the life of a research lab, especially for its research students, but it is difficult for scientists to travel to give talks at the moment. However, presentations may be given online and, paradoxically, it is thus possible for labs to engage international speakers whom they would not normally be able to afford.

ISCA has set up a pool of speakers prepared to give on-line talks. In this way we can enhance the experience of students working in our field, often in difficult conditions. To find details of the speakers,

  • visit isca-speech.org
  • Click Distinguished Lecturers in the left panel
  • Online Seminars then appears beneath Distinguished Lecturers: click that.

Speakers may pre-record their talks if they wish, but they don't have to. It is up to the host lab to contact speakers and make the arrangements. Talks can be state-of-the-art, or tutorials.

If you make use of this scheme and arrange a seminar, please send brief details (lab, speaker, date) to education@isca-speech.org

If you wish to join the scheme as a speaker, all we need is a title, a short abstract, a one-paragraph biography and contact details. Please send them to education@isca-speech.org


PS. The online seminar scheme  is now up and running, with 7 speakers so far:

 

Jean-Luc Schwartz, Roger Moore, Martin Cooke, Sakriani Sakti, Thomas Hueber, John Hansen and Karen Livescu.




3-1-7Speech Prosody courses

Dear Speech Prosody SIG Members,

We would like to draw your attention to three upcoming short courses from the Luso-Brazilian Association of Speech Sciences:

- Prosody & Rhythm: applications to teaching rhythm,
  Donna Erickson (Haskins), March 16, 19, 23 and 26

- Prosody, variation and contact,
  Barbara Gili Fivela (University of Salento, Italy), April 19, 21, 23, 26 and 28

- Rhythmic analysis of languages: main challenges,
  Marisa Cruz (University of Lisbon), June 2, 3, 4, 7, 8 and 10

For details:
  http://www.letras.ufmg.br/padrao_cms/index.php?web=lbass&lang=2&page=3670&menu=&tipo=1
 
 
 
Plinio Barbosa and Nigel Ward


3-2 ISCA Supported Events
3-2-1(2023-01-07) SLT-CODE Hackathon Announcement , Doha, Qatar

SLT-CODE Hackathon Announcement

 

Have you ever asked yourself how your smartphone recognizes what you say and who you are?

 

Have you ever thought about how machines recognize different languages?

 

If so, join us for a two-day speech and language technology hackathon. We will answer these questions and build fantastic systems with the guidance of top language and speech scientists in a collaborative environment.

 

The two-day speech and language technology hackathon will take place during the IEEE Spoken Language Technology (SLT) Workshop in Doha, Qatar, on January 7th and 8th, 2023. This year's Hackathon will be inspiring, momentous, and fun. The goal is to build a diverse community of people who want to explore and envision how machines understand the world's spoken languages.

 

During the Hackathon, you will be exposed to (but not limited to) speech and language toolkits such as ESPnet, SpeechBrain, K2/Kaldi, Hugging Face and TorchAudio, as well as commercial APIs such as Amazon Lex, and you will get hands-on experience with this technology.

 

At the end of the Hackathon, every team will share their findings with the rest of the participants. Selected projects will have the opportunity to be presented at the SLT workshop.

 

The Hackathon will be at the Qatar Computing Research Institute (QCRI) in Doha, Qatar (GMT+3). In-person participation is preferred; however, remote participation is possible by joining a team with at least one person being local.

 

More information on how to apply and important dates are available at our website https://slt2022.org/hackathon.php

 

Interested? Apply here: https://forms.gle/a2droYbD4qset8ii9. The deadline for registration is September 30th, 2022.

 

If you have immediate questions, don't hesitate to contact our hackathon chairs directly at hackathon.slt2022@gmail.com.


3-3 Other Events
3-3-1(2022-10-12) French Cross-Domain Dialect Identification (FDI) task @VarDial2022, Gyeongju, South Korea
We are organizing the French Cross-Domain Dialect Identification (FDI) task @VarDial2022.
 

In the 2022 French Dialect Identification (FDI) shared task, participants must train a model on news samples collected from one set of publication sources and evaluate it on news samples collected from a different set of publication sources. Not only are the sources different, but so are the topics. Participants therefore have to build a model for a cross-domain, 4-way dialect classification task, in which a classification model must discriminate between the French (FR), Swiss (CH), Belgian (BE) and Canadian (CA) dialects across different news samples. The corpus is divided into training, validation and test sets such that the publication sources and topics are distinct across splits. The training set contains 358,787 samples, the development set contains 18,002 samples, and a further 36,733 samples are held out for the final evaluation.

Important Dates:
- Training set release: May 20, 2022
- Test set release: June 30, 2022
- Submissions due: July 6, 2022

Link: https://sites.google.com/view/vardial-2022/shared-tasks#h.mj5vivaubw8r
 
We invite you to participate!
Have a nice day.

3-3-2(2022-10-17) Cf Posters papers, ISMAR 2022, Singapore

CALL FOR POSTER PAPERS
https://ismar2022.org/call-for-posters/

OVERVIEW
ISMAR 2022, the premier conference for Augmented Reality (AR) and Mixed Reality (MR), will be held on October 17-21, 2022. Note that ISMAR offers three distinct calls: journal papers, conference papers, and posters. This call is for submission to the conference poster track. See the ISMAR website for more information.

IMPORTANT DEADLINES
Poster Paper Submission Deadline: June 20th, 2022
Notification: August 15th, 2022
Camera-ready version: August 22nd, 2022
ISMAR has responded to the increasing commercial and research activity related to AR, MR and Virtual Reality (VR) by continuing to expand its scope over the past several years. ISMAR 2022 will cover the full range of technologies encompassed by the MR continuum, from interfaces in the real world to fully immersive experiences.

ISMAR invites research contributions that advance AR/VR/MR technologies, collectively referred to as eXtended Reality (XR) technologies, and are relevant to the community.

The poster session is one of the highlights of ISMAR, where the community engages in a discussion about the benefits and challenges of XR in other research and application domains.

SUBMISSION DETAILS
We welcome paper submissions of 2-6 pages, including the list of references. Poster papers will be reviewed on the basis of an extended abstract, which can contain smaller contributions, late-breaking developments or in-progress work.

Please note that ISMAR further distinguishes poster papers of 2 pages, which are non-archival, from poster papers of 3 or more pages, which are archival. This means that authors of 2-page poster papers can resubmit longer versions of their accepted work, with additional details, at later ISMAR conferences.

  • All submissions will be accepted or rejected as poster papers.
  • All accepted papers will be archived in the IEEE Xplore digital library.
  • At least one of the authors must register and attend the ISMAR 2022 conference to present the poster.
The Poster track is aligned with the Conference Paper track, which may accept some Conference Paper submissions as posters based on the merit of their contribution. Detailed submission and review guidelines are available in the Guidelines section of the conference website.

Note that all paper submissions must be in English.

ISMAR 2022 Poster Chairs
poster_chairs@ismar2022.org


3-3-3(2022-10-19) CfP GDR CNRS « l’accès à l’information » , Rennes, France

As part of the GdR CNRS Traitement automatique des langues (GdR TAL), IRISA is organising a one-day scientific workshop on the theme of 'access to information' on 19 October 2022 in Rennes. The day will be organised around several invited oral presentations as well as poster presentations and demos (see the call below).

 

Topics

 

The digitisation of society has supposedly made it easier to access information, whether for the general public (encyclopaedic knowledge, news, etc.) or in specialised domains (e.g. the scientific literature).

However, faced with the avalanche of documents, websites, sources, etc., many practical questions arise, each of which is a scientific challenge for the fields of information retrieval, natural language processing and speech processing:

  • How can we find the documents or passages that answer a specific information need?

  • How can we extract specific information, terms, or named entities from a set of documents?

  • How can we facilitate interaction between a user and a document collection, navigate within it, and develop visualisation interfaces?

  • How can we summarise information?
  • How can we efficiently exploit multimodality and process multilingual or multimedia documents?

  • How can we interface knowledge bases with NLP systems?
  • How can we assess the reliability of sources and of extracted information, and how can we characterise their biases?

  • ...

 

Call for posters and demos

 

As part of this workshop, we invite researchers working on these topics in an academic or industrial setting to present their work (demo or poster), even if already published, and to exchange with colleagues in the field. To do so, simply submit an abstract of at most one page, and/or the poster if it already exists, and/or the article describing the work if already published, in French or English, at https://gdr-tal-rennes.sciencesconf.org .

 

Submission of abstracts/posters/articles: on a rolling basis, and no later than 30 September 2022

Notification to authors: one week after receipt of the proposal

 

Invited speakers

 

There will be 4 invited presentations:

  • Natural language processing and access to experimental data - Patrick Paroubek (CNRS, LISN)
  • Information extraction and knowledge management within an organisation: how to conduct open research on 'closed' data? - Géraldine Damnati (Orange Labs)
  • Visual Text Analytics in Data Journalism - Anastasia Bezerianos (Univ. Paris Saclay, LISN)
  • Flagging suspect scientific publications for post-publication reassessments - Cyril Labbé (Univ. Grenoble, LIG)

 

Registration (free but mandatory) and venue

 

The workshop will take place in the conference centre at IRISA - Centre Inria de l'université de Rennes, Campus de Beaulieu, Rennes.

Registration (free but mandatory), programme and information: https://gdr-tal-rennes.sciencesconf.org

Reminder: the GDR TAL can fund travel for one researcher or faculty member per GDR team. The request must be made by the team leader to the GDR board. The list of eligible teams is on the GDR TAL website; see https://gdr-tal.ls2n.fr/reseau-des-doctorants/

In addition, young researchers coming to present their work may also request travel support from the organisers. Contact vincent.claveau@irisa.fr


3-3-4(2022-11-07) 24th ACM International Conference on Multimodal Interaction (ICMI 2022), Bengaluru (Bangalore), India

 

CALL FOR LATE-BREAKING RESULTS
 
We invite you to submit your papers to the late-breaking results track of the 24th ACM International Conference on Multimodal Interaction (ICMI 2022), located in Bengaluru (Bangalore), India, November 7-11th, 2022. 

Based on the success of the LBR in the past ICMI 18-21, the ACM International Conference on Multimodal Interaction (ICMI) 2022 continues soliciting submissions for the special venue titled Late-Breaking Results (LBR). The goal of the LBR venue is to provide a way for researchers to share emerging results at the conference. Accepted submissions will be presented in a poster session at the conference, and the extended abstract will be published in the new Adjunct Proceedings (Companion Volume) of the main ICMI Proceedings. Like similar venues at other conferences, the LBR venue is intended to allow sharing of ideas, getting formative feedback on early-stage work, and furthering collaborations among colleagues.
  • Highlights 
    • Submission deadline: August 12th, 2022
    • Notifications: September 9th, 2022
    • Camera-ready deadline: September 16th, 2022
    • Conference Dates: November 7-11, 2022
    • Submission format: Anonymized short paper (seven pages in a single-column format, not including references), following the submission guidelines
    • Selection process: Peer-Reviewed
    • Presentation format: Participation in the conference poster session
    • Proceedings: Included in Adjunct Proceedings and ACM Digital Library
    • LBR Co-chairs: Fabien Ringeval and Nikita Soni
Late-Breaking Results (LBR) submissions represent work such as preliminary results, provoking and current topics, novel experiences or interactions that may not have been fully validated yet, cutting-edge or emerging work that is still in exploratory stages, smaller-scale studies, or in general, work that has not yet reached the level of maturity expected for full-length main-track papers. However, LBR papers are still expected to bring a contribution to the ICMI community, commensurate with the preliminary, short, and quasi-informal nature of this track.

 

Accepted LBR papers will be presented as posters during the conference. This provides an opportunity for researchers to receive feedback on early-stage work, explore potential collaborations, and otherwise engage in exciting thought-provoking discussions about their work in an informal setting that is significantly less constrained than a paper presentation. The LBR (posters) track also offers those new to the ICMI community a chance to share their preliminary research as they become familiar with this field.
Late-Breaking Results papers appear in the Adjunct Proceedings (Companion Volume) of the ICMI Proceedings. Copyright is retained by the authors, and the material from these papers can be used as the basis for future publications as long as there are “significant” revisions from the original, as per the ACM and ACM SIGCHI policies.
Extended Abstract: An anonymized short paper of up to seven pages in a single-column format, not including references. The instructions and templates are at the following link: https://www.acm.org/publications/taps/word-template-workflow. The paper should be submitted in PDF format through the ICMI submission system in the “Late-Breaking Results” track. Due to the tight publication timeline, it is recommended that authors submit a very nearly finalized paper that is as close to camera-ready as possible, as there will be a very short timeframe for preparing the final camera-ready version and no deadline extensions can be granted.
Anonymization: Authors are instructed not to include author information in their submission. In order to help reviewers judge the relation of the LBR to prior work, authors should not remove or anonymize references to their own prior work. Instead, we recommend that authors obscure references to their own prior work by referring to it in the third person during submission. If desired, after acceptance, such references can be changed to first-person.
LBRs will be evaluated to the extent that they are presenting work still in progress, rather than complete work which is under-described in order to fit into the LBR format. The LBR track will undergo an external peer review process. Submissions will be evaluated by a number of factors including (1) the relevance of the work to ICMI, (2) the quality of the submission, and (3) the degree to which it “fits” the LBR track (e.g., in-progress results). More particularly, the quality of the submission will be evaluated based on the potential contributions of the research to the field of multimodal interfaces and its impact on the field and beyond. Authors should clearly justify how the proposed ideas can bring some measurable breakthroughs compared to the state-of-the-art of the field.
Similar rules for registration and attendance will be applied for authors of LBR papers as for regular papers. Further information will be available later on and given on the main page of the website.
For more information and updates on the ICMI 2022 Late-Breaking Results (LBR), visit the LBR page of the main conference website: https://icmi.acm.org/2022/index.php?id=cflbr.
For further questions, contact the LBR co-chairs (Fabien Ringeval and Nikita Soni) at icmi2022-latebreaking-chairs@acm.org

3-3-5(2022-11-07) Doctoral Consortium at ICMI- Call for Contributions

Doctoral Consortium - Call for Contributions 

The goal of the ICMI Doctoral Consortium (DC) is to provide PhD students with an opportunity to present their work to a group of mentors and peers from a diverse set of academic and industrial institutions, to receive feedback on their doctoral research plan and progress, and to build a cohort of young researchers interested in designing and developing multimodal interfaces and interaction. We invite students from all PhD granting institutions who are in the process of forming or carrying out a plan for their PhD research in the area of designing and developing multimodal interfaces. 

Who should apply? 

While we encourage applications from students at any stage of doctoral training, the doctoral consortium will most benefit students who are in the process of forming or developing their doctoral research. These students will have passed their qualifiers or completed the majority of their coursework, will be planning or developing their dissertation research, and will not yet be close to completing it. Students from any PhD-granting institution whose research falls within designing and developing multimodal interfaces and interaction are encouraged to apply. 

Why should you attend? 

The DC provides an opportunity to build a social network that includes the cohort of DC students, senior students, recent graduates, and senior mentors. Not only is this an opportunity to get feedback on research directions, it is also an opportunity to learn more about the process and to understand what comes next. We aim to connect you with a mentor who will give specific feedback on your research. We specifically aim to create an informal setting where students feel supported in their professional development. 

Submission Guidelines 

Graduate students pursuing a PhD degree in a field related to designing multimodal interfaces should submit the following materials: 

  1. Extended Abstract: Please describe your PhD research plan and progress as a seven-page paper in a single column format. The instructions and templates are on the following link: https://www.acm.org/publications/taps/word-template-workflow. Your extended abstract should follow the same outline, details, and format of the ICMI short papers. The submissions will not be anonymous. In particular, it should cover: 
    • The key research questions and motivation of your research; 
    • Background and related work that informs your research; 
    • A statement of hypotheses or a description of the scope of the technical problem; 
    • Your research plan, outlining stages of system development or series of studies; 
    • The research approach and methodology; 
    • Your results to date (if any) and a description of remaining work; 
    • A statement of research contributions to date (if any) and expected contributions of your PhD work; 
  2. Advisor Letter: A one-page letter of nomination from the student's PhD advisor. This letter is not a letter of support. Instead, it should focus on the student's PhD plan and how the Doctoral Consortium event might contribute to the student's PhD training and research. 
  3. CV: A two-page curriculum vitae of the student. 

All materials should be prepared in a single PDF format and submitted through the ICMI submission system. 


Important Dates 

Submission deadline 

July 1, 2022 

Notifications 

July 29, 2022 

Camera-ready 

August 12, 2022 

Review Process 

The Doctoral Consortium will follow a review process in which submissions will be evaluated by a number of factors including (1) the quality of the submission, (2) the expected benefits of the consortium for the student's PhD research, and (3) the student's contribution to the diversity of topics, backgrounds, and institutions, in order of importance. More particularly, the quality of the submission will be evaluated based on the potential contributions of the research to the field of multimodal interfaces and its impact on the field and beyond. Finally, we hope to achieve a diversity of research topics, disciplinary backgrounds, methodological approaches, and home institutions in this year's Doctoral Consortium cohort. We do not expect more than two students to be invited from each institution to represent a diverse sample. Women and other underrepresented groups are especially encouraged to apply.


Attendance 

All authors of accepted submissions are expected to attend the Doctoral Consortium and the main conference poster session. The attendees will present their work as a short talk or as a poster at the conference poster session. A detailed program for the Consortium and the participation guidelines will be available after the camera-ready deadline.

Questions? 

For more information and updates on the ICMI 2022 Doctoral Consortium, visit the Doctoral Consortium page of the main conference website (https://icmi.acm.org/2022/doctoral-consortium/). 

For further questions, contact the Doctoral Consortium co-chairs: 

  • Theodora Chaspari (chaspari@tamu.edu) 
  • Tanaya Guha (tanaya.guha@glasgow.ac.uk) 
 

3-3-6(2022-11-07) International Workshop on “Voice Assistant Systems in Team Interactions ‒ Implications, Best Practice, Applications, and Future Perspectives” VASTI 2022 @ICMI 2022

International Workshop on “Voice Assistant Systems in Team Interactions ‒ Implications, Best Practice, Applications, and Future Perspectives”
VASTI 2022

co-located with the ICMI 2022

https://vasti2022.mobileds.de/

Scope
The workshop encourages an interdisciplinary exchange among researchers focusing on multimodal interaction across the wide range of group research aspects, linguistic and acoustic perspectives, and dialogue management in relation to speech-based systems such as voice assistants. The interdisciplinary collaboration between these research communities is currently rather loose. The workshop therefore aims to bridge the three research communities on the basis of shared interests and to provide a platform for detailed discussions.

Generally, human beings are interactive and socially engaged, often communicating in dyads or groups. During such interactions, each communication partner (human or technical) provides a variety of information, including general content as well as personal and relational information. These communication aspects are the focus of research on group and multi-party interaction. The social sciences study areas such as the interpersonal relationships of group members and the dynamics of group interaction, cohesion, and performance. These aspects are nowadays also addressed in computer science and linguistics using automatic analyses. Unfortunately, these communities have started to collaborate only recently; in this sense, the workshop aims to strengthen these collaborations.

The advent of voice assistants and their increasing adoption provide an ideal testbed for combining the three communities and encouraging interdisciplinary discussions that highlight contributions from each research perspective. In particular, at their current level of development, voice assistant systems seem to set an expectation of human-like linguistic flexibility and complexity that is disproportionate to the actual skills of the artificial agent. To enable future technical systems to act as conversational partners and behave naturally in group or dyadic multimodal interactions, it is necessary to combine knowledge and research approaches on the fundamental mechanisms of human speech perception and production from a cognitive, psycholinguistic point of view, insights from interactional linguistics, discourse analysis and sociolinguistics, and phonetics, phonology and prosody in the context of spoken interaction with machines. This should be further combined with aspects of dialogue management and social signal processing to allow a holistic consideration of users and user groups.


Topics

  • Voice Assistant Technology

  • Multimodal Interactions in Teams

  • Automatic Team Analyses
  • Multi-Party Interaction
  • Linguistics in Voice Assistance
  • Linguistics in Teams
  • Communication in Teams
  • Human Speech Perception
  • Multimodal Perception

Important dates:
Submission deadline: July 28, 2022
Notification of Acceptance: August 12, 2022
Camera ready: August 19, 2022
Workshop date: November 7, 2022

Submissions
Prospective authors are invited to submit full papers (8 pages: 7 pages of content plus 1 page of references) and short papers (5 pages: 4 pages of content plus 1 page of references) following the ICMI 2022 LaTeX or Word templates. All submissions should be anonymous. Accepted papers will be published in the conference proceedings.

Venue
In conjunction with ICMI 2022 (intended to be onsite).

Organizers
Ronald Böck, University Magdeburg, Germany
Daniel Duran, Leibniz Zentrum für Angewandte Sprachwissenschaft, Germany
Ingo Siegert, University Magdeburg, Germany


Back  Top

3-3-7(2022-11-07) Late-breaking results @24th ACM International Conference on Multimodal Interaction (ICMI 2022), Bengaluru, India
CALL FOR LATE-BREAKING RESULTS
 
We invite you to submit your papers to the late-breaking results track of the 24th ACM International Conference on Multimodal Interaction (ICMI 2022), located in Bengaluru (Bangalore), India, November 7-11th, 2022. 

Building on the success of the LBR track at ICMI 2018-2021, the ACM International Conference on Multimodal Interaction (ICMI) 2022 continues to solicit submissions for the special venue titled Late-Breaking Results (LBR). The goal of the LBR venue is to provide a way for researchers to share emerging results at the conference. Accepted submissions will be presented in a poster session at the conference, and the extended abstracts will be published in the new Adjunct Proceedings (Companion Volume) of the main ICMI Proceedings. Like similar venues at other conferences, the LBR venue is intended to allow sharing of ideas, obtaining formative feedback on early-stage work, and furthering collaborations among colleagues.
  • Highlights 
    • Submission deadline: August 12th, 2022
    • Notifications: September 9th, 2022
    • Camera-ready deadline: September 16th, 2022
    • Conference Dates: November 7-11, 2022
    • Submission format: Anonymized short paper (seven pages in a single-column format, not including references), following the submission guidelines.
    • Selection process: Peer-Reviewed
    • Presentation format: Participation in the conference poster session
    • Proceedings: Included in Adjunct Proceedings and ACM Digital Library
    • LBR Co-chairs: Fabien Ringeval and Nikita Soni
Late-Breaking Results (LBR) submissions represent work such as preliminary results, provocative and timely topics, novel experiences or interactions that may not have been fully validated yet, cutting-edge or emerging work that is still in exploratory stages, smaller-scale studies, or, in general, work that has not yet reached the level of maturity expected of full-length main-track papers. However, LBR papers are still expected to make a contribution to the ICMI community, commensurate with the preliminary, short, and quasi-informal nature of this track.

 

Accepted LBR papers will be presented as posters during the conference. This provides an opportunity for researchers to receive feedback on early-stage work, explore potential collaborations, and otherwise engage in exciting thought-provoking discussions about their work in an informal setting that is significantly less constrained than a paper presentation. The LBR (posters) track also offers those new to the ICMI community a chance to share their preliminary research as they become familiar with this field.
Late-Breaking Results papers appear in the Adjunct Proceedings (Companion Volume) of the ICMI Proceedings. Copyright is retained by the authors, and the material from these papers can be used as the basis for future publications as long as there are “significant” revisions from the original, as per the ACM and ACM SIGCHI policies.
Extended Abstract: An anonymized short paper, seven-page paper in a single column format, not including references. The instructions and templates are on the following link: https://www.acm.org/publications/taps/word-template-workflow. The paper should be submitted in PDF format and through the ICMI submission system in the “Late-Breaking Results” track. Due to the tight publication timeline, it is recommended that authors submit a very nearly finalized paper that is as close to camera-ready as possible, as there will be a very short timeframe for preparing the final camera-ready version and no deadline extensions can be granted.
Anonymization: Authors are instructed not to include author information in their submission. To help reviewers judge the relation of the LBR to prior work, authors should not remove references to their own prior work; instead, we recommend obscuring such references by referring to the work in the third person during submission. If desired, these references can be changed to first person after acceptance.
LBR submissions will be evaluated on the extent to which they present work still in progress, rather than complete work that has been under-described to fit the LBR format. The LBR track will undergo an external peer review process. Submissions will be evaluated on a number of factors, including (1) the relevance of the work to ICMI, (2) the quality of the submission, and (3) the degree to which it fits the LBR track (e.g., in-progress results). In particular, the quality of the submission will be evaluated based on the potential contributions of the research to the field of multimodal interfaces and its impact on the field and beyond. Authors should clearly justify how the proposed ideas can bring measurable breakthroughs compared to the state of the art of the field.
The same registration and attendance rules apply to authors of LBR papers as to authors of regular papers. Further information will be made available on the main page of the website.
For more information and updates on the ICMI 2022 Late-Breaking Results (LBR), visit the LBR page of the main conference website: https://icmi.acm.org/2022/index.php?id=cflbr.
For further questions, contact the LBR co-chairs (Fabien Ringeval and Nikita Soni) at icmi2022-latebreaking-chairs@acm.org.

Back  Top

3-3-8(2022-11-14) IberSPEECH 2022, Granada, Spain

IberSPEECH’2022 will be held in Granada, Spain, from 14 to 16 November 2022. The IberSPEECH event (the sixth of its kind under this name) brings together the XII Jornadas en Tecnologías del Habla and the VIII Iberian SLTech Workshop events.

Following the tradition of previous editions, IberSPEECH’2022 will be a three-day event designed to promote interaction and discussion. There will be a wide variety of activities: technical paper presentations, keynote lectures, presentations of projects, laboratory activities, recent PhD theses, entrepreneurship & discussion panels, and awards for the best theses and papers.

You can find all the information for this first call for papers at http://iberspeech2022.ugr.es/. More details will be available soon on this website.

 

Important Dates

Regular Papers

Submission opens: June 17th, 2022
Submission abstract deadline: July 8th, 2022
Submission full paper deadline: July 15th, 2022
Paper notifications sent: September 16th, 2022
Camera-ready paper due: September 25th, 2022

Special Sessions (Projects, Demos, PhD Theses & Entrepreneurship)

Proposals: October 7th, 2022
Full-Paper: October 14th, 2022

Albayzin Evaluations 2022

Registration deadline: September 4th, 2022.
Release of the evaluation data: September 5th, 2022.
Submission deadline (including system summary): October 16th, 2022.
Results distribution to the participants: October 24th, 2022.
Paper submission deadline: October 30th, 2022.
Iberspeech 2022 Albayzin Evaluations special session in Granada: November 15th, 2022.

Conference IBERSPEECH’2022

Registration opens: September 16th, 2022
Early Registration ends: October 18th, 2022
Conference Starts: Monday, November 14th, 2022
Conference Ends: Wednesday, November 16th, 2022

At least one author of each accepted paper must complete a full early registration for the conference.

 

Topics

The topics of interest regarding processing Iberian languages include, but are not limited to:

1. Speech technology and applications

1. Spoken language generation and synthesis

2. Speech and speaker recognition

3. Speaker diarization

4. Speech enhancement

5. Speech processing and acoustic event detection

6. Spoken language understanding

7. Spoken language interfaces and dialogue systems

8. Systems for information retrieval and information extraction from speech

9. Systems for speech translation

10. Applications for aged and handicapped persons

11. Applications for learning and education

12. Emotion recognition and synthesis

13. Language and dialect identification

14. Speech technology and applications: other topics

 

2. Human speech production, perception, and communication

1. Linguistic, mathematical, and psychological models of language

2. Phonetics, phonology, and morphology

3. Pragmatics, discourse, semantics, syntax, and lexicon

4. Paralinguistic and nonlinguistic cues (e.g. emotion and expression)

5. Human speech production, perception, and communication: other topics

 

3. Natural language processing (NLP) and applications

1. Natural language generation and understanding

2. Retrieval and categorization of natural language documents

3. Mono- and multi-document summarization

4. Extraction and annotation of entities, relations, and properties

5. Creation and processing of ontologies and vocabularies

6. Machine learning for natural language processing

7. Shallow and deep semantic analysis: textual entailment, anaphora resolution, paraphrasing

8. Multi-lingual processing for information retrieval and extraction

9. Natural language processing for information retrieval and extraction

10. Natural language processing (NLP) and applications: other topics

 

4. Speech, Language and Multimodality

1. Multimodal Interaction

2. Sign Language

3. Handwriting recognition

4. Speech, Language and Multimodality: other topics

 

5. Resources, standardization, and evaluation

1. Spoken language resources, annotation, and tools

2. Spoken language evaluation and standardization

3. NLP resources, annotation, tools

4. NLP evaluation and standardization

5. Multimodal resources, annotation and tools

6. Multimodal evaluation and standardization

7. Resources, standardization, and evaluation: other topics

 

Paper Submission

Regular papers must be written in English and submitted online in PDF following the Interspeech 2022 format. Paper length is 4 pages plus one additional page for references. There is no minimum length requirement for papers in the special sessions (project reviews and demos).

Upon acceptance, at least one author will be required to register (full & early) and present the paper at the conference.

 

Back  Top

3-3-9(2022-11-14) CfP SPECOM 2022, Gurugram, India (updated)

********************************************************************

SPECOM-2022 – CALL FOR PAPERS

********************************************************************

The conference has been relocated to India.

 

********************************************************************

 


 

 

 

24th International Conference on Speech and Computer (SPECOM-2022)

 

November 14-16, 2022, KIIT Campus, Gurugram, India

 

Web: www.specom.co.in

 

 

 

ORGANIZER

 

The conference is organized by KIIT College of Engineering as a hybrid event both in Gurugram/New Delhi, India and online.

 

 

 

CONFERENCE TOPICS

 

SPECOM attracts researchers, linguists and engineers working in the following areas of speech science, speech technology, natural language processing, and human-computer interaction:

 

Affective computing
Audio-visual speech processing
Corpus linguistics
Computational paralinguistics
Deep learning for audio processing
Feature extraction
Forensic speech investigations
Human-machine interaction
Language identification
Multichannel signal processing
Multimedia processing
Multimodal analysis and synthesis
Sign language processing
Speaker recognition
Speech and language resources
Speech analytics and audio mining
Speech and voice disorders
Speech-based applications
Speech driving systems in robotics
Speech enhancement
Speech perception
Speech recognition and understanding
Speech synthesis
Speech translation systems
Spoken dialogue systems
Spoken language processing
Text mining and sentiment analysis
Virtual and augmented reality
Voice assistants

 

 

 

OFFICIAL LANGUAGE

 

The official language of the event is English. However, papers on processing of languages other than English are strongly encouraged.

 

 

 

FORMAT OF THE CONFERENCE

 

The conference program will include presentations of invited talks, oral presentations, and poster/demonstration sessions.

 

 

 

 

 

 

 

SUBMISSION OF PAPERS

 

Authors are invited to submit full papers of 8-14 pages formatted in the Springer LNCS style. Each paper will be reviewed by at least three independent reviewers (single-blind), and accepted papers will be presented either orally or as posters. Papers submitted to SPECOM must not be under review by any other conference or publication during the SPECOM review cycle, and must not be previously published or accepted for publication elsewhere. The authors are asked to submit their papers using the on-line submission system: https://easychair.org/conferences/?conf=specom2022

 

 

 

PROCEEDINGS

 

SPECOM Proceedings will be published by Springer as a book in the Lecture Notes in Artificial Intelligence (LNAI/LNCS) series listed in all major international citation databases.

 

 

 

IMPORTANT DATES (extended!)

 

August 16, 2022 .................. Submission of full papers
September 13, 2022 ........... Notification of acceptance
September 20, 2022 ........... Camera-ready papers
September 27, 2022 ........... Early registration
November 14-16, 2022 ....... Conference dates

 

 

 

GENERAL CHAIR/CO-CHAIR

 

Shyam S Agrawal - KIIT, Gurugram
Amita Dev - IGDTUW, Delhi

 

 

 

TECHNICAL CHAIR/CO-CHAIRS

 

S.R. Mahadeva Prasanna - IIT Dharwad
Alexey Karpov - SPC RAS
Rodmonga Potapova - MSLU
K. Samudravijaya - KL University

 

 

 

CONTACTS

 

All correspondence regarding the conference should be addressed to the SPECOM 2022 Secretariat.

 

E-mail: specomkiit@kiitworld.in

 

Web: www.specom.co.in

 

 

 

 



 

 

 


Back  Top

3-3-10(2022-11-30) Third workshop on Resources for African Indigenous Languages (RAIL), Potchefstroom, South Africa

Final call for papers

Third workshop on Resources for African Indigenous Languages (RAIL)
https://bit.ly/rail2022


The South African Centre for Digital Language Resources (SADiLaR) is
organising the 3rd RAIL workshop in the field of Resources for African
Indigenous Languages. This workshop aims to bring together researchers
who are interested in showcasing their research and thereby boosting
the field of African indigenous languages. This provides an overview of
the current state-of-the-art and emphasizes availability of African
indigenous language resources, including both data and tools.
Additionally, it will allow for information sharing among researchers
interested in African indigenous languages and also start discussions
on improving the quality and availability of the resources.  Many
African indigenous languages currently have no or very limited
resources available and, additionally, they are often structurally
quite different from more well-resourced languages, requiring the
development and use of specialized techniques.  By bringing together
researchers from different fields (e.g., (computational) linguistics,
sociolinguistics, language technology) to discuss the development of
language resources for African indigenous languages, we hope to boost
research in this field.

The RAIL workshop is an interdisciplinary platform for researchers
working on resources (data collections, tools, etc.) specifically
targeted towards African indigenous languages.  It aims to create the
conditions for the emergence of a scientific community of practice that
focuses on data, as well as tools, specifically designed for or applied
to indigenous languages found in Africa.

Suggested topics include the following:
* Digital representations of linguistic structures
* Descriptions of corpora or other data sets of African indigenous
languages
* Building resources for (under resourced) African indigenous languages
* Developing and using African indigenous languages in the digital age
* Effectiveness of digital technologies for the development of African
indigenous languages
* Revealing unknown or unpublished existing resources for African
indigenous languages
* Developing desired resources for African indigenous languages
* Improving quality, availability and accessibility of African
indigenous language resources


The 3rd RAIL workshop 2022 will be co-located with the 10th Southern
African Microlinguistics Workshop (
https://sites.google.com/nwulettere.co.za/samwop-10/home). This will be
an in-person event located in Potchefstroom, South Africa. Registration
will be free.

RAIL 2022 submission requirements:
* RAIL asks for full papers from 4 pages to 8 pages (plus more pages
for references if needed), which must strictly follow the Journal of
the Digital Humanities Association of Southern Africa style guide (
https://upjournals.up.ac.za/index.php/dhasa/libraryFiles/downloadPublic/30
).
* Accepted submissions will be published in JDHASA, the Journal of the
Digital Humanities Association of Southern Africa (
https://upjournals.up.ac.za/index.php/dhasa/).
* Papers will be double blind peer-reviewed and must be submitted
through EasyChair (https://easychair.org/my/conference?conf=rail2022).

Important dates
Submission deadline: 28 August 2022
Date of notification: 30 September 2022
Camera ready copy deadline: 23 October 2022
RAIL: 30 November 2022, North-West University - Potchefstroom
SAMWOP: 1 – 3 December 2022, North-West University - Potchefstroom


Organising Committee
Jessica Mabaso
Rooweither Mabuya
Muzi Matfunjwa
Mmasibidi Setaka
Menno van Zaanen

South African Centre for Digital Language Resources (SADiLaR), South
Africa

Back  Top

3-3-11(2022-12-13) CfP 18th Australasian International Conference on Speech Science and Technology (SST2022), Canberra, Australia
SST2022: CALL FOR PAPERS

The Australasian Speech Science and Technology Association is pleased to call for papers for the 18th Australasian International Conference on Speech Science and Technology (SST2022). SST is an international interdisciplinary conference designed to foster collaboration among speech scientists, engineers, psycholinguists, audiologists, linguists, speech/language pathologists and industrial partners.

• Location: Canberra, Australia (remote participation options will also be available)
• Dates: 13-16 December 2022
• Host Institution: Australian National University
• Deadline for tutorial and special session proposals: 8 April 2022
• Deadline for submissions: 17 June 2022
• Notification of acceptance: 31 August 2022
• Deadline for upload of revised submissions: 16 September 2022
• Website: www.sst2022.com

Submissions are invited in all areas of speech science and technology, including:

• Acoustic phonetics
• Analysis of paralinguistics in speech and language
• Applications of speech science and technology
• Audiology
• Computer assisted language learning
• Corpus management and speech tools
• First language acquisition
• Forensic phonetics
• Hearing and hearing impairment
• Languages of Australia and Asia-Pacific (phonetics/phonology)
• Low-resource languages
• Pedagogical technologies for speech
• Second language acquisition
• Sociophonetics
• Speech signal processing, analysis, modelling and enhancement
• Speech pathology
• Speech perception
• Speech production
• Speech prosody, emotional speech, voice quality
• Speech synthesis and speech recognition
• Spoken language processing, translation, information retrieval and summarization
• Speaker and language recognition
• Spoken dialog systems and analysis of conversation
• Voice mechanisms, source-filter interactions

We are inviting two categories of submission: 4-page papers (for oral or poster presentation, and publication in the proceedings), and 1-page detailed abstracts (for poster presentation only). Please follow the author instructions in preparing your submission.

We also invite proposals for tutorials, as 3-hour intensive instructional sessions to be held on the first day of the conference. In addition, we welcome proposals for special sessions, as thematic groupings of papers exploring specific topics or challenges. Interdisciplinary special sessions are particularly encouraged.

For any queries, please contact sst2022conf@gmail.com.
Back  Top

3-3-12(2023-01-04) SIVA workshop @ Waikoloa Beach Marriott Resort, Hawaii, USA.

CALL FOR PAPERS: SIVA'23
Workshop on Socially Interactive Human-like Virtual Agents
From expressive and context-aware multimodal generation of digital humans to understanding the social cognition of real humans

Submission (opens July 22, 2022): https://cmt3.research.microsoft.com/SIVA2023
SIVA'23 workshop: January 4 or 5, 2023, Waikoloa, Hawaii, https://www.stms-lab.fr/agenda/siva/detail/
FG 2023 conference: January 4-8, 2023, Waikoloa, Hawaii, https://fg2023.ieee-biometrics.org/

OVERVIEW

Due to the rapid growth of virtual, augmented, and hybrid reality, together with spectacular advances in artificial intelligence, the ultra-realistic generation and animation of digital humans with human-like behaviors is becoming a massive topic of interest. This complex endeavor requires modeling several elements of human behavior, including the natural coordination of multimodal behaviors spanning text, speech, face, and body, plus the contextualization of behavior in response to interlocutors of different cultures and motivations. The challenges in this topic are thus twofold: generating and animating coherent multimodal behaviors, and modeling the expressivity and contextualization of the virtual agent with respect to human behavior, including understanding and modeling how virtual agent behavior can adapt to increase human engagement. The aim of this workshop is to connect traditionally distinct communities (e.g., speech, vision, cognitive neurosciences, social psychology) to elaborate and discuss the future of human interaction with human-like virtual agents. We expect contributions from the fields of signal processing, speech and vision, machine learning and artificial intelligence, perceptual studies, and cognitive science and neuroscience. Topics will range from multimodal generative modeling of virtual agent behaviors, and speech-to-face and posture 2D and 3D animation, to original research topics including style, expressivity, and context-aware animation of virtual agents. Moreover, controllable real-time virtual agent models can serve as state-of-the-art experimental stimuli and confederates for designing novel, groundbreaking experiments to advance understanding of social cognition in humans. Finally, these virtual humans can be used to create virtual environments for medical purposes, including rehabilitation and training.

SCOPE

Topics of interest include but are not limited to:

+ Analysis of Multimodal Human-like Behavior
- Analyzing and understanding of human multimodal behavior (speech, gesture, face)
- Creating datasets for the study and modeling of human multimodal behavior
- Coordination and synchronization of human multimodal behavior
- Analysis of style and expressivity in human multimodal behavior
- Cultural variability of social multimodal behavior

+ Modeling and Generation of Multimodal Human-like Behavior
- Multimodal generation of human-like behavior (speech, gesture, face)
- Face and gesture generation driven by text and speech
- Context-aware generation of multimodal human-like behavior
- Modeling of style and expressivity for the generation of multimodal behavior
- Modeling paralinguistic cues for multimodal behavior generation
- Few-shots or zero-shot transfer of style and expressivity
- Slightly-supervised adaptation of multimodal behavior to context

+ Psychology and Cognition of Multimodal Human-like Behavior
- Cognition of deep fakes and ultra-realistic digital manipulation of human-like behavior
- Social agents/robots as tools for capturing, measuring and understanding multimodal behavior (speech, gesture, face)
- Neuroscience and social cognition of real humans using virtual agents and physical robots

IMPORTANT DATES

Submission deadline: September 12, 2022
Notification of acceptance: October 15, 2022
Camera-ready deadline: October 31, 2022
Workshop: January 4 or 5, 2023

VENUE

The SIVA workshop is organized as a satellite workshop of the IEEE International Conference on Automatic Face and Gesture Recognition 2023. The workshop will be collocated with the FG 2023 and WACV 2023 conferences at the Waikoloa Beach Marriott Resort, Hawaii, USA.

ADDITIONAL INFORMATION AND SUBMISSION DETAILS

Submissions must be original and not published or submitted elsewhere. Short papers (3 pages excluding references) encourage submissions of early research in original emerging fields. Long papers (6 to 8 pages excluding references) promote the presentation of strongly original contributions, positional papers, or surveys. Manuscripts should be formatted according to the Word or LaTeX template provided on the workshop website. All submissions will be reviewed by 3 reviewers. The reviewing process will be single-blind. Authors will be asked to disclose possible conflicts of interest, such as cooperation in the previous two years; moreover, care will be taken to avoid reviewers from the same institution as the authors. Authors should submit their articles as a single PDF file via the submission website no later than September 12, 2022. Notification of acceptance will be sent by October 15, 2022, and the camera-ready version of the papers, revised according to the reviewers' comments, should be submitted by October 31, 2022. Accepted papers will be published in the proceedings of the FG 2023 conference. More information can be found on the SIVA website.

DIVERSITY, EQUALITY, AND INCLUSION

The workshop will be held in a hybrid format, both online and onsite. This format accommodates travel restrictions and COVID sanitary precautions, promotes inclusion in the research community (travel costs are high, and online presentation encourages contributions from geographical regions that would otherwise be excluded), and takes ecological issues (e.g., CO2 footprint) into account. The organizing committee is committed to equality, diversity, and inclusivity in its consideration of invited speakers, an effort that extends from the organizing committee and the invited speakers to the program committee.


ORGANIZING COMMITTEE
🌸 Nicolas Obin, STMS Lab (Ircam, CNRS, Sorbonne Université, ministère de la Culture)
🌸 Ryo Ishii, NTT Human Informatics Laboratories
🌸 Rachael E. Jack, University of Glasgow
🌸 Louis-Philippe Morency, Carnegie Mellon University
🌸 Catherine Pelachaud, CNRS - ISIR, Sorbonne Université
🌸 Ryo Ishii, NTT Human Informatics Laboratories
🌸 Rachael E. Jack, University of Glasgow
🌸 Louis-Philippe Morency, Carnegie Mellon University
🌸 Catherine Pelachaud, CNRS - ISIR, Sorbonne Université

Back  Top

3-3-13(2023-01-04) Workshop on Socially Interactive Human-like Virtual Agents (SIVA'23), Waikoloa, Hawaii
CALL FOR PAPERS: SIVA'23
Workshop on Socially Interactive Human-like Virtual Agents
From expressive and context-aware multimodal generation of digital humans to understanding the social cognition of real humans

Submission (to open July 22, 2022):  https://cmt3.research.microsoft.com/SIVA2023
SIVA'23 workshop: January 4 or 5, 2023, Waikoloa, Hawaii, https://www.stms-lab.fr/agenda/siva/detail/
FG 2023 conference: January 4-8, 2023, Waikoloa, Hawaii, https://fg2023.ieee-biometrics.org/

OVERVIEW

Driven by the rapid growth of virtual, augmented, and hybrid reality, together with spectacular advances in artificial intelligence, the ultra-realistic generation and animation of digital humans with human-like behaviors has become a major topic of interest. This complex endeavor requires modeling several elements of human behavior: the natural coordination of multimodal behaviors spanning text, speech, face, and body, and the contextualization of behavior in response to interlocutors of different cultures and motivations. The challenges are therefore twofold: generating and animating coherent multimodal behaviors, and modeling the expressivity and contextualization of the virtual agent with respect to human behavior, including understanding and modeling how virtual agents adapt their behavior to increase human engagement. The aim of this workshop is to connect traditionally distinct communities (e.g., speech, vision, cognitive neuroscience, social psychology) to elaborate and discuss the future of human interaction with human-like virtual agents. We expect contributions from the fields of signal processing, speech and vision, machine learning and artificial intelligence, perceptual studies, and cognitive science and neuroscience. Topics will range from multimodal generative modeling of virtual agent behaviors, and speech-to-face and posture 2D and 3D animation, to original research topics including style, expressivity, and context-aware animation of virtual agents. Moreover, controllable real-time virtual agent models can serve as state-of-the-art experimental stimuli and confederates in novel, groundbreaking experiments that advance our understanding of social cognition in humans. Finally, these virtual humans can be used to create virtual environments for medical purposes, including rehabilitation and training.

SCOPE

Topics of interest include but are not limited to:

+ Analysis of Multimodal Human-like Behavior
- Analysis and understanding of human multimodal behavior (speech, gesture, face)
- Creating datasets for the study and modeling of human multimodal behavior
- Coordination and synchronization of human multimodal behavior
- Analysis of style and expressivity in human multimodal behavior
- Cultural variability of social multimodal behavior

+ Modeling and Generation of Multimodal Human-like Behavior
- Multimodal generation of human-like behavior (speech, gesture, face)
- Face and gesture generation driven by text and speech
- Context-aware generation of multimodal human-like behavior
- Modeling of style and expressivity for the generation of multimodal behavior
- Modeling paralinguistic cues for multimodal behavior generation
- Few-shot or zero-shot transfer of style and expressivity
- Weakly supervised adaptation of multimodal behavior to context

+ Psychology and Cognition of Multimodal Human-like Behavior
- Cognition of deep fakes and ultra-realistic digital manipulation of human-like behavior
- Social agents/robots as tools for capturing, measuring and understanding multimodal behavior (speech, gesture, face)
- Neuroscience and social cognition of real humans using virtual agents and physical robots

IMPORTANT DATES

Submission deadline: September 12, 2022
Notification of acceptance: October 15, 2022
Camera-ready deadline: October 31, 2022
Workshop: January 4 or 5, 2023

VENUE

The SIVA workshop is organized as a satellite workshop of the IEEE International Conference on Automatic Face and Gesture Recognition 2023. The workshop will be collocated with the FG 2023 and WACV 2023 conferences at the Waikoloa Beach Marriott Resort, Hawaii, USA.

ADDITIONAL INFORMATION AND SUBMISSION DETAILS

Submissions must be original and not published or submitted elsewhere.  Short papers (3 pages excluding references) are encouraged for early-stage research in emerging fields. Long papers (6 to 8 pages excluding references) are intended for strongly original contributions, position papers, or surveys. Manuscripts should be formatted according to the Word or LaTeX template provided on the workshop website.  All submissions will be reviewed by 3 reviewers. The reviewing process will be single-blind. Authors will be asked to disclose possible conflicts of interest, such as collaboration within the previous two years, and care will be taken to avoid assigning reviewers from the same institution as the authors.  Authors should submit their articles as a single PDF file on the submission website no later than September 12, 2022. Notification of acceptance will be sent by October 15, 2022, and the camera-ready version of the papers, revised according to the reviewers' comments, should be submitted by October 31, 2022. Accepted papers will be published in the proceedings of the FG 2023 conference. More information can be found on the SIVA website.

DIVERSITY, EQUALITY, AND INCLUSION

The workshop will be held in a hybrid format, both online and onsite. This format is intended to accommodate travel restrictions and COVID sanitary precautions, to promote inclusion in the research community (travel costs are high, and online presentation will encourage contributions from geographical regions that would otherwise be excluded), and to limit the ecological impact of the event (e.g., CO2 footprint). The organizing committee is committed to equality, diversity, and inclusivity in its selection of invited speakers; this effort extends from the organizing committee and invited speakers to the program committee.


ORGANIZING COMMITTEE
🌸 Nicolas Obin, STMS Lab (Ircam, CNRS, Sorbonne Université, ministère de la Culture)
🌸 Ryo Ishii, NTT Human Informatics Laboratories
🌸 Rachael E. Jack, University of Glasgow
🌸 Louis-Philippe Morency, Carnegie Mellon University
🌸 Catherine Pelachaud, CNRS - ISIR, Sorbonne Université

3-3-14(2023-01-16) Advanced Language Processing School (ALPS), Grenoble, France
SECOND CALL FOR PARTICIPATION
Advanced Language Processing School (ALPS)
January 16-20, 2023
Virtual Event
 
We are opening the registration for the third Advanced Language Processing School (ALPS), co-organized by University Grenoble Alpes and Naver Labs Europe.
 
*Target Audience*
 
This is a winter school covering advanced topics in NLP, primarily targeting doctoral students and advanced (research) master's students. A few slots will also be reserved for academics and for people working in research-heavy positions in industry.
 
*Characteristics*
 
Advanced lectures by first-class researchers. A (virtual) atmosphere that fosters connections and interaction. A poster session for attendees to present their work, gather feedback, and brainstorm future work ideas.
 
*Speakers*
 
The current list of speakers is: Michael Auli (Meta, USA); Kyunghyun Cho (New York University, USA); Yejin Choi (University of Washington and Allen Institute for AI, USA); Dirk Hovy (Bocconi University, Italy); Colin Raffel (University of North Carolina at Chapel Hill and Hugging Face, USA); Lucia Specia (Imperial College, UK); François Yvon (LISN/CNRS, France).
 
*Application*
 
To apply to this winter school, please follow the instructions at http://alps.imag.fr/index.php/application/ . The application deadline is September 30th, and notifications of acceptance will be sent on October 3rd.
 
*Contact*
 
E-mail: alps@univ-grenoble-alpes.fr

3-3-15(2023-06-04) CfP ICASSP 2023, Rhodes Island, Greece

 

 

 

Announcing the ICASSP 2023 Call for Papers! 

The Call for Papers for ICASSP 2023 is now open! The 48th IEEE International Conference on Acoustics, Speech, & Signal Processing (ICASSP) will be held from 4-9 June 2023 in Rhodes Island, Greece. 
 
The flagship conference of the IEEE Signal Processing Society will offer a comprehensive technical program presenting all the latest developments in research and technology for signal processing and its applications. Featuring world-class oral and poster sessions, keynotes, plenaries and perspective talks, exhibitions, demonstrations, tutorials, short courses, and satellite workshops, it is expected to attract leading researchers and global industry figures, providing a great networking opportunity. Moreover, exceptional papers and contributors will be selected and recognized by ICASSP.

Technical Scope

 

We invite submissions of original unpublished technical papers on topics including but not limited to:

  • Applied Signal Processing Systems
  • Audio & Acoustic Signal Processing
  • Biomedical Imaging & Signal Processing
  • Compressive Sensing, Sparse Modeling
  • Computational Imaging
  • Computer Vision 
  • Deep Learning/Machine Learning for Signal Processing 
  • Image, Video & Multidimensional Signal Processing 
  • Industrial Signal Processing 
  • Information Forensics & Security 
  • Internet of Things
  • Multimedia Signal Processing
  • Quantum Signal Processing
  • Remote Sensing & Signal Processing
  • Sensor Array & Multichannel Signal Processing
  • Signal Processing for Big Data
  • Signal Processing for Communication
  • Signal Processing for Cyber Security
  • Signal Processing for Education
  • Signal Processing for Robotics
  • Signal Processing Over Graphs
  • Signal Processing Theory & Methods 
  • Speech & Language Processing

SP Society Journal Paper Presentations

Authors of papers published or accepted in IEEE SPS journals may present their work at ICASSP 2023 in appropriate tracks. These papers will neither be reviewed nor included in the proceedings. In addition, the IEEE Open Journal of Signal Processing (OJSP) will provide a special track for longer submissions with the same processing timeline as ICASSP. Accepted papers will be published in OJSP and presented at the conference but will not be included in the conference proceedings.

 

Open Preview

Conference proceedings will be available in IEEE Xplore, free of charge, to all customers, 30 days prior to the conference start date, through the conference end date.

 

Important Dates

  • Paper Submission Deadline: 19 October 2022
  • Paper Acceptance Notification: 8 February 2023 
  • SPS Journal Papers/Letters Deadline: 8 February 2023
  • Camera Ready Paper Deadline: 6 March 2023 
  • Author Registration Deadline: 20 March 2023 
  • Open Preview Starts: 5 May 2023

3-3-16(2023-06-04) CfSatellite Workshops ICASSP 2023, Rhodes Island, Greece

 

 Call for Satellite Workshops at ICASSP 2023

The organizing committee of ICASSP 2023 invites proposals for Satellite Workshops, aiming to inaugurate this tradition with the goals of enriching the conference program, attracting a wider audience, and enhancing inclusivity for students and professionals.
 
The ICASSP Satellite Workshops will be half-day or full-day events and will take place the day before or after the main conference technical program at the conference venue. The workshops may include a mix of regular papers, invited presentations, keynotes, and panels, encouraging the participation of attendees in active discussions.
 
Submit your proposals by 9 November 2022. 

Workshop Logistics

 

Organizers of ICASSP 2023 Satellite Workshops will be responsible for the scientific planning and promotion of their workshop, including setting up its external website (this will be linked from the main ICASSP 2023 site but not hosted there), running the paper reviewing process, handling all communication with submitting authors, creating and announcing the event schedule, abiding by the Important Dates listed below, and communicating seamlessly with the Workshop Chairs.

 

Please note that specifically for workshops that will appear at IEEE Xplore, the paper submission and reviewing process will be conducted through the ICASSP 2023 paper management system (Microsoft CMT).

 

The ICASSP 2023 organizers will handle workshop registration, allocation of facilities, and distribution of the workshop papers in electronic format. Workshop attendance will be free-of-charge for the main conference registrants, while a reduced registration fee will be charged to workshop-only attendees.

Important Dates

  • Workshop Proposal Submission Deadline: 9 November 2022

  • Workshop Proposal Acceptance Notification: 23 November 2022

  • Workshop Website Online: 7 December 2022

  • Workshop Paper Submission Deadline: 15 February 2023

  • Workshop Paper Acceptance Notification: 14 April 2023

  • Workshop Camera Ready Paper Deadline: 28 April 2023


3-3-17(2023-06-12)) 13th International Conference on Multimedia Retrieval, Thessaloniki, Greece

ICMR2023 – ACM International Conference on Multimedia Retrieval


3-3-18(2023-06-15) JPC (Journées de Phonétique Clinique) 2023, Toulouse, France

JPC (Journées de Phonétique Clinique) 2023

Toulouse, June 15-17, 2023 - Save the date!

Call for papers coming soon!
 

3-3-19(2023-07-15) MLDM 2023 : 18th International Conference on Machine Learning and Data Mining, New York,NY, USA

MLDM 2023 : 18th International Conference on Machine Learning and Data Mining
http://www.mldm.de
 
When    Jul 16, 2023 - Jul 21, 2023
Where    New York, USA
Submission Deadline    Jan 15, 2023
Notification Due    Mar 18, 2023
Final Version Due    Apr 5, 2023
Categories:    machine learning   data mining   pattern recognition   classification
 
Call For Papers
MLDM 2023
18th International Conference on Machine Learning and Data Mining
July 15 - 19, 2023, New York, USA

The Aim of the Conference
The aim of the conference is to bring together researchers from all over the world who work on machine learning and data mining, in order to discuss the current state of research and to direct further developments. Basic research papers as well as application papers are welcome.

Chair
Petra Perner Institute of Computer Vision and Applied Computer Sciences IBaI, Germany

Program Committee
Piotr Artiemjew University of Warmia and Mazury in Olsztyn, Poland
Sung-Hyuk Cha Pace University, USA
Ming-Ching Chang University at Albany, USA
Mark J. Embrechts Rensselaer Polytechnic Institute and CardioMag Imaging, Inc., USA
Robert Haralick City University of New York, USA
Adam Krzyzak Concordia University, Canada
Chengjun Liu New Jersey Institute of Technology, USA
Krzysztof Pancerz University of Rzeszów, Poland
Dan Simovici University of Massachusetts Boston, USA
Agnieszka Wosiak Lodz University of Technology, Poland
more to be announced...


Topics of the conference

Paper submissions should be related but not limited to any of the following topics:

Association Rules
Audio Mining
Automatic Semantic Annotation of Media Content
Bayesian Models and Methods
Capability Indices
Case-Based Reasoning and Associative Memory
Case-Based Reasoning and Learning
Classification & Prediction
Classification and Interpretation of Images, Text, Video
Classification and Model Estimation
Clustering
Cognition and Computer Vision
Conceptional Learning
Conceptional Learning and Clustering
Content-Based Image Retrieval
Control Charts
Decision Trees
Design of Experiment
Desirabilities
Deviation and Novelty Detection
Feature Grouping, Discretization, Selection and Transformation
Feature Learning
Frequent Pattern Mining
Information about ICPhS 2023 is available at https://www.icphs2023.org/, where it is also possible to register for email notifications concerning the congress.

 

Contact: icphs2023@guarant.cz

 





© Copyright 2024 - ISCA International Speech Communication Association - All rights reserved.
