ISCApad #291
Thursday, September 08, 2022, by Chris Wellekens
3-1-1 | (2022-09-18) Call for Mentors for One-to-one Mentoring at Interspeech 2022
3-1-2 | (2022-09-18) INTERSPEECH 2022: HUMAN AND HUMANIZING SPEECH TECHNOLOGY, Incheon Songdo Convensia, Korea
INTERSPEECH 2022 will be held in Incheon, Korea, on September 18-22, 2022. INTERSPEECH is the world's largest and most comprehensive conference on the science and technology of spoken language processing. INTERSPEECH conferences emphasize interdisciplinary approaches addressing all aspects of speech science and technology, ranging from basic theories to advanced applications.
The theme of INTERSPEECH 2022 is 'Human and Humanizing Speech Technology'. Throughout human history, our ability to formulate thoughts and complex feelings, and to communicate them, has evolved essentially by talking to the people around us. As machines become ever more present in our daily lives, however, so grows our need for natural interaction with them. With the rapid progress in AI for speech and language applications over 5G network services worldwide, we are at the onset of realizing the vision of a full ecosystem of natural speech and language technology applications. The conference theme expresses the commitment of the scientific and industrial community to continue the effort in speech science toward humanizing spoken language technology, so that its impact becomes a game changer that goes beyond the current state of the art and ultimately benefits society as a whole.
Paper submission opened on January 21, 2022. Submit your papers by March 21, 2022 to be considered!
Call for Papers
INTERSPEECH 2022 seeks original, novel and innovative papers covering all aspects of speech science and technology, ranging from basic theories to advanced applications. Papers addressing scientific topics related to the conference theme should be submitted electronically through the START V2 system.
The working language of the conference is English, and papers must be written in English. The paper length should be up to four pages in two columns; an additional page can be used for references only. Paper submissions must conform to the format defined in the paper preparation guidelines, as instructed in the author's kit on the conference webpage. Submissions may also be accompanied by additional files such as multimedia files. Authors must declare that their contributions are original and have not been submitted elsewhere for publication. Contributed papers will undergo a rigorous peer-review process. Each paper will be evaluated on the basis of the following criteria: novelty and originality, technical correctness, clarity of presentation, key strength, and quality of references.
Scientific Areas and Topics
INTERSPEECH 2022 embraces a broad range of science and technology in speech, language and communication, including, but not limited to, the following topics:
Technical Program Committee Chairs: Kyogu Lee | Seoul National University, Korea (kglee@snu.ac.kr)
Important Dates for INTERSPEECH 2022 Papers
Website: www.interspeech2022.org / E-mail: info@interspeech2022.org
3-1-3 | (2023-08-20) Interspeech 2023, Dublin, Ireland. ISCA has decided to hold INTERSPEECH 2023 in Dublin, Ireland (Aug. 20-24, 2023).
3-1-4 | (2024-07-02) 12th Speech Prosody Conference @Leiden, The Netherlands Dear Speech Prosody SIG Members,
Professor Barbosa and I are very pleased to announce that the 12th Speech Prosody Conference will take place in Leiden, the Netherlands, July 2-5, 2024, and will be organized by Professors Yiya Chen, Amalia Arvaniti, and Aoju Chen. (Of the 303 votes cast, 225 were for Leiden, 64 for Shanghai, and 14 indicated no preference.)
I would also like to remind everyone that nominations for SProSIG officers for 2022-2024 are still being accepted this week, using the form at http://sprosig.org/about.html, sent to Professor Keikichi Hirose. If you are considering nominating someone, including yourself, feel free to contact me or any current officer to discuss what is involved and what help is most needed.
Nigel Ward, SProSIG Chair Professor of Computer Science, University of Texas at El Paso CCSB 3.0408, +1-915-747-6827 nigel@utep.edu https://www.cs.utep.edu/nigel/
3-1-5 | (2024-09-01) Interspeech 2024, Jerusalem, Israel. The ISCA conference committee has decided that Interspeech 2024 will be held in Jerusalem, Israel, from September 1 to September 5.
3-1-6 | 9th Students Meet Experts at Interspeech 2022
Date: Wednesday, September 21st, from 16h to 17h KST (Korean time)
Location: Incheon, South Korea and online (tentative)
Panel of Experts: To be confirmed
After successful editions in Lyon (2013), Singapore (2014), San Francisco (2016), Stockholm (2017), Hyderabad (2018), Graz (2019), virtually in both Shanghai (2020) and Brno (2021), we are excited to announce that the Students Meet Experts event is now coming to Interspeech 2022 in Incheon, South Korea. We will have a panel discussion with experts from academia and industry in a hybrid format.
We encourage you to submit questions before the event. A selection of the submitted questions will be answered by the panel of experts. Please keep in mind that the experts and the audience come from different fields, so field-specific and technical questions are less likely to be presented to the panel. To submit your questions and/or register for this event, please fill in the registration form.
Registration and question form: https://forms.gle/yuQh4Zq1wxkAoRHb8
Contact: Thomas Rolland: sac@isca-speech.org
3-1-7 | ISCA INTERNATIONAL VIRTUAL SEMINARS
Now is the time of year when seminar programmes get fixed up. Please direct the attention of whoever organises your seminars to the ISCA INTERNATIONAL VIRTUAL SEMINARS scheme (introduction below). There is now a good choice of speakers: see https://www.isca-speech.org/iscaweb/index.php/distinguished-lecturers/online-seminars
ISCA INTERNATIONAL VIRTUAL SEMINARS
A seminar programme is an important part of the life of a research lab, especially for its research students, but it is difficult for scientists to travel to give talks at the moment. However, presentations may be given online and, paradoxically, it is thus possible for labs to engage international speakers whom they would not normally be able to afford.
Speakers may pre-record their talks if they wish, but they do not have to. It is up to the host lab to contact speakers and make the arrangements. Talks can be state-of-the-art or tutorials. If you make use of this scheme and arrange a seminar, please send brief details (lab, speaker, date) to education@isca-speech.org. If you wish to join the scheme as a speaker, all we need is a title, a short abstract, a one-paragraph bio and contact details. Please send them to education@isca-speech.org.
PS. The online seminar scheme is now up and running, with 7 speakers so far:
Jean-Luc Schwartz, Roger Moore, Martin Cooke, Sakriani Sakti, Thomas Hueber, John Hansen and Karen Livescu.
3-1-8 | Speech Prosody courses Dear Speech Prosody SIG Members,
3-2-1 | (2023-01-07) SLT-CODE Hackathon Announcement, Doha, Qatar
Have you ever asked yourself how your smartphone recognizes what you say and who you are?
Have you ever thought about how machines recognize different languages?
If that is your case, join us for a two-day speech and language technology hackathon. We will answer these questions and build fantastic systems with the guidance of top language and speech scientists in a collaborative environment.
The two-day speech and language technology hackathon will take place during the IEEE Spoken Language Technology (SLT) Workshop in Doha, Qatar, on January 7th and 8th, 2023. This year's Hackathon will be inspiring, momentous, and fun. The goal is to build a diverse community of people who want to explore and envision how machines understand the world's spoken languages.
During the Hackathon, you will be exposed to (but not limited to) speech and language toolkits like ESPnet, SpeechBrain, K2/Kaldi, Hugging Face, and TorchAudio, or commercial APIs like Amazon Lex, and you will get hands-on experience with this technology.
At the end of the Hackathon, every team will share their findings with the rest of the participants. Selected projects will have the opportunity to be presented at the SLT workshop.
The Hackathon will be at the Qatar Computing Research Institute (QCRI) in Doha, Qatar (GMT+3). In-person participation is preferred; however, remote participation is possible by joining a team with at least one person being local.
More information on how to apply, as well as important dates, is available on our website: https://slt2022.org/hackathon.php.
Interested? Apply here: https://forms.gle/a2droYbD4qset8ii9 The deadline for registration is September 30th, 2022.
If you have immediate questions, don't hesitate to contact our hackathon chairs directly at hackathon.slt2022@gmail.com.
3-2-2 | (2023-01-09) IEEE SLT 2022: Languages of the World, Doha, Qatar, 9th to 12th January 2023
CALL FOR PAPERS IS ALREADY OPEN
The 2022 IEEE Spoken Language Technology Workshop (SLT 2022) will be held on 9th - 12th January 2023 in Doha, Qatar. SLT 2022 will be the first speech conference to visit the Middle East and the first to be held in an Arabic-speaking nation. The SLT Workshop is a flagship event of the IEEE Speech and Language Processing Technical Committee. The workshop is held every two years and has a tradition of bringing together researchers from academia and industry in an intimate and collegial setting to discuss problems of common interest in automatic speech recognition and understanding.
More information: https://slt2022.org
We invite papers in all areas of spoken language processing, with emphasis placed on the following topics:
- Automatic speech recognition
- Conversational/multispeaker ASR
- Far-field speech processing
- Speaker and language recognition
- Spoken language understanding
- Spoken dialog systems
- Low resource/multilingual language processing
- Spoken document retrieval
- Speech-to-speech translation
- Text-to-speech systems
- Speech summarization
- New applications of automatic speech recognition
- Audio-visual/multimodal speech processing
- Emotion recognition from speech
SLT 2022 will also feature a Speech Hackathon to provide a hands-on element for students and young professionals.
Important dates
Paper submission: July 15, 2022
Paper update: July 21, 2022
Rebuttal period: August 26-31, 2022
Paper notification: Sept 30, 2022
Early registration period: Oct 2022
Speech Hackathon: Jan 8-9, 2023
Arabic Speech Meeting: Jan 13, 2023
General Chairs: Ahmed Ali (QCRI), Bhuvana Ramabhadran (Google)
Technical Chairs: Shinji Watanabe (Carnegie Mellon University), Mona Diab (Facebook), Sanjeev Khudanpur (Johns Hopkins University), Julia Hirschberg (Columbia University), Murat Saraclar (Bogazici University), Marc Delcroix (NTT Communication Science Laboratories)
Regional Publicity Chairs: Sebastian Möller (TU Berlin), Tomoki Toda (Nagoya University)
Finance Chairs: Jan Trmal (Johns Hopkins University), Juan Rafael Orozco Arroyave (UdeA, Colombia)
Sponsorship Chairs: Murat Akbacak (Apple), Eman Fituri (QCRI), Jimmy Kunzmann (Amazon)
SLTC Liaison: Kyu Jeong Han (ASAPP)
Publication Chairs: Alberto Abad Gareta (INESC-ID/IST), Erfan Loweimi (King's College London)
Invited Speaker Chairs: Andrew Rosenberg (Google), Nancy F. Chen (Institute for Infocomm Research (I2R))
Challenge & Demonstration Chairs: Imed Zitouni (Google), Jon Barker (University of Sheffield), Seokhwan Kim (Amazon), Peter Bell (University of Edinburgh)
Speech Hackathon Organizing Committee: Thomas Schaaf (3M | M*Modal), Gianni Di Caro (Carnegie Mellon University - Qatar), Shinji Watanabe - ESPnet (Carnegie Mellon University), Paola Garcia - Kaldi/K2 (Johns Hopkins University), Mirco Ravanelli - SpeechBrain (Université de Montréal), Alessandra Cervone (Amazon), Mus'ab Husaini (QCRI)
Advisory Board: Jim Glass (MIT), Kemal Oflazer (Carnegie Mellon University - Qatar), Helen Meng (The Chinese University of Hong Kong), Haizhou Li (National University of Singapore)
Local Arrangements Chairs: Shammur Chowdhury (QCRI), Houda Bouamor (Carnegie Mellon University - Qatar)
Student Coordinator: Berrak Sisman (Singapore University of Technology and Design)
3-3-1 | (2022-09-18) Call for tutorials Interspeech 2022, Incheon, Korea Call for papers: September 18 - 22, 2022
Incheon, South Korea ______________ Automatic speech recognition systems have dramatically improved over the past decade thanks to the advances brought by deep learning and the effort on large-scale data collection. For some groups of people, however, speech technology works less well, maybe because their speech patterns differ significantly from the standard dialect (e.g., because of regional accent), because of intra-group heterogeneity (e.g., speakers of regional African American dialects; second-language learners; and other demographic aspects such as age, gender, or race), or because the speech pattern of each individual in the group exhibits a large variability (e.g., people with severe disabilities). The goal of this special session is (1) to discuss these biases and propose methods for making speech technologies more useful to heterogeneous populations and (2) to increase academic and industry collaborations to reach these goals. Such methods include:
______________ Important Dates:
Paper submission deadline: March 21, 2022, 23:59 AoE
Paper update deadline: March 28, 2022, 23:59 AoE
Interspeech conference dates: September 18 to 22, 2022
______________ Author Guidelines: Papers have to be submitted following the same schedule and procedure as the main conference, and will undergo the same review process. Submit your papers here: www.softconf.com/m/interspeech2022 and select 'Submission Topic' 14.5 to include your work in this session.
______________ Organizers:
Laurent Besacier, Naver Labs Europe, France
Keith Burghardt, USC Information Sciences Institute, USA
Alice Coucke, Sonos Inc., France
Mark Allan Hasegawa-Johnson, University of Illinois, USA
Peng Liu, Amazon Alexa, USA
Anirudh Mani, Amazon Alexa, USA
Mahadeva Prasanna, IIT Dharwad, India
Priyankoo Sarmah, IIT Guwahati, India
Odette Scharenborg, Delft University of Technology, the Netherlands
Tao Zhang, Amazon Alexa, USA
--
Alice Coucke
Head of Machine Learning Research | Sonos Voice Experience
3-3-3 | (2022-09-18) CfP Special session on Trustworthy Speech Processing at Interspeech 22
We are organizing a special session on Trustworthy Speech Processing at Interspeech 22, inviting papers that explore topics from trustworthy machine learning (such as privacy, fairness, and bias mitigation) within the realm of speech processing. Could you please include this CFP in your next newsletter and forward it to any relevant lists?
Best, Organizing team: Anil Ramakrishna, Amazon Inc. Shrikanth Narayanan, University of Southern California Rahul Gupta, Amazon Inc. Isabel Trancoso, University of Lisbon Rita Singh, Carnegie Mellon University
====================================================================== Call for papers: Trustworthy Speech Processing (TSP) Special Session at Interspeech 22 trustworthyspeechprocessing.github.io September 18 - 22, 2022 Incheon, South Korea
Given the ubiquity of Machine Learning (ML) systems and their relevance in daily lives, it is important to ensure private and safe handling of data alongside equity in human experience. These considerations have gained considerable interest in recent times under the realm of Trustworthy ML. Speech processing in particular presents a unique set of challenges, given the rich information carried in linguistic and paralinguistic content including speaker trait, interaction and state characteristics. This special session on Trustworthy Speech Processing (TSP) was created to bring together new and experienced researchers working on trustworthy ML and speech processing.
We invite novel and relevant submissions from both academic and industrial research groups showcasing theoretical and empirical advancements in TSP. Topics of interest cover a variety of papers centered on speech processing, including (but not limited to):
* Differential privacy * Federated learning * Ethics in speech processing * Model interpretability * Quantifying & mitigating bias in speech processing * New datasets, frameworks and benchmarks for TSP * Discovery and defense against emerging privacy attacks * Trustworthy ML in applications of speech processing like ASR
====================================================================== Important Dates: Paper submission deadline: March 21, 2022, 23:59, Anywhere on Earth. Paper update deadline: March 28, 2022, 23:59, Anywhere on Earth. Author notification: June 13, 2022. Interspeech conference dates: September 18 to 22, 2022.
====================================================================== Author Guidelines: Submissions for TSP will follow the same schedule and procedure as the main conference. Submit your papers here: www.softconf.com/m/interspeech2022 (select option #14.13 as the submission topic).
3-3-4 | (2022-09-18) CfP Spoofing-Aware Speaker Verification Challenge, Incheon, Korea We are thrilled to announce the Spoofing-Aware Speaker Verification Challenge. While spoofing countermeasures, promoted within the sphere of the ASVspoof challenge series, can help to protect reliability in the face of spoofing, they have been developed as independent subsystems for a fixed ASV subsystem. Better performance can be expected when countermeasures and ASV subsystems are both optimised to operate in tandem.
The first Spoofing-Aware Speaker Verification (SASV) 2022 challenge aims to encourage the development of original solutions involving, but not limited to:
- back-end fusion of pre-trained automatic speaker verification and pre-trained audio spoofing countermeasure subsystems;
- integrated spoofing-aware automatic speaker verification systems that have the capacity to reject both non-target and spoofed trials.
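As a toy sketch of the first direction above, back-end fusion can be as simple as combining the two subsystem scores. The function names, weighting scheme, and threshold below are invented for illustration and are not part of the SASV 2022 protocol:

```python
# Sketch of back-end score fusion for spoofing-aware speaker verification:
# combine an ASV similarity score with a countermeasure (CM) bona fide score.
# All values and names here are illustrative, not from the SASV baseline.

def sasv_score(asv_score: float, cm_score: float, alpha: float = 0.5) -> float:
    """Weighted sum of ASV and CM scores (both assumed to lie in [0, 1])."""
    return alpha * asv_score + (1.0 - alpha) * cm_score

def accept(asv_score: float, cm_score: float, threshold: float = 0.6) -> bool:
    """Accept a trial only if the fused score clears the threshold,
    so both non-target and spoofed trials can be rejected."""
    return sasv_score(asv_score, cm_score) >= threshold

# A target bona fide trial scores high on both subsystems...
print(accept(0.9, 0.8))  # True: fused score 0.85
# ...while a spoofed trial may fool the ASV subsystem but not the CM.
print(accept(0.9, 0.1))  # False: fused score 0.50
```

In practice the fusion back-end is usually learned (e.g., a small classifier over the two scores) rather than a fixed weighted sum, but the tandem idea is the same.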
We warmly invite the submission of general contributions in this direction. The Interspeech 2022 Spoofing-Aware Automatic Speaker Verification special session also incorporates a challenge, SASV 2022. Participants are encouraged to evaluate their solutions using the SASV benchmarking framework, which comprises a common database, protocol, and evaluation metric. Further details and resources can be found on the SASV challenge website.
Schedule:
- January 19, 2022: Release of the evaluation plan
- March 10, 2022: Results submission
- September 18-22, 2022: SASV challenge special session at INTERSPEECH
To participate, please register your interest at https://forms.gle/htoVnog34kvs3as56
For further information, please contact us at sasv.challenge@gmail.com.
We are looking forward to hearing from you.
Kind regards,
The SASV Challenge 2022 Organisers
3-3-5 | (2022-09-23) 2nd Symposium on Security and Privacy in Speech Communication joint with 2nd Challenge Workshop (INTERSPEECH 2022 satellite event) CALL FOR PAPERS =========================================
=========================================
The second edition of the Symposium on Security & Privacy in Speech Communication (SPSC), this year combined with the 2nd VoicePrivacy Challenge workshop, focuses on speech and voice, through which we express ourselves. As speech communication can be used to command virtual assistants, to transport emotion, or to identify oneself, the symposium seeks to answer the question of how we can strengthen security and privacy for speech representation types in user-centric human/machine interaction. Interdisciplinary exchange is therefore in high demand, and the symposium aims to bring together researchers and practitioners across multiple disciplines, including signal processing, cryptography, security, human-computer interaction, law, and anthropology. The SPSC Symposium addresses interdisciplinary topics. For more details, see https://symposium2022.spsc-sig.org/home/_cfp/CFP_SPSC-Symposium-2022.pdf
=== Important dates
=== Topics of interest Topics regarding the technical perspective include:
Topics regarding the humanities’ view include:
We welcome contributions on related topics, as well as progress reports, project disseminations, theoretical discussions, and 'work in progress'. There is also a dedicated PhD track. In addition, participants from academia, industry, and public institutions, as well as interested students, are welcome to attend the conference without making a contribution of their own. All accepted submissions will appear in the conference proceedings published in the ISCA Archive. The workshop will take place mainly in person at Incheon National University (Korea), with additional support for participants who wish to join virtually.
=== Submission
Papers intended for the SPSC Symposium should be up to eight pages of text. The length should be chosen appropriately to present the topic to an interdisciplinary community. Paper submissions must conform to the format defined in the paper preparation guidelines and as detailed in the author's kit. Papers must be submitted via the online paper submission system. The working language of the conference is English, and papers must be written in English.
=== Reviews
At least three single-blind reviews will be provided, and we aim to obtain feedback from interdisciplinary experts for each submission. The review criteria applied to regular papers will be adapted for VoicePrivacy Challenge papers to be more in keeping with system descriptions and results.
3-3-6 | (2022-09-23) Voice Privacy Challenge, Incheon, South Korea VoicePrivacy 2022 Challenge
---------------------------------------------------------------------------------------------------------------
Dear colleagues, registration for the VoicePrivacy 2022 Challenge continues! The task is to develop a voice anonymization system for speech data which conceals the speaker's voice identity while protecting linguistic content, paralinguistic attributes, intelligibility and naturalness.
The VoicePrivacy 2022 Challenge Evaluation Plan: https://www.voiceprivacychallenge.org/vp2020/docs/VoicePrivacy_2022_Eval_Plan_v1.0.pdf
VoicePrivacy 2022 is the second edition of the challenge, which will culminate in a joint workshop held in Incheon, Korea, in conjunction with INTERSPEECH 2022 and in cooperation with the ISCA Symposium on Security and Privacy in Speech Communication.
Registration and subscription: see the 'Participate' page of the VoicePrivacy 2022 website.
3-3-7 | (2022-10-10) 5th International Workshop on Multimedia Content Analysis in Sports (MMSports'22) @ ACM Multimedia, Lisbon, Portugal Call for Papers ------------------- 5th International Workshop on Multimedia Content Analysis in Sports (MMSports'22) @ ACM Multimedia, October 10-14, 2022, Lisbon, Portugal
We'd like to invite you to submit your paper proposals for the 5th International Workshop on Multimedia Content Analysis in Sports to be held in Lisbon, Portugal together with ACM Multimedia 2022. The ambition of this workshop is to bring together researchers and practitioners from different disciplines to share ideas on current multimedia/multimodal content analysis research in sports. We welcome multimodal-based research contributions as well as best-practice contributions focusing on the following (and similar, but not limited to) topics:
- annotation and indexing in sports
- tracking people/athletes and objects in sports
- activity recognition, classification, and evaluation in sports
- event detection and indexing in sports
- performance assessment in sports
- injury analysis and prevention in sports
- data-driven analysis in sports
- graphical augmentation and visualization in sports
- automated training assistance in sports
- camera pose and motion tracking in sports
- brave new ideas / extraordinary multimodal solutions in sports
- personal virtual (home) trainers/coaches in sports
- datasets in sports
Submissions can vary in length from 4 to 8 pages, plus additional pages for references. There is no distinction between long and short papers; the authors may decide on the appropriate length of their paper themselves. All papers will undergo the same review process and review period.
Please refer to the workshop website for further information: http://mmsports.multimedia-computing.de/mmsports2022/index.html
IMPORTANT DATES Submission Due: July 4, 2022 Acceptance Notification: July 29, 2022 Camera Ready Submission: August 21, 2022 Workshop Date: TBA; either Oct 10 or Oct 14, 2022
Challenges
--------------
This year, MMSports proposes a competition where participants compete on state-of-the-art problems applied to real-world, sport-specific data. The competition consists of 4 individual challenges, each sponsored by Sportradar with a US$1,000.00 prize. Each challenge comes with a toolkit describing the task, the dataset and the metrics on which participants will be evaluated. The challenges are hosted on EvalAI, where participants submit the predictions of their models on an evaluation set for which labels are kept secret. Leaderboards will display the ranking for each challenge. More information can be found at http://mmsports.multimedia-computing.de/mmsports2022/challenge.html
ACM MMSports’22 Chairs: Thomas Moeslund, Rainer Lienhart and Hideo Saito
3-3-8 | (2022-10-12) French Cross-Domain Dialect Identification (FDI) task @VarDial2022, Gyeongju, South Korea We are organizing the French Cross-Domain Dialect Identification (FDI) task @VarDial2022.
Contact: raducu.ionescu@gmail.com
In the 2022 French Dialect Identification (FDI) shared task, participants have to train a model on news samples collected from one set of publication sources and evaluate it on news samples collected from a different set of publication sources. Not only are the sources different, but so are the topics. Participants therefore have to build a model for cross-domain 4-way classification by dialect, in which the model is required to discriminate between the French (FR), Swiss (CH), Belgian (BE) and Canadian (CA) dialects across news samples. The corpus is divided into training, validation and test sets such that the publication sources and topics are distinct across splits. The training set contains 358,787 samples, the development set comprises 18,002 samples, and another 36,733 samples are kept for the final evaluation.
Important Dates:
- Training set release: May 20, 2022
- Test set release: June 30, 2022
- Submissions due: July 6, 2022
Link: https://sites.google.com/view/vardial-2022/shared-tasks#h.mj5vivaubw8r
We invite you to participate!
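As a rough illustration of what a starting point for the task could look like (this is not an official baseline; the toy sentences and helper functions below are invented), a character n-gram profile classifier in the classic Cavnar-Trenkle spirit can be sketched in a few lines:

```python
# Toy cross-domain dialect-ID sketch: one aggregate character n-gram
# profile per dialect, nearest profile by cosine similarity.
# Real FDI data comes from the shared-task organizers; everything
# below is invented for illustration only.
from collections import Counter

def char_ngrams(text, n=3):
    """Counter of character n-grams; such features tend to be more
    robust to topic shift across domains than word features."""
    text = " " + text.lower() + " "
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(p, q):
    """Cosine similarity between two n-gram Counters."""
    num = sum(p[g] * q[g] for g in set(p) & set(q))
    den = (sum(v * v for v in p.values()) ** 0.5) * \
          (sum(v * v for v in q.values()) ** 0.5)
    return num / den if den else 0.0

def train(samples):
    """Build one aggregate n-gram profile per dialect label."""
    profiles = {}
    for text, label in samples:
        profiles.setdefault(label, Counter()).update(char_ngrams(text))
    return profiles

def predict(profiles, text):
    """Assign the label whose profile is most similar to the input."""
    grams = char_ngrams(text)
    return max(profiles, key=lambda label: cosine(profiles[label], grams))

# Invented toy training samples, one per dialect, just to show the flow.
toy_train = [
    ("Le gouvernement a annonce une reforme des retraites.", "FR"),
    ("Le conseil federal a vote le budget du canton de Vaud.", "CH"),
    ("La commune wallonne organise une brocante ce week-end.", "BE"),
    ("Le premier ministre quebecois a presente son plan a Montreal.", "CA"),
]
profiles = train(toy_train)
print(predict(profiles, "Le canton publie le budget federal."))
```

A competitive submission would of course use the released training split and a learned model; this sketch only shows the shape of the cross-domain classification problem.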
Have a nice day.
3-3-9 | (2022-10-17) CfP Poster papers, ISMAR 2022, Singapore
CALL FOR POSTER PAPERS
3-3-10 | (2022-10-19) CfP Scientific day on 'Access to Information' (Journée scientifique 'Accès à l'information'), IRISA, Rennes, France
As part of the GdR CNRS Traitement automatique des langues (GdR TAL), IRISA is organizing a scientific day on the theme of 'access to information' on October 19, 2022, in Rennes. The day will be built around several invited oral presentations, plus poster and demo presentations (see the call below).
Themes
The digitization of society has supposedly made access to information easier, whether for the general public (encyclopedic knowledge, news, etc.) or in specialized domains (e.g., scientific literature). However, faced with the avalanche of documents, websites, sources, and so on, many practical questions arise, each of which is a scientific challenge for the fields of information retrieval, natural language processing, and speech processing:
Call for posters and demos
As part of this day, we invite researchers working on these themes in an academic or industrial setting to present their work (demo or poster), even if already published, and to exchange with colleagues in the field. To do so, simply submit an abstract of one page maximum, and/or the poster if it already exists, and/or the article describing the work if already published, in French or English, at https://gdr-tal-rennes.sciencesconf.org .
Submission of abstracts/posters/articles: on a rolling basis, and no later than September 30, 2022. Notification to authors: one week after receipt of the proposal.
Invited speakers
There will be 4 invited presentations, to be announced soon...
Registration and venue
The day will take place in the conference center at IRISA - Centre Inria de l'université de Rennes, Campus de Beaulieu, Rennes. Registration (free but mandatory), program and information: https://gdr-tal-rennes.sciencesconf.org
Reminder: the GDR TAL funds one trip for one researcher or faculty member per GDR team. The request is to be made by the team leader to the GDR board. The list of eligible teams is on the GDR TAL website; see https://gdr-tal.ls2n.fr/reseau-des-doctorants/
In addition, young researchers coming to present their work may also request travel support from the organizers. Contact vincent.claveau@irisa.fr
3-3-11 | (2022-11-07) 24th ACM International Conference on Multimodal Interaction (ICMI 2022), Bengaluru (Bangalore), India
CALL FOR LATE-BREAKING RESULTS
We invite you to submit your papers to the late-breaking results track of the 24th ACM International Conference on Multimodal Interaction (ICMI 2022), located in Bengaluru (Bangalore), India, November 7-11th, 2022.
Based on the success of the LBR track at past ICMI conferences (2018-2021), the ACM International Conference on Multimodal Interaction (ICMI) 2022 continues to solicit submissions for the special venue titled Late-Breaking Results (LBR). The goal of the LBR venue is to provide a way for researchers to share emerging results at the conference. Accepted submissions will be presented in a poster session at the conference, and the extended abstracts will be published in the new Adjunct Proceedings (Companion Volume) of the main ICMI Proceedings. Like similar venues at other conferences, the LBR venue is intended to allow sharing of ideas, getting formative feedback on early-stage work, and furthering collaborations among colleagues.
Late-Breaking Results (LBR) submissions represent work such as preliminary results; provoking and current topics; novel experiences or interactions that may not have been fully validated yet; cutting-edge or emerging work still in exploratory stages; smaller-scale studies; or, in general, work that has not yet reached the level of maturity expected of full-length main-track papers. However, LBR papers are still expected to bring a contribution to the ICMI community, commensurate with the preliminary, short, and quasi-informal nature of this track.
Accepted LBR papers will be presented as posters during the conference. This provides an opportunity for researchers to receive feedback on early-stage work, explore potential collaborations, and otherwise engage in exciting thought-provoking discussions about their work in an informal setting that is significantly less constrained than a paper presentation. The LBR (posters) track also offers those new to the ICMI community a chance to share their preliminary research as they become familiar with this field.
Late-Breaking Results papers appear in the Adjunct Proceedings (Companion Volume) of the ICMI Proceedings. Copyright is retained by the authors, and the material from these papers can be used as the basis for future publications as long as there are “significant” revisions from the original, as per the ACM and ACM SIGCHI policies.
Extended Abstract: An anonymized paper of up to seven pages in a single-column format, not including references. The instructions and templates are available at the following link: https://www.acm.org/publications/taps/word-template-workflow. The paper should be submitted in PDF format through the ICMI submission system in the “Late-Breaking Results” track. Due to the tight publication timeline, authors are advised to submit a nearly finalized paper that is as close to camera-ready as possible, as there will be a very short timeframe for preparing the final camera-ready version and no deadline extensions can be granted.
Anonymization: Authors are instructed not to include author information in their submission. To help reviewers judge the relation of the LBR to prior work, authors should not remove or anonymize references to their own prior work. Instead, we recommend that authors obscure the connection by referring to their own prior work in the third person during submission. If desired, such references can be changed to first person after acceptance.
LBRs will be evaluated on the expectation that they present work still in progress, rather than complete work that has been under-described in order to fit the LBR format. The LBR track will undergo an external peer-review process. Submissions will be evaluated on a number of factors, including (1) the relevance of the work to ICMI, (2) the quality of the submission, and (3) the degree to which it “fits” the LBR track (e.g., in-progress results). More particularly, the quality of the submission will be evaluated based on the potential contributions of the research to the field of multimodal interfaces and its impact on the field and beyond. Authors should clearly justify how the proposed ideas can bring measurable breakthroughs compared to the state of the art of the field.
Registration and attendance rules for authors of LBR papers will be similar to those for regular papers. Further information will be posted on the main page of the website.
For more information and updates on the ICMI 2022 Late-Breaking Results (LBR), visit the LBR page of the main conference website: https://icmi.acm.org/2022/index.php?id=cflbr.
For further questions, contact the LBR co-chairs (Fabien Ringeval and Nikita Soni) at icmi2022-latebreaking-chairs@acm.org
| |||||||||
3-3-12 | (2022-11-07) Doctoral Consortium at ICMI - Call for Contributions Doctoral Consortium - Call for Contributions The goal of the ICMI Doctoral Consortium (DC) is to provide PhD students with an opportunity to present their work to a group of mentors and peers from a diverse set of academic and industrial institutions, to receive feedback on their doctoral research plan and progress, and to build a cohort of young researchers interested in designing and developing multimodal interfaces and interaction. We invite students from all PhD-granting institutions who are in the process of forming or carrying out a plan for their PhD research in the area of designing and developing multimodal interfaces. Who should apply? While we encourage applications from students at any stage of doctoral training, the consortium will most benefit students who are in the process of forming or developing their doctoral research. These students will have passed their qualifiers or completed the majority of their coursework, will be planning or developing their dissertation research, and will not be very close to completing it. Students from any PhD-granting institution whose research falls within designing and developing multimodal interfaces and interaction are encouraged to apply. Why should you attend? The DC provides an opportunity to build a social network that includes the cohort of DC students, senior students, recent graduates, and senior mentors. Not only is this an opportunity to get feedback on research directions, it is also a chance to learn more about the process and to understand what comes next. We aim to connect you with a mentor who will give specific feedback on your research. We specifically aim to create an informal setting where students feel supported in their professional development.
Submission Guidelines Graduate students pursuing a PhD degree in a field related to designing multimodal interfaces should submit the following materials:
All materials should be prepared in a single PDF format and submitted through the ICMI submission system. Important Dates
Review Process The Doctoral Consortium will follow a review process in which submissions will be evaluated on a number of factors, including (1) the quality of the submission, (2) the expected benefits of the consortium for the student's PhD research, and (3) the student's contribution to the diversity of topics, backgrounds, and institutions, in order of importance. More particularly, the quality of the submission will be evaluated based on the potential contributions of the research to the field of multimodal interfaces and its impact on the field and beyond. Finally, we hope to achieve a diversity of research topics, disciplinary backgrounds, methodological approaches, and home institutions in this year's Doctoral Consortium cohort. To ensure a diverse sample, we do not expect more than two students to be invited from each institution. Women and other underrepresented groups are especially encouraged to apply. Attendance All authors of accepted submissions are expected to attend the Doctoral Consortium and the main conference poster session. The attendees will present their work as a short talk or as a poster at the conference poster session. A detailed program for the Consortium and the participation guidelines will be available after the camera-ready deadline. Process
Questions? For more information and updates on the ICMI 2022 Doctoral Consortium, visit the Doctoral Consortium page of the main conference website (https://icmi.acm.org/2022/doctoral-consortium/) For further questions, contact the Doctoral Consortium co-chairs:
| |||||||||
3-3-13 | (2022-11-07) International Workshop on “Voice Assistant Systems in Team Interactions ‒ Implications, Best Practice, Applications, and Future Perspectives” VASTI 2022 @ICMI 2022 International Workshop on “Voice Assistant Systems in Team Interactions ‒ Implications, Best Practice, Applications, and Future Perspectives”
Important dates:
| |||||||||
3-3-14 | (2022-11-07) Late-breaking results @24th ACM International Conference on Multimodal Interaction (ICMI 2022), Bengaluru, India
| |||||||||
3-3-15 | (2022-11-14) IberSPEECH 2022, Grenada, Spain
| |||||||||
3-3-16 | (2022-11-14) CfP SPECOM 2022, Gurugram, India (updated) The conference has been relocated to India.
********************************************************************
SPECOM-2022 – FINAL CALL FOR PAPERS
********************************************************************
24th International Conference on Speech and Computer (SPECOM-2022)
November 14-16, 2022, KIIT Campus, Gurugram, India
Web: www.specom.co.in
ORGANIZER
The conference is organized by KIIT College of Engineering as a hybrid event both in Gurugram/New Delhi, India and online.
CONFERENCE TOPICS
SPECOM attracts researchers, linguists and engineers working in the following areas of speech science, speech technology, natural language processing, and human-computer interaction:
Affective computing
Audio-visual speech processing
Corpus linguistics
Computational paralinguistics
Deep learning for audio processing
Feature extraction
Forensic speech investigations
Human-machine interaction
Language identification
Multichannel signal processing
Multimedia processing
Multimodal analysis and synthesis
Sign language processing
Speaker recognition
Speech and language resources
Speech analytics and audio mining
Speech and voice disorders
Speech-based applications
Speech driving systems in robotics
Speech enhancement
Speech perception
Speech recognition and understanding
Speech synthesis
Speech translation systems
Spoken dialogue systems
Spoken language processing
Text mining and sentiment analysis
Virtual and augmented reality
Voice assistants
OFFICIAL LANGUAGE
The official language of the event is English. However, papers on processing of languages other than English are strongly encouraged.
FORMAT OF THE CONFERENCE
The conference program will include presentations of invited talks, oral presentations, and poster/demonstration sessions.
SUBMISSION OF PAPERS
Authors are invited to submit full papers of 8-14 pages formatted in the Springer LNCS style. Each paper will be reviewed by at least three independent reviewers (single-blind), and accepted papers will be presented either orally or as posters. Papers submitted to SPECOM must not be under review by any other conference or publication during the SPECOM review cycle, and must not be previously published or accepted for publication elsewhere. Authors are asked to submit their papers via the online submission system: https://easychair.org/conferences/?conf=specom2022
PROCEEDINGS
The SPECOM proceedings will be published by Springer as a book in the Lecture Notes in Artificial Intelligence (LNAI/LNCS) series, which is listed in all major international citation databases.
IMPORTANT DATES (extended!)
August 16, 2022 .................. Submission of full papers
September 13, 2022 ........... Notification of acceptance
September 20, 2022 ........... Camera-ready papers
September 27, 2022 ........... Early registration
November 14-16, 2022 .......Conference dates
GENERAL CHAIR/CO-CHAIR
Shyam S Agrawal - KIIT, Gurugram
Amita Dev - IGDTUW, Delhi
TECHNICAL CHAIR/CO-CHAIRS
S.R. Mahadeva Prasanna - IIT Dharwad
Alexey Karpov - SPC RAS
Rodmonga Potapova - MSLU
K. Samudravijaya - KL University
CONTACTS
All correspondence regarding the conference should be addressed to SPECOM 2022 Secretariat
E-mail: specomkiit@kiitworld.in
Web: www.specom.co.in
| |||||||||
3-3-17 | (2022-11-30) Third workshop on Resources for African Indigenous Language (RAIL), Potchefstroom, South Africa Final call for papers
| |||||||||
3-3-18 | (2022-12-13) CfP 18th Australasian International Conference on Speech Science and Technology (SST2022), Canberra, Australia SST2022: CALL FOR PAPERS The Australasian Speech Science and Technology Association is pleased to call for papers for the 18th Australasian International Conference on Speech Science and Technology (SST2022). SST is an international interdisciplinary conference designed to foster collaboration among speech scientists, engineers, psycholinguists, audiologists, linguists, speech/language pathologists and industrial partners.
Location: Canberra, Australia (remote participation options will also be available)
Dates: 13-16 December 2022
Host Institution: Australian National University
Deadline for tutorial and special session proposals: 8 April 2022
Deadline for submissions: 17 June 2022
Notification of acceptance: 31 August 2022
Deadline for upload of revised submissions: 16 September 2022
Website: www.sst2022.com
Submissions are invited in all areas of speech science and technology, including:
Acoustic phonetics
Analysis of paralinguistics in speech and language
Applications of speech science and technology
Audiology
Computer assisted language learning
Corpus management and speech tools
First language acquisition
Forensic phonetics
Hearing and hearing impairment
Languages of Australia and Asia-Pacific (phonetics/phonology)
Low-resource languages
Pedagogical technologies for speech
Second language acquisition
Sociophonetics
Speech signal processing, analysis, modelling and enhancement
Speech pathology
Speech perception
Speech production
Speech prosody, emotional speech, voice quality
Speech synthesis and speech recognition
Spoken language processing, translation, information retrieval and summarization
Speaker and language recognition
Spoken dialog systems and analysis of conversation
Voice mechanisms, source-filter interactions
We are inviting two categories of submission: 4-page papers (for oral or poster presentation, and publication in the proceedings), and 1-page detailed abstracts (for poster presentation only). Please follow the author instructions in preparing your submission. We also invite proposals for tutorials, as 3-hour intensive instructional sessions to be held on the first day of the conference. In addition, we welcome proposals for special sessions, as thematic groupings of papers exploring specific topics or challenges. Interdisciplinary special sessions are particularly encouraged. For any queries, please contact sst2022conf@gmail.com.
| |||||||||
3-3-19 | (2023-01-04) SIVA workshop @ Waikoloa Beach Marriott Resort, Hawaii, USA. CALL FOR PAPERS: SIVA'23
| |||||||||
3-3-20 | (2023-01-04) Workshop on Socially Interactive Human-like Virtual Agents (SIVA'23), Waikoloa, Hawaii CALL FOR PAPERS: SIVA'23 Workshop on Socially Interactive Human-like Virtual Agents From expressive and context-aware multimodal generation of digital humans to understanding the social cognition of real humans Submission (to be opened July 22, 2022): https://cmt3.research.microsoft.com/SIVA2023 SIVA'23 workshop: January 4 or 5, 2023, Waikoloa, Hawaii, https://www.stms-lab.fr/agenda/siva/detail/ FG 2023 conference: January 4-8, 2023, Waikoloa, Hawaii, https://fg2023.ieee-biometrics.org/ OVERVIEW Due to the rapid growth of virtual, augmented, and hybrid reality, together with spectacular advances in artificial intelligence, the ultra-realistic generation and animation of digital humans with human-like behaviors is becoming a massive topic of interest. This complex endeavor requires modeling several elements of human behavior: the natural coordination of multimodal behaviors including text, speech, face, and body, and the contextualization of behavior in response to interlocutors of different cultures and motivations. Thus, the challenges in this topic are twofold: the generation and animation of coherent multimodal behaviors, and the modeling of the expressivity and contextualization of the virtual agent with respect to human behavior, as well as understanding and modeling virtual agent behavior adaptation to increase human engagement. The aim of this workshop is to connect traditionally distinct communities (e.g., speech, vision, cognitive neurosciences, social psychology) to elaborate and discuss the future of human interaction with human-like virtual agents. We expect contributions from the fields of signal processing, speech and vision, machine learning and artificial intelligence, perceptual studies, and cognitive science and neuroscience.
Topics will range from multimodal generative modeling of virtual agent behaviors, and speech-to-face and posture 2D and 3D animation, to original research topics including style, expressivity, and context-aware animation of virtual agents. Moreover, controllable real-time virtual agent models can serve as state-of-the-art experimental stimuli and confederates for designing novel, groundbreaking experiments to advance the understanding of social cognition in humans. Finally, these virtual humans can be used to create virtual environments for medical purposes, including rehabilitation and training. SCOPE Topics of interest include but are not limited to:
+ Analysis of Multimodal Human-like Behavior
- Analyzing and understanding human multimodal behavior (speech, gesture, face)
- Creating datasets for the study and modeling of human multimodal behavior
- Coordination and synchronization of human multimodal behavior
- Analysis of style and expressivity in human multimodal behavior
- Cultural variability of social multimodal behavior
+ Modeling and Generation of Multimodal Human-like Behavior
- Multimodal generation of human-like behavior (speech, gesture, face)
- Face and gesture generation driven by text and speech
- Context-aware generation of multimodal human-like behavior
- Modeling of style and expressivity for the generation of multimodal behavior
- Modeling paralinguistic cues for multimodal behavior generation
- Few-shot or zero-shot transfer of style and expressivity
- Slightly-supervised adaptation of multimodal behavior to context
+ Psychology and Cognition of Multimodal Human-like Behavior
- Cognition of deep fakes and ultra-realistic digital manipulation of human-like behavior
- Social agents/robots as tools for capturing, measuring and understanding multimodal behavior (speech, gesture, face)
- Neuroscience and social cognition of real humans using virtual agents and physical robots
IMPORTANT DATES Submission deadline: September 12, 2022. Notification of acceptance: October 15, 2022. Camera-ready deadline: October 31, 2022. Workshop: January 4 or 5, 2023. VENUE The SIVA workshop is organized as a satellite workshop of the IEEE International Conference on Automatic Face and Gesture Recognition 2023. The workshop will be collocated with the FG 2023 and WACV 2023 conferences at the Waikoloa Beach Marriott Resort, Hawaii, USA. ADDITIONAL INFORMATION AND SUBMISSION DETAILS Submissions must be original and not published or submitted elsewhere. Short papers of 3 pages, excluding references, encourage submissions of early research in original emerging fields. Long papers of 6 to 8 pages, excluding references, promote the presentation of strongly original contributions, position, or survey papers. The manuscript should be formatted according to the Word or LaTeX template provided on the workshop website. All submissions will be reviewed by 3 reviewers. The reviewing process will be single-blind. Authors will be asked to disclose possible conflicts of interest, such as cooperation in the previous two years. Moreover, care will be taken to avoid assigning reviewers from the same institution as the authors. Authors should submit their articles as a single PDF file via the submission website no later than September 12, 2022. Notification of acceptance will be sent by October 15, 2022, and the camera-ready version of the papers, revised according to the reviewers' comments, should be submitted by October 31, 2022. Accepted papers will be published in the proceedings of the FG 2023 conference. More information can be found on the SIVA website. DIVERSITY, EQUALITY, AND INCLUSION The format of this workshop will be hybrid, online and onsite.
This hybrid format supports scientific exchanges while satisfying travel restrictions and COVID sanitary precautions, promotes inclusion in the research community (travel costs are high, and online presentations will encourage research contributions from geographical regions that would normally be excluded), and takes ecological issues (e.g., CO2 footprint) into account. The organizing committee is committed to paying attention to equality, diversity, and inclusivity in the consideration of invited speakers. This effort extends from the organizing committee and the invited speakers to the program committee. ORGANIZING COMMITTEE 🌸 Nicolas Obin, STMS Lab (Ircam, CNRS, Sorbonne Université, ministère de la Culture) 🌸 Ryo Ishii, NTT Human Informatics Laboratories 🌸 Rachael E. Jack, University of Glasgow 🌸 Louis-Philippe Morency, Carnegie Mellon University 🌸 Catherine Pelachaud, CNRS - ISIR, Sorbonne Université
| |||||||||
3-3-21 | (2023-01-09) Advanced Language Processing School (ALPS) , Grenoble, France FIRST CALL FOR PARTICIPATION
We are opening the registration for the third Advanced Language Processing School (ALPS), co-organized by University Grenoble Alpes and Naver Labs Europe. *Target Audience* This is a winter school covering advanced topics in NLP, primarily targeting doctoral students and advanced (research) master's students. A few slots will also be reserved for academics and persons working in research-heavy positions in industry. *Characteristics* Advanced lectures by first-class researchers. A (virtual) atmosphere that fosters connections and interaction. A poster session for attendees to present their work, gather feedback, and brainstorm future work ideas. *Speakers* The current list of speakers is: Kyunghyun Cho (New York University, USA); Yejin Choi (University of Washington and Allen Institute for AI, USA); Dirk Hovy (Bocconi University, Italy); Colin Raffel (University of North Carolina at Chapel Hill, Hugging Face, USA); Lucia Specia (Imperial College, UK); François Yvon (LISN/CNRS, France). *Application* To apply to this winter school, please follow the instructions at http://alps.imag.fr/index.php/application/ . The deadline for applying is September 16th, and we will notify acceptance on October 3rd. *Contact* Website: http://alps.imag.fr/ E-mail: alps@univ-grenoble-alpes.fr
| |||||||||
3-3-22 | (2023-01-16) Advanced Language Processing School (ALPS), Grenoble, France
| |||||||||
3-3-23 | (2023-04-02) 45th European Conference on Information Retrieval , Dublin, Ireland
++ CALL FOR WORKSHOP PROPOSALS ++
**************************************************************************** 45th European Conference on Information Retrieval
April 2nd – April 6th, 2023 – Dublin, Ireland
Website: https://ecir2023.org/ **************************************************************************** ++ Important Dates ++ - Submission deadline: September 19th, 2022 - Acceptance Notification Date: October 9th, 2022 - Workshops day: April 2nd, 2023
++ Overview ++ ECIR 2023 workshops provide a platform for presenting novel ideas and research results in emerging areas of IR in a more focused and interactive way than the conference itself. Workshops can be either a half day (3.5 hours plus breaks) or a full day (7 hours plus breaks) and will be held onsite. At least one organizer is expected to attend the workshop.
++ List of Topics ++ ECIR 2023 encourages the submission of workshops on the theory, experimentation, and practice of retrieval, representation, management, and usage of textual, visual, audio, and multi-modal information, but proposals aligned with other topics of IR (namely those identified in the general call for papers) are highly welcome as well.
Relevant topics include, but are not limited to:
++ Submission Guidelines ++ Workshop proposals should contain the following information:
Workshop proposals should be prepared using the Springer proceedings templates found on the Springer webpage (https://www.springer.com/gp/computer-science/lncs/conference-proceedings-guidelines), with a maximum length of 8 pages. All proposals should be submitted electronically through the conference submission system (https://easychair.org/conferences/?conf=ecir23) and must be in English. Workshop proposals will be reviewed by the ECIR 2023 workshop committee based on the quality of the proposal, the topics covered, the relationship to ECIR, and the likelihood of attracting participants. Final decisions will be made by the ECIR workshop co-chairs.
++ Workshop Chairs ++ Ricardo Campos, Polytechnic Institute of Tomar and INESC TEC, Portugal Gianmaria Silvello, University of Padua, Italy
++ Contacts ++ For further information, please contact the ECIR 2023 Workshop chairs by email to ecir2023-workshop@easychair.org
| |||||||||
3-3-24 | (2023-06-04) CfP ICASSP 2023, Rhodes Island, Greece
| |||||||||
3-3-25 | (2023-06-12)) 13th International Conference on Multimedia Retrieval, Thessaloniki, Greece ICMR2023 – ACM International Conference on Multimedia Retrieval
| |||||||||
3-3-26 | (2023-07-15) MLDM 2023 : 18th International Conference on Machine Learning and Data Mining, New York, NY, USA MLDM 2023 : 18th International Conference on Machine Learning and Data Mining
Contact: icphs2023@guarant.cz
|