ISCA - International Speech Communication Association



ISCApad #312

Friday, June 07, 2024 by Chris Wellekens

3 Events
3-1 ISCA Events
3-1-1(2024-07-02) CfP 12th Speech Prosody 2024 Conference, Leiden, The Netherlands

Call for Papers: Speech Prosody 2024

Speech Prosody 2024 (SP2024) will be held in Leiden, The Netherlands (02–05 July 2024). The conference aims to showcase the facets of prosodic variation and their role in the production, comprehension, and acquisition of speech in order to obtain a better understanding of the structure and function of prosody.

The theme includes four subthemes:

  1. Prosody through the lifespan
  2. Typology and cross-linguistic variation
  3. Individual and social variation
  4. Contact-induced variation

The conference will include both thematic sessions, based on the above subthemes, and non-thematic sessions, as well as special sessions, workshops and tutorials.

We welcome submissions on any aspect of prosody in language. Contributions relating to the conference themes, and particularly, submissions by junior researchers, on under-studied languages, and/or with interdisciplinary research methods, are strongly encouraged.

Topics include, but are not limited to:

  • Phonology and phonetics of prosody
  • Prosody and its interfaces with morphology, syntax, and semantics
  • Prosody and pragmatics
  • Rhythm and timing
  • Tone and intonation
  • Interaction between segmental and suprasegmental features
  • Production and perception of prosody
  • Acquisition of first, second, and third language prosody
  • Prosody in neurodevelopmental disorders
  • Prosody and speech and language impairments
  • Assessment of prosody and measures to evaluate prosodic skills
  • Prosody in infant-directed speech, child-directed speech, and elderly speakers
  • Psycholinguistic, cognitive, and neural correlates of prosody
  • Cognitive processing and modelling of prosody
  • Prosody in language contact
  • Prosody of under-resourced languages and dialects
  • Audiovisual and multimodal prosody
  • Prosody of sign language
  • Prosody in language and music
  • Prosody in speaker characterization and recognition
  • Prosody in speech synthesis, recognition, and understanding
  • Forensic voice and language investigation
  • Prosody in computer language learning systems
  • Computational modelling and applications of prosody
 
 
 

The Speech Prosody 2024 conference (https://www.universiteitleiden.nl/sp2024) has announced the following workshops, tutorials, and special sessions:

 

Workshops

  • Prosodic features of language learners' fluency (Organizers: Jürgen Trouvain, Bernd Möbius, and Nivja de Jong)
  • Intonation at the crossroads (CROSSIN) (Organizers: Amalia Arvaniti, Riccardo Orrico, Jiseung Kim, Stella Gryllia, Na Hu, and Alanna Tibbs)
  • Prosody in animal and human non-verbal communication: Exploring commonalities and differences (Organizers: Aoju Chen, Marijke Achterberg, Laura Smorenburg, and Jill Thorson)

Tutorials

  • Dynamical Systems Analysis for Speech Prosody: Khalil Iskarous
  • Simultaneous analysis of contours and durations: how to link F0 and intensity contours to corresponding segment or syllable durations within one statistical model: Michelle Gubian
  • Subgroup detection in generalized mixed-effects models (GLMMs) and generalized additive models (GAMs): Marjolein Fokkema

Special Sessions

  • How segments influence prosody in the languages of the world (Organizers: Menghui Shi and Rasmus Puggaard-Rode)
  • Pitch processing in language and music across different populations: Toward an integrated account (Organizers: Xin Wang, Fang Liu, and Peter Pfordresher)
  • Advances in studies on prosodic planning (Organizers: Constantijn van der Burght and Candice Frances)
  • Research at the intersection of breathing and speech prosody (Organizers: Melissa Redford and Susanne Fuchs)
  • Prosody and Code-Switching (Organizers: Antje Muntendam, Yiya Chen, and M Carmen Parafita Couto)
  • Exploring the (dual) role of intonation for linguistic and socio-cognitive functions: Cross-talk between tonal and non-tonal languages (Organizers: Katharina Zahner-Ritter and Yiya Chen)
  • Exploring the potential of Information Theory and Large Language Models in prosodic research and speech technology (Organizers: Zofia Malisz and Sofoklis Kakouros)

Panel

  • Prosody in Tech (Rob Clark, Zack Hodari, TBD, moderated by Nigel Ward)

*********************************

 

Important dates:

Abstract submission deadline: 20/12/2023
Full paper submission deadline: 07/01/2024
Notification of acceptance (by email): 25/02/2024
Revised paper submission: 23/03/2024

 

Please note that the deadline of 20 December 2023 for 200-word abstract submission will not be extended. See the conference website for the submission guidelines and paper templates. 

 

EasyChair submission page: https://easychair.org/conferences/?conf=sp2024

 

We hope to see you at Speech Prosody 2024 in Leiden!

 

Laura Smorenburg, on behalf of Yiya, Aoju and Amalia

 

This mail was sent through the SProSIG mailing list, which is for announcements of interest to the speech prosody research community.

 

 


3-1-2(2024-08-31) Young Female* Researchers in Speech Workshop (YFRSW), Kos Island, Greece
Young Female* Researchers in Speech Workshop (YFRSW)
YFRSW is a workshop for female* Bachelor’s and Master’s students currently working in speech science and technology. The workshop aims to promote interest in research in our field among women* who have not yet committed to pursuing a PhD in speech science or technology, but who have already gained research experience at their universities through individual or group projects.
*The workshop is open for marginalized genders, including women, as well as non-binary and gender non-conforming people who are comfortable in a space that is centered on women’s experiences in the speech science and technology community. We aim to offer an inclusive and accessible program. If you are unsure if this workshop is for you, please don’t hesitate to reach out to us!
 
Location: Kos Island, Greece
Date: 31 August 2024
Submission deadline: 11 May 2024 (300 words)
Webpage: https://sites.google.com/view/yfrsw-2024
Contact: Iona Gessinger, Leda Sari, and Georgia Maniati (youngfemaleresearchersinspeech@gmail.com)

3-1-3(2024-09-01) Author kit for Interspeech 2024, Kos Island, Greece

Dear colleagues,

Further to previous announcements, we wish to draw your attention to the availability of the author kit which should be used for the preparation of all papers submitted to Interspeech 2024, being held at the Kos International Convention Centre, 1-5 September 2024, Kos Island, Greece.

More information about the paper submission process, including the author kit, can be found at the Interspeech 2024 website:

  https://interspeech2024.org/paper-submission/

All submissions should comply with the instructions and specifications contained therein, as well as with the ISCA Code of Ethics for Authors and the Submission Policy.  Authors are invited to pay particular attention to the author anonymity requirements and the anonymity period during which non-anonymised versions of submitted papers should NOT be made available online.

We look forward to seeing you later this year in Kos.

Kind regards,

Jean-François Bonastre
Luciana Ferrer
Reinhold Häb-Umbach
Interspeech 2024 TPC Chairs


3-1-4(2024-09-01) Cf Show&Tell for Interspeech 2024, Kos Island, Greece

Call for Show&Tell for Interspeech 2024

Important dates
Show&Tell proposals due: 22 April 2024
Notification of acceptance/rejection: 14 May 2024
Final paper and final video: 11 June 2024

The Show&Tell instructions are on the website now: https://interspeech2024.org/show-and-tell/

 


Show&Tell submissions are solicited for Interspeech 2024 https://interspeech2024.org/

 

Interspeech is the world's largest and most comprehensive conference on the science and technology of spoken language processing. An important addition to the regular and special sessions are the Show&Tell demonstrations, where participants have the opportunity to present engaging and interactive demonstrations to conference attendees. Contributions should highlight the scientific or technological innovations of a concept relevant to Interspeech and may relate to a regular paper. Demonstrations should be based on innovations and fundamental research in the areas of speech communication, speech production, perception, acquisition, or speech and language technology. The theme of Interspeech 2024 is Speech and Beyond. Alongside the traditional Interspeech topics, this theme broadens the scope to the following non-exhaustive list of topics: speech and health, animal voice recognition and understanding, speech for memory and heritage, speech communication across the ages, and human-machine interaction, including games, virtual and augmented reality, and robot audition.

Proposals, as well as any questions, should be submitted to the Show&Tell chairs, Eli Tzirkel (GM, Israel) and Ofer Schwartz (CEVA, Israel), at show-tell@interspeech2024.org

VENUE

Interspeech 2024 will take place on Kos Island, Greece, at the Kipriotis Hotels & Conference Center (KICC).

 

To stay informed, we invite potential participants to regularly visit the website www.interspeech2024.org and to contact the IS24 PCO, Ortra, at interspeech2024@ortra.com and/or the ISCA conference coordinators at conferences@isca-speech.org.


3-1-5(2024-09-01) Late information on Interspeech 2024, Kos Island, Greece.

Keynote Speakers
Join us at INTERSPEECH 2024 to hear from our distinguished keynote speakers: ISCA Medalist Prof. Isabel Trancoso, Dr. Barbara Tillmann, Prof. Dr. -Ing. Elmar Nöth, and Dr. Shoko Araki. In their thought-provoking sessions, they will explore the theme 'Speech and Beyond,' addressing everything from responsible speech processing to the complexities of conversational speech technology.

Updated Information on the IS24 Website
The INTERSPEECH 2024 website is constantly updated with the latest information for attendees. Ensure you review the available tutorials and special sessions/challenges and explore the satellite events that are now accessible online.
Also, be sure to check the IS24 accommodation options, which will be ready for booking once registration opens.

 

3-1-6(2025-08-17) Interspeech 2025, Rotterdam, The Netherlands

INTERSPEECH 2025
Rotterdam, The Netherlands, 17-22 August 2025
Chairs: Odette Scharenborg, Khiet Truong and Catha Oertel
26th INTERSPEECH event


3-1-7(2026) Interspeech 2026, Australia

The Australasian Speech Science and Technology Association is honoured to have been selected to host INTERSPEECH 2026. Our theme of Diversity & Equity - Speaking Together strongly reflects Sydney and our broader region. Sydney is Oceania's largest city and is also its most linguistically diverse: more than 300 different languages are spoken and 40% of Sydneysiders speak a language other than English at home. Consistent with the goals of ISCA 'to promote, in an international world-wide context, activities and exchanges in all fields related to speech communication science and technology', INTERSPEECH Sydney will highlight the diversity of research in our field with a firm focus on equity and inclusivity. Recognizing the importance of multi-dimensional approaches to speech, INTERSPEECH 2026 will foster greater interdisciplinarity to better inform current and future work on speech science and technology. We look forward to welcoming all to Sydney!



3-1-8(2027) Interspeech 2027, São Paulo, Brazil

The ISCA Board has decided to award the organisation of Interspeech 2027 to São Paulo, Brazil. We are very excited to introduce researchers from all over the world to the South American continent for the first time.


3-1-9ISCA INTERNATIONAL VIRTUAL SEMINARS

 

Now is the time of year when seminar programmes get fixed up. Please direct the attention of whoever organises your seminars to the ISCA INTERNATIONAL VIRTUAL SEMINARS scheme (introduction below). There is now a good choice of speakers; see

 

https://www.isca-speech.org/iscaweb/index.php/distinguished-lecturers/online-seminars

ISCA INTERNATIONAL VIRTUAL SEMINARS

A seminar programme is an important part of the life of a research lab, especially for its research students, but it's difficult for scientists to travel to give talks at the moment. However, presentations may be given online and, paradoxically, it is thus possible for labs to engage international speakers whom they wouldn't normally be able to afford.

ISCA has set up a pool of speakers prepared to give on-line talks. In this way we can enhance the experience of students working in our field, often in difficult conditions. To find details of the speakers,

  • visit isca-speech.org
  • Click Distinguished Lecturers in the left panel
  • Online Seminars then appears beneath Distinguished Lecturers: click that.

Speakers may pre-record their talks if they wish, but they don't have to. It is up to the host lab to contact speakers and make the arrangements. Talks can be state-of-the-art presentations or tutorials.

If you make use of this scheme and arrange a seminar, please send brief details (lab, speaker, date) to education@isca-speech.org

If you wish to join the scheme as a speaker, all we need is a title, a short abstract, a one-paragraph biography and contact details. Please send them to education@isca-speech.org


PS. The online seminar scheme is now up and running, with 7 speakers so far:

 

Jean-Luc Schwartz, Roger Moore, Martin Cooke, Sakriani Sakti, Thomas Hueber, John Hansen and Karen Livescu.




3-1-10Speech Prosody courses

Dear Speech Prosody SIG Members,

We would like to draw your attention to three upcoming short courses from the Luso-Brazilian Association of Speech Sciences:

- Prosody & Rhythm: applications to teaching rhythm,
  Donna Erickson (Haskins), March 16, 19, 23 and 26

- Prosody, variation and contact,
  Barbara Gili Fivela (University of Salento, Italy), April 19, 21, 23, 26 and 28

- Rhythmic analysis of languages: main challenges,
  Marisa Cruz (University of Lisbon), June 2, 3, 4, 7, 8 and 10

For details:
  http://www.letras.ufmg.br/padrao_cms/index.php?web=lbass&lang=2&page=3670&menu=&tipo=1
 
 
 
Plinio Barbosa and Nigel Ward


3-2 ISCA Supported Events
3-2-1(2024-07-06) Summer School on Automatic Speech Recognition (ASR), DA-IICT Gandhinagar, India
We are happy to inform you that we are organizing an ISCA-supported Summer School on Automatic Speech Recognition (ASR) during July 06-10, 2024. This event provides a forum for students, researchers, and industry professionals to enhance their background and get exposed to evolving focused research areas in the field of ASR. The event is sponsored by ISCA, Google, DA-IICT, IndSCA, and BHASHINI.

ASR is a highly multidisciplinary field: it deals with recognizing the linguistic content of speech, or converting speech into text with the help of machines, a key component of commercially successful voice assistants such as Apple Siri, Microsoft Cortana, Google Assistant, Amazon Alexa, Samsung Bixby, IBM's Watson, etc. The design of an ASR system depends upon various factors, such as near-field vs. far-field speech, recording and transmission channel conditions, the acoustic model, the language model, signal degradation conditions (acoustic noise), etc. Understanding these technological challenges is the major goal of S4P 2024. The event will have four experts from abroad and six experts from India presenting recent developments in their respective research topics related to the theme of the Summer School. The invited experts are Hynek Hermansky (Johns Hopkins University, USA), Bhuvana Ramabhadran (Google, USA), Mathew Magimal Doss (IDIAP, Switzerland), Chng Eng Siong (NTU Singapore), B. Yegnanarayana (Retd. IIT Madras), C. V. Jawahar (IIIT Hyderabad), Sriram Ganapathy (IISc, Bengaluru), Preethi Jyothi (IIT Bombay), and Aparna Walanj (KDAH-MRI, Mumbai).

In addition, the Summer School will have a special session of Industry Perspective Talks, with the following speakers: Tara N. Sainath (Google USA), Sunayana Sitaram (Microsoft Research, Bengaluru), Harish Arsikere (Amazon, Bengaluru), Vikram C. M. (Samsung Research Institute, Bengaluru), Hardik B. Sailor (I2R, Singapore), K. Sunilkumar (TCS Innovation Labs, Mumbai), Nirmesh J. Shah (Sony Research, India), Amitabh Nag (BHASHINI, MeitY, New Delhi), and Dipesh K. Singh (Augnito, Mumbai). The program committee of S4P 2024 includes internationally well-known experts from 18 countries across the world. S4P 2024 also includes the 5th edition of the 5-Minute Ph.D. Thesis (5MPT) contest, which gives doctoral scholars an opportunity to showcase their research work before eminent researchers from both academia and industry. The four best presentations during 5MPT will be awarded Google-endorsed cash prizes. Further, we are also providing Google Travel Grants and IndSCA Travel Grants to 50 and 25 student participants, respectively. We are enclosing a poster that describes the outline of the event and the call for participation.
 
I am writing this letter in the hope that you will participate in this event, and I am sure your participation will make it enriching. I would also request you to encourage your postdoctoral fellows, PhD scholars, M.Tech./B.Tech. students, research associates, and faculty colleagues to submit their applications for participation. We would appreciate it very much if you could arrange to place the poster on the notice board of your Department / Institution / University / R&D Laboratory.

I look forward to hearing from you.

With best regards,

Prof. (Dr.) Hemant A. Patil, Professor and Placement Convenor, DA-IICT, Gandhinagar, India.
On behalf of the Organizing Committee, Summer School on Automatic Speech Recognition, July 06-10, 2024.
Associate Editor, IEEE Signal Processing Magazine 2021-2023.
ISCA Distinguished Lecturer 2020-2022 and APSIPA Distinguished Lecturer 2018-2019
Speech Research Lab @ DA-IICT Gandhinagar https://sites.google.com/site/speechlabdaiict/

3-2-2(2024-09-09) Twenty-seventh International Conference on TEXT, SPEECH and DIALOGUE (TSD 2024), Brno, Czech Republic
 **********************************************************
                 TSD 2024 - SECOND CALL FOR PAPERS
 **********************************************************

Twenty-seventh International Conference on TEXT, SPEECH and DIALOGUE (TSD 2024)
              Brno, Czech Republic, 9-13 September 2024
                    http://www.tsdconference.org/

The conference is organised by the Faculty of Informatics of Masaryk University in Brno and the Faculty of Applied Sciences of the University of West Bohemia in Pilsen. The conference is supported by the International Speech Communication Association.

Venue: Brno, Czech Republic


SUBMISSION DEADLINES:

    10 April 2024 ............ Submission of abstracts
    17 April 2024 ............ Submission of full papers

The abstract submission serves only for better organisation of the review process - for the actual review, a full paper submission is required. It is still possible to submit both by the full paper submission deadline.


KEYNOTE SPEAKERS

    Hynek Hermansky, Johns Hopkins University, USA
    Preslav Nakov, Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi


TSD SERIES

The TSD series has become a prime forum for interaction between researchers in spoken and written language processing from all over the world. The TSD proceedings form a book published by Springer-Verlag in their Lecture Notes in Artificial Intelligence (LNAI) series. The TSD proceedings are regularly indexed by the Thomson Reuters Conference Proceedings Citation Index/Web of Science. Moreover, the LNAI series is listed in all major citation databases such as DBLP, SCOPUS, EI, INSPEC or COMPENDEX.


CALL FOR SATELLITE WORKSHOP PROPOSALS
https://www.tsdconference.org/tsd2024/conf_workshop_proposals.html

The TSD 2024 conference will be accompanied by one-day satellite workshops or project meetings, with organisational support from the TSD organising committee. The organising committee can arrange a meeting room at the conference venue and have the workshop proceedings prepared as a book with an ISBN by a local publisher. Workshop papers that also pass the standard TSD review process will appear in the Springer proceedings. Each workshop is subject to a proposal, which should be sent via the proposal submission form or discussed via the contact e-mail tsd2024@tsdconference.org before the respective deadline.


TOPICS

Conference topics include (but are not limited to):

    Corpora and Language Resources (monolingual and multilingual corpora,
    text and spoken corpora, large web corpora, large language models,
    homonymy, specialised lexicons, dictionaries)

    Speech Recognition (multilingual, continuous, and emotional speech,
    handicapped speakers, out-of-vocabulary words, alternative feature
    extraction methods, new models for acoustic and language modelling)

    Tagging, Classification and Parsing of Text and Speech
    (morphological and syntactic analysis, synthesis and disambiguation,
    multilingual processing, sentiment analysis, credibility analysis,
    automatic text labelling, summarisation, authorship attribution)

    Speech and Spoken Language Generation (multilingual and
    high-fidelity speech synthesis, computer singing)

    Semantic Processing of Text and Speech (information extraction,
    information retrieval, data mining, semantic web, knowledge
    representation, inference, ontologies, word sense disambiguation,
    plagiarism detection, fake news detection)

    Integrating Applications of Text and Speech Processing (machine
    translation, natural language understanding, question-answering
    strategies, assistive technologies)

    Automatic Dialogue Systems (self-learning, multilingual,
    question-answering systems, dialogue strategies, prosody in
    dialogues)

    Multimodal Techniques and Modelling (video processing, facial
    animation, visual speech synthesis, user modelling, emotion and
    personality modelling)

Papers on processing languages other than English are strongly encouraged.


PROGRAM COMMITTEE

    Elmar Noeth, Germany (General Chair)
    Rodrigo Agerri, Spain
    Eneko Agirre, Spain
    Vladimir Benko, Slovakia
    Archna Bhatia, USA
    Jan Cernocky, Czech Republic
    Simon Dobrisek, Slovenia
    Kamil Ekstein, Czech Republic
    Karina Evgrafova, Russia
    Yevhen Fedorov, Ukraine
    Volker Fischer, Germany
    Darja Fiser, Slovenia
    Lucie Flek, Germany
    Bjorn Gamback, Norway
    Radovan Garabik, Slovakia
    Alexander Gelbukh, Mexico
    Louise Guthrie, USA
    Jan Hajic, Czech Republic
    Eva Hajicova, Czech Republic
    Yannis Haralambous, France
    Hynek Hermansky, USA
    Ales Horak, Czech Republic
    Eduard Hovy, USA
    Milos Jakubicek, Czech Republic
    Maria Khokhlova, Russia
    Aidar Khusainov, Russia
    Daniil Kocharov, Russia
    Miloslav Konopik, Czech Republic
    Valia Kordoni, Germany
    Evgeny Kotelnikov, Russia
    Pavel Kral, Czech Republic
    Siegfried Kunzmann, USA
    Nikola Ljubesic, Croatia
    Natalija Loukachevitch, Russia
    Bernardo Magnini, Italy
    Vaclav Matousek, Czech Republic
    Roman Moucek, Czech Republic
    Agnieszka Mykowiecka, Poland
    Hermann Ney, Germany
    Joakim Nivre, Sweden
    Juan Rafael Orozco-Arroyave, Colombia
    Maciej Piasecki, Poland
    Josef Psutka, Czech Republic
    James Pustejovsky, USA
    German Rigau, Spain
    Paolo Rosso, Spain
    Leon Rothkrantz, Netherlands
    Anna Rumshisky, USA
    Milan Rusko, Slovakia
    Pavel Rychly, Czech Republic
    Mykola Sazhok, Ukraine
    Pavel Skrelin, Russia
    Pavel Smrz, Czech Republic
    Petr Sojka, Czech Republic
    Georg Stemmer, Germany
    Marko Robnik Sikonja, Slovenia
    Marko Tadic, Croatia
    Jan Trmal, Czech Republic
    Tamas Varadi, Hungary
    Zygmunt Vetulani, Poland
    Aleksander Wawer, Poland
    Pascal Wiggers, Netherlands
    Alina Wroblewska, Poland
    Jerneja Zganec Gros, Slovenia


CONFERENCE FORMAT

The conference programme will include presentations of invited papers, oral presentations, and poster/demonstration sessions. Papers will be presented in plenary or topic-oriented sessions.

Social events, including a trip to the Brno surroundings, will allow for additional informal interactions.


PAPER SUBMISSION

Authors are invited to submit a full paper not exceeding 12 pages formatted in the LNCS style (including references). Accepted papers will be presented either orally or as posters. The decision on the presentation format will be based on the recommendation of the reviewers. Authors are asked to submit their papers using the online submission form accessible from the conference website.

Papers submitted to TSD 2024 must not be under review by any other conference or publication during the TSD review cycle, and must not have been previously published or accepted for publication elsewhere.

As the review is blind, the paper should not include the authors' names and affiliations. Furthermore, self-references that reveal the author's identity, e.g. 'We have previously shown (Smith, 1991)...', should be avoided. Instead, use citations such as 'Smith previously showed (Smith, 1991)...'. Papers that do not conform to these requirements are subject to rejection without review.

The paper submitted for review must be a PDF file with all required fonts included. Upon notification of acceptance, presenters will receive further information on submitting their camera-ready papers and electronic sources (for detailed instructions on the final paper format see https://www.tsdconference.org/tsd2024/paper_instr.html).

Authors are also invited to present actual projects, developed software, or interesting material relevant to the topics of the conference. Demonstration presenters should provide an abstract not exceeding one page. The demonstration abstracts will not appear in the conference proceedings.


IMPORTANT DATES

10 April 2024 ............ Submission of abstracts
17 April 2024 ............ Submission of full papers
5 June 2024 .............. Notification of acceptance
15 June 2024 ............. Final (camera-ready) papers and registration
8 August 2024 ............ Submission of demonstration abstracts
15 August 2024 ........... Notification of acceptance for
                           demonstrations sent to the authors
9-13 September 2024 ...... Conference dates

The abstract submission serves only for better organisation of the review process - for the actual review, a full paper submission is necessary.

Papers accepted for the conference will be published in the Springer proceedings, which will be made available to participants at the time of the conference.


OFFICIAL LANGUAGE

The official language of the conference is English.


ACCOMMODATION

The organising committee will arrange discounted accommodation at the 4-star hotel at the conference venue. Current accommodation prices will be available on the conference website.


ADDRESS

All correspondence regarding the conference should be sent to:

    Ales Horak, TSD 2024
    Faculty of Informatics, Masaryk University
    Botanicka 68a, 602 00 Brno, Czech Republic
    phone: +420-5-49 49 18 63
    fax: +420-5-49 49 18 20
    email: tsd2024@tsdconference.org

The official TSD 2024 homepage is: http://www.tsdconference.org/tsd2024


VENUE

Brno is the second largest city in the Czech Republic, with a population of almost 400,000, and is the country's judicial and trade fair centre. Brno is the capital of South Moravia, located in the south-east of the Czech Republic and known for its wide range of cultural, natural and technical sights. South Moravia is a traditional wine-growing region. Brno has been a royal city since 1347 and, with its six universities, it forms a cultural centre of the region.

Brno can be reached easily by direct flights from London and Milan, and by trains or buses from Vienna (150 km) or Prague (230 km).

For participants with some extra time, nearby places may also be of interest. Local sights include: Brno Castle, now called Spilberk, Veveri Castle, the Old and New Town Halls, the Augustinian Monastery with the Church of St. Thomas and the crypt of the Moravian Margraves, the Church of St. James, the Cathedral of St. Peter and Paul, the Carthusian Monastery in Kralovo Pole, and the famous Villa Tugendhat (UNESCO) designed by Mies van der Rohe, as well as other important buildings of interwar Czech architecture.

For those willing to venture out of Brno, the Moravian Karst with the Macocha Abyss and the Punkva Caves, the battlefield of the Battle of the Three Emperors (Napoleon, the Russian Tsar Alexander and the Austrian Emperor Francis - the Battle of Austerlitz), Slavkov (Austerlitz) Castle, Pernstejn Castle, Buchlov Castle, Lednice Castle, Buchlovice Castle, Letovice Castle, Mikulov with one of the largest Jewish cemeteries in Central Europe, Telc - a town on the UNESCO heritage list - and many more are all within easy reach.
 

3-2-3(2024-09-18) 25th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL), Kyoto, Japan
The 25th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL) will be held in Kyoto, Japan on September 18-20, 2024. SIGDIAL will be co-located with INLG which will take place after SIGDIAL in Tokyo, Japan.

The SIGDIAL venue provides a regular forum for the presentation of cutting-edge research in dialogue and discourse to both academic and industry researchers, continuing a series of 24 successful previous meetings. The conference is sponsored by the SIGDIAL organization - the Special Interest Group in discourse and dialogue for ACL and ISCA.

* Topics of Interest *

We welcome formal, corpus-based, implementation, experimental, or analytical work on discourse and dialogue including, but not restricted to, the following themes:

  -   Discourse Processing: Rhetorical and coherence relations, discourse parsing and discourse connectives. Reference resolution. Event representation and causality in narrative. Argument mining. Quality and style in text. Cross-lingual discourse analysis. Discourse issues in applications such as machine translation, text summarization, essay grading, question answering and information retrieval. Discourse issues in text generated by large language models.
  -   Dialogue Systems: Task oriented and open domain spoken, multimodal, embedded, situated, and text-based dialogue systems, their components, evaluation and applications, Knowledge representation and extraction for dialogue, State representation, tracking and policy learning. Social and emotional intelligence, Dialogue issues in virtual reality and human-robot interaction. Entrainment, alignment and priming. Generation for dialogue, Style, voice, and personality. Safety and ethics issues in Dialogue.
  -   Corpora, Tools and Methodology: Corpus-based and experimental work on discourse and dialogue, including supporting topics such as annotation tools and schemes, crowdsourcing, evaluation methodology and corpora.
  -   Pragmatic and Semantic Modeling: Pragmatics and semantics of conversations (i.e., beyond a single sentence), e.g., rational speech act, conversation acts, intentions, conversational implicature, presuppositions.
  -   Applications of Dialogue and Discourse Processing Technology.

* Special Session *

SIGDIAL 2024 invites work on the special session “GEMINI - Graph-based knowledge for Modelling Intelligent Natural Interaction” that focuses on knowledge and knowledge modeling for dialogue systems, in particular on the opportunities and challenges for enhancing and stabilizing dialogue capabilities of chatbots, robots, and virtual agents with the use of LLMs.

* Submissions *

The program committee welcomes the submission of long papers, short papers, and demo descriptions. Submitted long papers may be accepted for oral or for poster presentation. Accepted short papers will be presented as posters.

  -   Long paper submissions must describe substantial, original, completed and unpublished work. Wherever appropriate, concrete evaluation and analysis should be included. Long papers must be no longer than 8 pages, including title, text, figures and tables. An unlimited number of pages is allowed for references and appendices, and an extra page is allowed in the final version to address reviewers’ comments.
  -   Short paper submissions must describe original and unpublished work. Please note that a short paper is not a shortened long paper. Instead, short papers should have a point that can be made in a few pages, such as a small, focused contribution; a negative result; or an interesting application nugget. Short papers should be no longer than 4 pages including title, text, figures and tables. An unlimited number of pages is allowed for references and appendices, and an extra page is allowed in the final version to address reviewers’ comments.
  -   Demo descriptions should be no longer than 4 pages including title, text, examples, figures, tables and references. A separate one-page document should be provided to the program co-chairs for demo descriptions, specifying furniture and equipment needed for the demo.

Note that content that is an important part of the contribution or that is important for the reviewers to assess the technical correctness of the work should be a part of the main paper, and not appear in appendices. Reviewers are not required to consider material in appendices.

Authors are encouraged to also submit additional accompanying materials, such as corpora (or corpus examples), demo code, videos and sound files.

* Multiple Submissions *

SIGDIAL 2024 cannot accept work for publication or presentation that will be (or has been) published elsewhere, or that has been or will be submitted to other meetings or publications whose review periods overlap with that of SIGDIAL. Any questions regarding submissions can be sent to program-chairs [at] sigdial.org.

* Blind Review *

Building on previous years’ move to anonymous long and short paper submissions, SIGDIAL  2024 will follow the ACL policies for preserving the integrity of double-blind review (see author guidelines: https://www.aclweb.org/adminwiki/index.php?title=ACL_Author_Guidelines). Unlike long and short papers, demo descriptions will not be anonymous. Demo descriptions should include the authors’ names and affiliations, and self-references are allowed.

* Submission Format *

All long, short, and demonstration submissions must follow the two-column ACL format, which are available as an Overleaf template (https://www.overleaf.com/read/crtcwgxzjskr) and also downloadable directly (Latex and Word) (https://github.com/acl-org/acl-style-files).

Submissions must conform to the official ACL style guidelines, which are contained in these templates. Submissions must be electronic, in PDF format.

* Submission Deadline *

SIGDIAL will accept regular submissions through the Softconf/START system, as well as commitment of already reviewed papers through the ACL Rolling Review (ARR) system.

* Regular submission *

Authors have to fill in the submission form in the Softconf/START system and upload an initial pdf of their papers before May 17, 2024 (23:59 GMT-11). Details and the submission link will be posted on the conference website (https://2024.sigdial.org/).

Submission via ACL Rolling Review (ARR, https://aclrollingreview.org/)

Please refer to the ARR Call for Papers (https://aclrollingreview.org/cfp ) for detailed information about submission guidelines to ARR. The commitment deadline for authors to submit their reviewed papers, reviews, and meta-review to SIGDIAL 2024 is June 19, 2024. Note that the paper needs to be fully reviewed by ARR in order to make a commitment, thus the latest date for ARR submission will be April 15, 2024.

* Mentoring *

Acceptable submissions that require language (English) or organizational assistance will be flagged for mentoring, and accepted with a recommendation to revise with the help of a mentor. An experienced mentor who has previously published in the SIGDIAL venue will then help the authors of these flagged papers prepare their submissions for publication.

* Best Paper Awards *

In order to recognize significant advancements in dialogue/discourse science and technology, SIGDIAL 2024 will include best paper awards. All papers at the conference are eligible for the best paper awards. A selection committee consisting of prominent researchers in the fields of interest will select the recipients of the awards.


SIGDIAL 2024 Program Committee
Vera Demberg and Stefan Ultes
Conference Website: https://2024.sigdial.org/


3-2-4(2025) Call for Bids for 3rd International Conference on Tone and Intonation (TAI 2025)

Dear colleagues,

Following the success of the 2nd International Conference on Tone and Intonation (TAI 2023) <www.tai2023.org> held in Singapore 18-20 November 2023, the TAI Standing Committee is now inviting bids to host the 3rd TAI conference in 2025.

As the merger of the TAL (Tonal Aspects of Languages) and TIE (Tone and Intonation in Europe) conference series (2004–2018), the TAI series inherits features from its predecessors. The conference accepts 2-page abstracts, avoids parallel sessions, and typically hosts 3 to 5 invited keynote speakers. Following the conference, participants may choose to submit their optional 5-page full papers for publication in the ISCA online archives. With a focus on linguistic rather than technological aspects of prosody, TAI is affiliated with both the International Speech Communication Association (ISCA) and the International Phonetic Association (IPA), thereby receiving grants/awards from both organizations. TAI may also feature workshops at the same venue outside the conference program, before, after or during the conference. Given that all previous TIE conferences were held in Europe, the aim for the new TAI series is to rotate between Europe and other locations. Consequently, it is preferable for TAI 2025 to take place in Europe. While the initial two TAIs were held in November/December, the scheduling of future TAI conferences remains flexible, with spring or summer generally preferred, contingent on the preferences of local organizers.

The responsibility of the TAI Standing Committee is to promote the continuation of the series, select the venue for the next conference, and evaluate conferences and any proposals for format changes. The committee is composed of former TAL/TIE and TAI organizers. Its current members are Amalia Arvaniti, Yiya Chen, Christian DiCanio, Minghui Dong, Wentao Gu, Carlos Gussenhoven, Yanfeng Lu, Hansjörg Mixdorff, and Oliver Niebuhr. All bids will be discussed and voted on by the committee.

Bids should minimally provide the following information:

- Host institution
- General chair (and any co-organizers)
- Proposed conference dates
- Proposed conference theme
- Venue or general location of the venue
- Transportation and accommodation
- Estimated full and student registration fees
- Financial plan (including existing or potential funding sources)

Any additional information that may support the bid is welcome.

Bids should be submitted to the current chair of the TAI Standing Committee, Prof. Wentao Gu (wtgu@gavo.t.u-tokyo.ac.jp) by 30 April 2024. He is also available for additional information. Please feel free to distribute this announcement to any researchers in the speech prosody community who may be interested. Thank you very much!!


3-3 Other Events
3-3-1(2024-06-10) 3rd ACM International Workshop on Multimedia AI against Disinformation (MAD’24), Phuket, Thailand,

3rd ACM International Workshop on Multimedia AI against Disinformation (MAD'24)
ACM International Conference on Multimedia Retrieval ICMR'24
Phuket, Thailand, 10-13 June 2024
https://www.mad2024.aimultimedialab.ro/
https://easychair.org/my/conference?conf=mad2024

*** Call for Papers ***

* Paper submission: 17 March 2024
* Notification of acceptance: 7 April 2024
* Camera-ready papers: 25 April 2024
* Workshop @ ACM ICMR 2024: 10 June 2024

Modern communication no longer relies solely on classic media such as newspapers or television; it increasingly takes place on social networks, in real time and with live interactions between users. However, the growing amount of available information has also led to an increase in the quantity and quality of misleading content, disinformation and propaganda. Conversely, the fight against disinformation, in which news agencies and NGOs (among others) take part daily to avoid the risk of distorting citizens' opinions, has become even more crucial and demanding, especially for sensitive topics such as politics, health and religion. Disinformation campaigns exploit, among other things, AI-based tools for content generation and manipulation: hyper-realistic visual, speech, textual and video content has emerged under the collective name of 'deepfakes', and more recently through the use of large language models (LLMs) and large multimodal models (LMMs), undermining the perceived credibility of media content. It is therefore all the more crucial to counter these advances by designing new analysis tools able to detect synthetic and manipulated content, accessible to journalists and fact-checkers, robust and trustworthy, and possibly AI-based in order to achieve higher performance. Future multimedia research on disinformation detection relies on combining different modalities and adopting the latest advances in deep learning approaches and architectures. This raises new challenges and questions that must be addressed in order to reduce the effects of disinformation campaigns.

The workshop, in its third edition, welcomes contributions related to different aspects of AI-based disinformation detection, analysis and mitigation. Topics of interest include, but are not limited to:

- Disinformation detection in multimedia content (e.g., video, audio, texts, images)
- Multimodal verification methods
- Synthetic and manipulated media detection
- Multimedia forensics
- Dissemination and effects of disinformation in social media
- Analysis of disinformation campaigns in socially sensitive domains
- Robustness of media verification against adversarial attacks and real-world complexities
- Fairness and non-discrimination of disinformation detection in multimedia content
- Explaining disinformation and disinformation-detection technologies to non-expert users
- Temporal and cultural aspects of disinformation
- Dataset sharing and governance in AI for disinformation
- Datasets for disinformation detection and multimedia verification
- Open resources, e.g., datasets, software tools
- Large language models for analysing and mitigating disinformation campaigns
- Large multimodal models for media verification
- Multimedia verification systems and applications
- System fusion, ensembling and late fusion techniques
- Benchmarking and evaluation frameworks


*** Submission Guidelines ***

When preparing your submission, please strictly follow the ACM ICMR 2024 instructions, to ensure a sound review process and inclusion in the ACM Digital Library proceedings. The instructions are available here: https://mad2024.aimultimedialab.ro/submissions/.



*** Organising Committee ***

Cristian Stanciu, Politehnica University of Bucharest, Romania
Luca Cuccovillo, Fraunhofer IDMT, Germany
Bogdan Ionescu, Politehnica University of Bucharest, Romania
Giorgos Kordopatis-Zilos, Czech Technical University in Prague, Czechia
Symeon Papadopoulos, Centre for Research and Technology Hellas, Thessaloniki, Greece
Adrian Popescu, CEA LIST, Saclay, France
Roberto Caldelli, CNIT and Mercatorum University, Italy

The workshop is supported by the H2020 project AI4Media - A European Excellence Centre for Media, Society and Democracy (https://www.ai4media.eu/), the Horizon Europe project vera.ai - VERification Assisted by Artificial Intelligence (https://www.veraai.eu/), and the Horizon Europe project AI4Debunk - Participative AI-based assistive tools for supporting trustworthy online activity of citizens and debunking disinformation (https://ai4debunk.eu/).


On behalf of the organisers,

Cristian Stanciu
https://www.aimultimedialab.ro/


3-3-2(2024-06-10) ACM International Conference on Multimedia Retrieval, Dusit Thani Laguna Phuket, Phuket Island, Thailand,
Effectively and efficiently retrieving information based on user needs
is one of the most exciting areas in multimedia research. The Annual
ACM International Conference on Multimedia Retrieval (ICMR) offers a
great opportunity for exchanging leading-edge multimedia retrieval
ideas among researchers, practitioners and other potential users of
multimedia retrieval systems. ACM ICMR 2024 will take place in Phuket,
Thailand from the 10-13th June 2024. The conference venue is the Dusit
Thani Laguna Phuket, in Phuket Island.

ACM ICMR 2024 is calling for high-quality original papers addressing
innovative research in multimedia retrieval and its related broad
fields. The main scope of the conference is not only the search and
retrieval of multimedia data but also analysis and understanding of
multimedia contents, including community-contributed social data,
lifelogging data and automatically generated sensor data, integration
of diverse multimodal data, deep learning-based methodology and
practical multimedia applications.


Topics of Interest

-Multimedia content-based search and retrieval,
-Multimedia-content-based (or hybrid) recommender systems,
-Large-scale and Web-scale multimedia retrieval,
-Multimedia content extraction, analysis, and indexing,
-Multimedia analytics and knowledge discovery,
-Multimedia machine learning, deep learning, and neural networks,
-Relevance feedback, active learning, and transfer learning,
-Fine-grained retrieval for multimedia,
-Event-based indexing and multimedia understanding,
-Semantic descriptors and novel high- or mid-level features,
-Crowdsourcing, community contributions, and social multimedia,
-Multimedia retrieval leveraging quality, production cues, style, framing, and affect,
-Synthetic media generation and detection,
-Narrative generation and narrative analysis,
-User intent and human perception in multimedia retrieval,
-Query processing and relevance feedback,
-Multimedia browsing, summarization, and visualization,
-Multimedia beyond video, including 3D data and sensor data,
-Mobile multimedia browsing and search,
-Multimedia analysis/search acceleration, e.g., GPU, FPGA,
-Benchmarks and evaluation methodologies for multimedia analysis/search,
-Privacy-aware multimedia retrieval methods and systems,
-Fairness and explainability in multimedia analysis/search,
-Legal, ethical, and societal impact of multimedia retrieval research,
-Applications of multimedia retrieval, e.g., news/journalism, media, medicine, sports, commerce, lifelogs, travel, security, and environment.


Important Dates

Regular Paper submission: 01.02.2024
Demo Paper submission: 17.02.2024
Notification of Acceptance: 31.03.2024
Camera-Ready Due: 25.04.2024
Conference: 10 - 13.06.2024


3-3-3(2024-06-13) 'Mind Your Language' seminar in the NeuroCampus Amphitheater of the Lyon Neuroscience Research Centre, Bron (Lyon), France
The next 'Mind Your Language' seminar will be held on Thursday June 13 at 4 pm in the NeuroCampus Amphitheater of the Lyon Neuroscience Research Centre (bât. 462, 95 bd Pinel, 69500 Bron).
 
Franck Ramus (senior CNRS researcher at the Laboratoire de Sciences Cognitives et Psycholinguistique, Department of Cognitive Studies, Ecole Normale Supérieure, Paris) will be presenting on 'Genetics of language'.

Abstract:
It has long been hypothesised that the human faculty to acquire a language is in some way encoded in our DNA. However, only recently has genetic evidence been available to begin to substantiate the presumed genetic basis of language. We will review data from statistical and molecular genetic studies showing associations between gene variants and language disorders (in particular developmental dyslexia), and we will further reflect on how the human genome builds a brain that can learn a language.

Here is a link to join us remotely: https://univ-montp3-fr.zoom.us/j/95371655397?pwd=C0hjVtrbDcTdhmbTuHd1Q1cWS9njHe.1
Meeting ID: 953 7165 5397
Passcode: 974510


3-3-4(2024-06-17) 'Madrid UPM Machine Learning and Advanced Statistics' summer school @ Boadilla del Monte (Madrid), Spain

The Technical University of Madrid (UPM) will once more organize the 'Madrid UPM Machine Learning and Advanced Statistics' summer school. The summer school will be held in Boadilla del Monte, near Madrid, from June 17th to June 28th, 2024. This year's edition comprises 12 week-long courses (15 lecture hours each), given during two weeks (six courses each week). Attendees may register in each course independently. No restrictions, besides those imposed by timetables, apply on the number or choice of courses.

Early registration is now *OPEN* and runs until June 2nd (included). Extended information on course programmes, price, venue, accommodation and transport is available at the school's website:

http://www.dia.fi.upm.es/MLAS

There is a 25% discount for members of Spanish AEPIA and SEIO societies. 

Please, forward this information to your colleagues, students, and whomever you think may find it interesting.

Best regards,

Pedro Larrañaga, Concha Bielza, Bojan Mihaljević and Laura Gonzalez Veiga.
-- School coordinators.

*** List of courses and brief description ***

* Week 1 (June 17th - June 23rd, 2024) *

1st session: 9:45-12:45
Course 1: Bayesian Networks (15 h)
      Basics of Bayesian networks. Inference in Bayesian networks. Learning Bayesian networks from data. Real applications. Practical demonstration: R.

Course 2: Time Series(15 h)
      Basic concepts in time series. Linear models for time series. Time series clustering. Practical demonstration: R.
     
2nd session: 13:45-16:45
Course 3: Supervised Classification (15 h)
      Introduction. Assessing the performance of supervised classification algorithms. Preprocessing. Classification techniques. Combining multiple classifiers. Comparing supervised classification algorithms. Practical demonstration: python.

Course 4: Statistical Inference (15 h)
      Introduction. Some basic statistical tests. Multiple testing. Introduction to bootstrap methods. Introduction to Robust Statistics. Practical demonstration: R. 

3rd session: 17:00 - 20:00
Course 5: Deep Learning (15 h)
      Introduction. Learning algorithms. Learning in deep networks. Deep Learning for Computer Vision. Deep Learning for Language. Practical session: Python notebooks with Google Colab with keras, Pytorch and Hugging Face Transformers.

Course 6: Bayesian Inference (15 h)
      Introduction: Bayesian basics. Conjugate models. MCMC and other simulation methods. Regression and Hierarchical models. Model selection. Practical demonstration: R and WinBugs.
     

* Week 2 (June 26th - June 28th, 2024) *

1st session: 9:45-12:45

Course 7: Feature Subset Selection (15 h)
      Introduction. Filter approaches. Embedded methods. Wrapper methods. Additional topics. Practical session: R and python.

Course 8: Clustering (15 h)
      Introduction to clustering. Data exploration and preparation. Prototype-based clustering. Density-based clustering. Graph-based clustering. Cluster evaluation. Miscellanea. Conclusions and final advice. Practical session: R.

2nd session: 13:45-16:45
Course 9: Gaussian Processes and Bayesian Optimization (15 h)
      Introduction to Gaussian processes. Sparse Gaussian processes. Deep Gaussian processes. Introduction to Bayesian optimization. Bayesian optimization in complex scenarios. Practical demonstration: python using GPytorch and BOTorch.
     
Course 10: Explainable Machine Learning (15 h)
      Introduction. Inherently interpretable models. Post-hoc interpretation of black box models. Basics of causal inference. Beyond tabular and i.i.d. data. Other topics. Practical demonstration: Python with Google Colab.
         
3rd session: 17:00-20:00
Course 11:  SVMs, Kernel Methods and Regularized Learning (15 h)
      Regularized learning. Kernel methods. SVM models. SVM learning algorithms. Practical session: Python Anaconda with scikit-learn.
     
Course 12: Hidden Markov Models (15 h)
      Introduction. Discrete Hidden Markov Models. Basic algorithms for Hidden Markov Models. Semicontinuous Hidden Markov Models. Continuous Hidden Markov Models. Unit selection and clustering. Speaker and Environment Adaptation for HMMs. Other applications of HMMs. Practical session: HTK.


3-3-5(2024-06-20) International conference 'Nouvelles Perspectives d'analyse musicale de la voix', Université Lumière Lyon 2, France

                                            International Conference

           'Nouvelles Perspectives d'analyse musicale de la voix'
             (New Perspectives on Musical Analysis of the Voice)

              Université Lumière Lyon 2, Lyon, 20-21 June 2024

                                    CALL FOR PAPERS

 

Suggested topics (non-exhaustive list):

• Structural analysis of the singing voice or of musicalised speech.

• Harmonic and melodic analysis techniques applied to the voice.

• Methods and techniques for voice analysis.

• New technological and computational perspectives for voice analysis.

• Stylistic or rhetorical approaches to voice analysis.

• Acoustic, physiological and interdisciplinary exploration of specific vocal techniques, interpretative effects, or varied ways of using the voice.

• Study of rhythm, vocal timbre, phrasing, etc.

 

 

Submission procedure: Proposals should be submitted before 1 FEBRUARY 2024. Each proposal must include an abstract (2,500 characters maximum, in French or English) and a short bio-bibliographical note, and should be sent jointly to Antoine Petit (antoine.petit@univ-lyon2.fr) and Céline Chabot-Canet (celine.chabot-canet@univ-lyon2.fr). Notifications will be sent by 8 February 2024 at the latest. The conference proceedings will be published. Scientific committee: Céline Chabot-Canet, Muriel Joubert, Antoine Petit, Axel Roebel, Catherine Rudent. Organising committee: Antoine Petit (PhD student), Céline Chabot-Canet (Associate Professor), Passages Arts & Littératures (XX-XXI), Université Lumière Lyon 2. Within the framework of the ANR project 'Analyse et tRansformation du Style de chant' (ANR-19-CE38-0001-03).

Back  Top

3-3-6(2024-07-01) CfAbstracts Workshop 'Prosodic features of language learners' fluency', Leiden, The Netherlands

Call for Abstracts for the workshop 'Prosodic features of language learners' fluency'

https://l2fluency.lst.uni-saarland.de/

 

This workshop is a satellite event of 'Speech Prosody' to be held in Leiden (The Netherlands) on 1st of July, 2024. Its aim is to bring together colleagues from two research communities to focus on speech fluency: spoken second/foreign language (L2) on the one hand and speech prosody on the other.

 

In the past, fluency was often ignored in speech prosody research (as reflected in the Handbook of Language Prosody (2022) and in the Speech Prosody conferences). Moreover, fluency and timing are only rarely treated together with intonation-related aspects in L2 research. However, a broader view of L2 sentence prosody would benefit both the construction of theories of L2 prosody acquisition and applications such as assessment in teaching, exercises for individual learning, and automatic testing of spoken performances. Likewise, research on language learning does not seem to be well integrated into speech prosody research. This concerns theoretical and methodological aspects as well as the acquisition and annotation of learner data, e.g. in learner corpora.

 

Thus, the scope of the workshop includes topics such as: measuring fluency; assessment of fluency (by human experts, non-experts, and machines); learner corpora and annotation of disfluencies; elements and combinations of disfluencies (e.g. filler particles, disfluent pauses, lengthenings, repetitions, repairs); varying degrees of fluency in different speech styles and tasks; fluency and L2 proficiency levels; intonational aspects of fluency; visual aspects of fluency (e.g. hand-arm gestures, eye gaze, torso movement); and teaching methods for improving fluency in L2 speech production and perception.

 

Keynote speakers are Lieke van Maastricht (Radboud University Nijmegen) and Malte Belz (Humboldt University Berlin).

 

Interested colleagues are invited to submit a two-page abstract (first page for text, second page for illustrations, tables, and references) to be reviewed by an expert committee. Only oral presentations are planned. In addition to this workshop, we are discussing the possibility of editing a special (open) issue in a recognised journal (e.g. 'Journal of Second Language Pronunciation' or 'Studies in Second Language Acquisition') to which we would encourage presenters of workshop papers to contribute.

 

Important dates: abstract submission deadline: 8 April, notification of acceptance: 1 May, workshop day: 1 July 2024.

 

Organisers: Jürgen Trouvain, Bernd Möbius (both Saarland University) and Nivja de Jong (Leiden University)

 

Back  Top

3-3-7(2024-07-06) Speech Prosody Workshop -CROSSIN: Intonation at the Crossroads, Leiden, The Netherlands

Speech Prosody Workshop Announcement

 

CROSSIN: Intonation at the Crossroads

Speech Prosody Satellite Workshop, Leiden, Saturday 6 July 2024

 

WORKSHOP ANNOUNCEMENT AND CALL FOR POSTER PRESENTATIONS

Intonation is studied by different disciplines in which the research focus varies. One element these approaches have in common is that they must all address intonation meaning. This applies whether researchers are mostly interested in the phonological representation of intonation, its interaction with syntax, semantics, and pragmatics, or its role in communication and speech processing. These perspectives complement each other, yet it is often the case that research focusing on one does not give full consideration to the others: for instance, syntactic approaches to the role of intonation in expressing focus may overlook differences in phonological form in focus expression, while pragmatic approaches may assume that each meaning nuance is directly expressed by a different tune; conversely, studies on intonation phonetics and phonology do not always fully consider meaning. 

 

The aim of this workshop is to reach a more comprehensive view, by bringing together researchers working on intonation from different perspectives so they can enter into dialogue with and learn from one another. The main questions of the workshop are:

 

  1. What is the relationship between syntax, semantics, pragmatics, and intonation? Can we expect a one-to-one correspondence between intonation categories or tunes, on the one hand, and focus or other semantic or pragmatic functions, on the other?
  2. How can we best understand and model intonation meaning and intonation’s role in conversation and processing?

 

We invite abstracts addressing the questions above. The selected abstracts will be presented in a poster session. If there is sufficient interest, poster presentations will be published as a special issue or collection.

 

Keynote speakers: The workshop also includes invited talks by Stavros Skopeteas (Göttingen), Anja Arnhold (Alberta), and commentaries by James German (Aix-Marseille) and Claire Beyssade (Paris 8). The workshop will end with a general round-table discussion. For more information on the workshop, visit https://www.sprintproject.io/crossinworkshop  or http://tinyurl.com/y7zj8h5f .

 

Important dates: abstract submission deadline: 31 March; notification of acceptance: 30 April; workshop day: 6 July 2024

 

Abstract Guidelines

Abstracts should be written in English and should present original research not already submitted to Speech Prosody. The text should not exceed one A4 page, though an additional page for references, examples, and figures may also be added. The following formatting conventions apply: Times New Roman font, size 12, 2.54 cm (1 inch) margins, single spacing. Submissions should be sent as anonymized pdf files to sprintonation@gmail.com by 31 March 2024 at 24:00 AoE. Please provide author details in your email.

 

Organizers: Amalia Arvaniti, Stella Gryllia, Jiseung Kim, Riccardo Orrico, Alanna Tibbs (Radboud University)

 

Back  Top

3-3-8(2024-07-08) 35th Journées d’Études sur la Parole (JEP), Toulouse, France

JEP-TALN 2024 Conference

8-12 July 2024

Toulouse, France

======================

 

The SAMoVA, MELODI and IRIS research teams of the Institut de Recherche en Informatique de Toulouse (IRIT, UMR 5505), the PLC team of the Cognition, Langues, Langage, Ergonomie laboratory (CLLE, UMR 5263) and the clinical language neurocognition, linguistics and phonetics axis of the NeuroPsychoLinguistique laboratory (LNPL, URI EA 4156) are jointly organising in Toulouse the 35th Journées d’Études sur la Parole (JEP), the 31st Conférence sur le Traitement Automatique des Langues Naturelles (TALN) and the 26th Rencontre des Étudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (RECITAL).

 

https://jep-taln2024.sciencesconf.org/

 

----------------------------------

 

Important dates (JEP-TALN-RECITAL):

-   Paper submission: *** February 2024 (final date) ***

-   Notification to authors: 25 April 2024

-   Conference dates: 8-12 July 2024

-   Workshop proposals: *** 22 February 2024 (final date) ***

 

The conference topics fall into the following categories, without being limited to them.

 

TALN-RECITAL

-   Phonetics, phonology, morphology, part-of-speech tagging

-   Syntax, grammars, parsing, chunking

-   Semantics, pragmatics, discourse

-   Lexical and distributional semantics

-   Linguistic and psycholinguistic aspects of NLP

-   Resources for NLP

-   Evaluation methods for NLP

-   NLP applications (information retrieval and extraction, question answering, translation, generation, summarisation, dialogue, opinion analysis, simplification, etc.)

-   NLP and multimodality (speech, vision, etc.)

-   NLP and multilingualism

-   NLP for the Web and social networks

-   NLP and under-resourced languages

-   NLP and sign language

-   Social and ethical implications of NLP

-   NLP and corpus linguistics

-   NLP and digital humanities

 

JEP

-   Speech acoustics

-   Speech and language acquisition

-   Speech analysis, coding and compression

-   Applications with spoken components (dialogue, indexing, etc.)

-   Second language learning

-   Multimodal communication

-   Dialectology

-   Evaluation, corpora and resources

-   Endangered languages

-   Language models

-   Audio-visual speech

-   Speech pathologies

-   Phonetics and phonology

-   Clinical phonetics

-   Speech production / perception

-   Prosody

-   Psycholinguistics

-   Speech recognition and understanding

-   Language recognition

-   Speaker recognition

-   Social signals, sociophonetics

-   Speech synthesis

 

The number of pages for JEP/TALN/RECITAL submissions is flexible, but should be between 6 and 10 pages (see the detailed call; excluding references/appendices). The principle is that the length of a submission should be consistent with its content. Reviewers will judge a paper on its quality and on this adequacy.

Style sheets and the detailed calls are available on the conference website: https://jep-taln2024.sciencesconf.org/

 

Submission link: https://easychair.org/conferences/?conf=jeptaln2024

Back  Top

3-3-9(2024-07-08) Call for workshops at JEP-TALN 2024, Toulouse, France

Call for workshops at JEP-TALN 2024

JEP-TALN 2024 Conference

8-12 July 2024

As part of the joint JEP-TALN 2024 conferences, we are soliciting workshop proposals. Workshops should address a specific topic in natural language or speech processing, so as to bring together talks that are more focused than those of the plenary sessions.

 

Each workshop has its own chair and its own programme committee. The workshop chair is responsible for publicising the workshop, for its call for submissions, and for coordinating its programme committee.

 

The JEP-TALN 2024 organisers will take care of the logistics (e.g. room management, coffee breaks and distribution of the papers).

 

The workshops will run in parallel over one day or half a day (2 to 4 sessions of 1h30) on Monday 8 July 2024 on the campus of the Université Jean Jaurès in Toulouse.

 

Important dates

-   Workshop proposal submission deadline: 15 February 2024

-   Programme committee response: 29 February 2024

 

Proposal guidelines

Workshop proposals (1 to 2 A4 pages in PDF format) should include:

-   the name and acronym of the workshop

-   a concise description of the workshop topic

-   the organising committee

-   the provisional or intended scientific committee

-   the website address

-   the desired duration of the workshop (full day or half day) and the expected audience

 

Workshop proposals should be sent electronically to jose.moreno@irit.fr and julie.mauclair@irit.fr with the email subject line: [Atelier JEP TALN 2024].

 

Selection process

Workshop proposals will be reviewed by members of the JEP and TALN programme committees, by the AFCP and by the CPERM of ATALA. The following criteria will be considered for acceptance:

-   relevance to the topics of either conference

-   the originality of the proposal

 

Format

Talks will be given in French (or in English for non-French speakers). Submitted papers must follow the JEP-TALN 2024 format (number of pages at the discretion of the workshop programme committee). Submission of final versions must follow the schedule of the main conference.

Back  Top

3-3-10(2024-07-08) Atelier Parole Spontanée (Spontaneous Speech workshop) at JEP-TALN 2024, Toulouse, France
*Atelier Parole Spontanée (Spontaneous Speech workshop) at JEP-TALN 2024*

Spontaneous speech is a type of speech characterised mainly by being unprepared, although there is currently no consensus on how to define it. It stands out through properties that constrain both its perceptual and its automatic analysis, notably the abundant presence of disfluent elements and a variability, in articulation, prosody and linguistic structure, that is greater than in constrained speech. Automatic speech processing systems face this as a major challenge: hesitations, filled pauses, repetitions, corrections, false starts, the particular grammar and syntax of spoken language, language register, reduction phenomena and prosodic patterns are all difficulties to be overcome in order to improve the precision and reliability of automatic speech processing systems. To reflect on these issues, this workshop aims to mobilise the knowledge and experiments of the actors in this field from an interdisciplinary perspective. To this end, we propose to bring together expertise and feedback from the various application domains that involve this type of speech, such as pathological speech, learner speech (L1 or L2), speech in meetings, and applications aiming to include people with disabilities.

*Format and organisation of the workshop*

The organisers propose a programme in three main stages:
- a presentation of the state of the art on spontaneous speech studies, given by the organisers and a specialist in language sciences [planned duration: 30 minutes]
- a poster session allowing participants to present, in turn, more specific scientific contributions (subject to acceptance of an abstract) [planned duration: 1h30]
- a discussion/conclusion session [planned duration: 40 minutes]

*Calendar*
- Abstract submission by email: 6 May 2024
- Notification of acceptance: 13 May 2024
- Workshop: 8 July 2024, during the JEP-TALN 2024 conference in Toulouse

*Organising committee*
- Mathieu Balaguer (IRIT-Université Toulouse 3)
- Julie Mauclair (IRIT-Université Toulouse 3)
- Solène Evain (Laboratoire d'Informatique de Grenoble, Université Grenoble Alpes)
- Adrien Pupier (Laboratoire d'Informatique de Grenoble, Université Grenoble Alpes)
- Nicolas Audibert (Laboratoire de Phonétique et Phonologie, Université Sorbonne Nouvelle)
Back  Top

3-3-11(2024-07-16) CfP 7th Laughter and Other Non-Verbal Vocalisations Workshop - Belfast, UK

Call for Papers: 7th Laughter and Other Non-Verbal Vocalisations Workshop - July 16-17 2024

We are excited to announce the 7th Laughter and Other Non-Verbal Vocalisations Workshop (bit.ly/LaughterWorkshop2024) on July 16-17 at Queen’s University Belfast. The workshop will be a pre-conference event, part of the 2024 Conference of the International Society for Research on Emotion (www.isre2024.org).

Non-verbal vocalisations in human-human and human-machine interactions play important roles in displaying social and affective behaviours and in managing the flow of interaction. Laughter, sighs, clicks, filled pauses, and short utterances such as feedback responses are among the non-verbal vocalisations that are being increasingly studied in various research fields. However, much is still unknown about the phonetic or visual characteristics of non-verbal vocalisations (production/encoding), their relations to the social actions they are part of, their perceived meanings (perception/decoding), and their ordering in interaction. Furthermore, with the increased interest in more naturalness in human-machine interaction, current times also invite exploring how these phenomena can be integrated in speech applications.

Research themes include, but are not restricted to, these aspects of laughter and other non-verbal vocalisations:

- Articulation, acoustics, and perception

- Interaction and pragmatics

- Affective and evaluative meanings

- Social perception and organisation

- Disfluency

- Technology applications

Researchers are invited to submit extended abstracts (2 pages long, including figures and references) describing their work, including work in progress. The deadline for submission is March 15th, 2024. More information about the submission process can be found on our website (bit.ly/LaughterWorkshop2024).

There will be two keynote presentations on the topics treated by the workshop, delivered by Prof. Carolyn McGettigan (University College London, UK) and Prof. Margaret Zellers (Kiel University, Germany).

Looking forward to receiving your contributions and welcoming you to the workshop in July!

Back  Top

3-3-12(2024-07-22) 13th International Conference on Voice Physiology and Biomechanics, Erlangen, Germany

13th International Conference

on Voice Physiology and Biomechanics

Erlangen, Germany 22nd-26th of July 2024

 

 

We cordially invite you to participate in the 13th International Conference on Voice Physiology and Biomechanics, July 22nd – 26th, 2024!

After the successful hosting in 2012, we are pleased to welcome you back to Erlangen, Germany! There will be two days of workshops prior to the three days of conference and several social events in the beautiful Nuremberg Metropolitan Region.

The workshops (July 22nd-23rd) and the conference (July 24th-26th) will focus on voice physiology and biomechanics, including computational, numerical and experimental modeling, machine learning, tissue engineering, laryngeal pathologies and many more. Abstract submission and registration will be open from November 1st, 2023.

We are looking forward to your contributions and to seeing you in Erlangen, July 2024!

Back  Top

3-3-13(2024-07-29) 'Conversational Grounding in the Age of Large Language Models,' @ TheEuropean Summer School in Logic, Language, and Information (ESSLLI) 2024, Leuven, Belgium
We are excited to announce an upcoming workshop, 'Conversational Grounding in the Age of Large Language Models,' to be held as part of the European Summer School in Logic, Language, and Information (ESSLLI) 2024. This workshop is dedicated to exploring the intricate and often overlooked mechanism of Conversational Grounding within dialogue systems. It's a vital process through which dialogue participants create, exchange, and apply shared knowledge. This mechanism relies on the sophisticated interplay of multimodal signals, including visual and acoustic cues, combined with inferential reasoning and dynamic feedback, all essential for achieving mutual understanding. The workshop is open to researchers and practitioners - both senior scholars and graduate students - from a variety of disciplines, including linguistics, cognitive science, and computer science.

Details:

When: July 29th - August 2nd, 2024 (week one of ESSLLI)
Hosted by: the European Summer School in Logic, Language, and Information <https://2024.esslli.eu/>
Where: Leuven, Belgium

Participants will be chosen on the basis of a 2-page extended abstract. For more information on how to submit, as well as registration details, please visit the workshop website: https://articulab.hcii.cs.cmu.edu/conversational-grounding-in-the-age-of-large-language-models/
Back  Top

3-3-14(2024-08-07) The 7th IEEE International Conference on Multimedia Information Processing and Retrieval (MIPR 2024) , San Jose, CA, USA

The 7th IEEE International Conference on
Multimedia Information Processing and Retrieval (MIPR 2024)

August 7 – 9, 2024 San Jose, CA, USA

http://www.ieee-mipr.org
https://sites.google.com/view/mipr2024

Joint conference collocation with the IEEE International Conference on
Information Reuse and Integration for Data Science (IRI) 2024

A vast amount of multimedia data is becoming accessible, making the
understanding of spatial and/or temporal phenomena crucial for many
applications. This necessitates the utilization of techniques in the
processing, analysis, search, mining, and management of multimedia
data. The 7th IEEE International Conference on Multimedia Information
Processing and Retrieval (IEEE-MIPR 2024) will take place in San Jose,
CA, USA on August 7–9, 2024, to provide a forum for original research
contributions and practical system design, implementation, and
applications of multimedia information processing and retrieval. The
target audiences include university researchers, scientists, industry
professionals, software engineers and graduate students. The event
includes a main conference as well as multiple associated keynote
speeches, workshops, challenge contests, tutorials, and panels.

Topics

Generative and Foundation Models in Multimedia
- AI-generated Media
- Foundation Models in Vision
- Security of Large AI Models
- Multimodal Media Detection
- Generation and Detection with Diffusion Models
- Media Generation with Large Language Models
- Visual and Vision-Language Pre-training
- Generic Vision Interface
- Alignments in Text-to-image Generation
- Large Multimodal Models
- Multimodal Agents

Trustworthy AI in Multimedia
- AI Reliability for Multimedia Applications and Systems
- AI Fairness for Multimedia Applications and Systems
- AI Robustness for Multimedia Applications and Systems
- Attack and Defense for Multimedia Applications and Systems

Video/Audio in Multimedia
- Speech/Voice Synthesis
- Analysis of Conversation
- Speaker and Language Identification
- Audio Signal Analysis
- Spoken Language Generation
- Automatic Speech Recognition
- Spoken Dialogue and Conversational AI Systems

Vision and Content Understanding
- Multimedia Telepresence and Virtual/Augmented/Mixed Reality
- Visual Concept Detection
- Object Detection and Tracking
- 3D Modeling, Reconstruction, and Interactive Applications
- Multimodal/Multisensor Interfaces, Integration, and Analysis
- Effective and Scalable Solution for Big Data Integration
- Affective and Perceptual Multimedia

Multimedia Retrieval
- Multimedia Search and Recommendation
- Web-Scale Retrieval
- Relevance Feedback, Active/Transfer Learning
- 3D and Sensor Data Retrieval
- Multimodal Media (Images, Videos, Texts, Graph/Relationship) Retrieval
- High-Level Semantic Multimedia Features

Machine/Deep Learning/Data Mining
- Deep Learning in Multimedia Data and Multimodal Fusion
- Deep Cross-Learning for Novel Features and Feature Selection
- High-Performance Deep Learning (Theories and Infrastructures)
- Spatio-Temporal Data Mining
- Novel Dataset for Learning and Multimedia

Multimedia Systems and Infrastructures
- Multimedia Systems and Middleware
- Software Infrastructure for Data Analytics
- Distributed Multimedia Systems and Cloud Computing

Networking in Multimedia
- Internet Scale System Design
- Information Coding for Content Delivery

Data Management
- Multimedia Data Collection, Modeling, Indexing, or Storage
- Data Integrity, Security, Protection, Privacy
- Standards and Policies for Data Management

Novel Applications
- Multimedia Applications for Health and Sports
- Multimedia Applications for Culture and Education
- Multimedia Applications for Fashion and Living
- Multimedia Applications for Security and Safety

Internet of Multimedia Things
- Real-Time Data Processing
- Autonomous Systems (Driverless Cars, Robots, Drones, etc.)
- Mobile and Wearable Multimedia

User Experience and Engagement
- Quality of Experience
- User Engagement
- Emotional and Social Signals

Paper Submission: The conference will accept regular papers (6 pages),
short papers (4 pages), and demo papers (4 pages), including
references. Authors are encouraged to compare their approaches,
qualitatively or quantitatively, with existing work and explain the
strengths and weaknesses of the new approaches. The CMT online
submission site is at https://cmt3.research.microsoft.com/MIPR2024.
All accepted papers presented in MIPR 2024 will be published in the
conference proceedings which will also be available online at the IEEE
Xplore digital library. 

Important Dates:
- Paper (regular/short/demo) submission: April 15, 2024 Pacific Time
- Paper review available: May 8, 2024
- Notification of acceptance: May 20, 2024
- Camera-ready deadline: June 17, 2024

Back  Top

3-3-15(2024-08-10) ASVspoof 5 challenge @Interspeech 2024, Kos Island, Greece

 

ASVspoof 5 challenge (Robust Speech Deepfake Detection and Automatic Speaker Verification) 

 

Registration for participation in the ASVspoof 5 challenge is now open: https://www.asvspoof.org/

The registration deadline is July 10th, 2024


The challenge features two tracks:

Track 1: Speech deepfake detection (DF) - 'real vs fake' speech detection

Track 2: Spoofing-robust automatic speaker verification (SASV)


ASVspoof is a community-driven, not-for-profit challenge series which promotes the development and benchmarking of generalizable speech deepfake detection and automatic speaker verification systems intended to operate reliably in the face of spoofing attacks. The challenge data is constructed using public speech resources, and organizers provide baseline systems and reference metrics. Compared to previous challenge editions, ASVspoof 5 involves a substantially larger amount of data, enabling participants to develop more sophisticated detection models. To promote robustness, as well as the development of solutions with practical applications in the wild, ASVspoof 5 focuses on non-studio-quality speech data.


How to participate?


1. Read the evaluation plan, available at www.asvspoof.org 

2. Join the e-mail list: send an e-mail to sympa@lists.asvspoof.org with 'subscribe ASVspoof5' as the subject line.

3. Register: https://shorturl.at/cqrtK 


Timeline:

- Training and development data available:      May 20, 2024

- Challenge leaderboard (Codalab) opens:        June 05, 2024 

- Evaluation data available:                  June 12, 2024 

- Challenge submissions due:                  July 17, 2024

- ASVspoof 5 workshop paper deadline:           July 31, 2024

- Acceptance notifications:                   August 10, 2024

- ASVspoof 5 workshop at Interspeech:         August 31, 2024


Please note that access to Codalab will be granted to registered participants only. Further details are available in the evaluation plan, which will be supplemented with additional details as the challenge progresses.


The ASVspoof 5 challenge organisers

info@asvspoof.org

Back  Top

3-3-16(2024-08-12) Summer School 'INTRODUCTION TO SPEECH AND MACHINE LEARNING', University of Eastern Finland, Joensuu, Finland

 

********************************************************
INTRODUCTION TO SPEECH AND MACHINE LEARNING
 
University of Eastern Finland (UEF) summer school
August 12—16, 2024
Joensuu, Finland
 
Registration (deadline: June 15, 2024)
*********************************************************
 
ORGANIZER
 
Computational Speech Group, School of Computing, UEF
Summer school chair: Tomi H. Kinnunen
 
CONFIRMED LECTURERS (in alphabetical order)
 
Rosa González Hautamäki, University of Oulu & UEF, Finland
Cemal Hanilci, Bursa Technical University, Turkey
Tomi H. Kinnunen, UEF, Finland
Sébastien Le Maguer, University of Helsinki, Finland
Jagabandhu Mishra, UEF, Finland
 
COURSE ASSISTANTS (alphabetic order)
 
Manasi Chhibber, UEF
Oğuzhan Kurnaz, Bursa Technical University, Turkey
Vishwanath Pratap Singh, UEF, Finland
 
COURSE OVERVIEW
 
University of Eastern Finland (UEF) hosts a number of different summer courses in August 2024. Introduction to Speech and Machine Learning is intended as a high-level introduction to machine learning techniques and their application to selected speech technology applications. The provisional course topics can be found on the course website at https://vpspeech.github.io/summerschool2024
 
The course includes lectures, quizzes (in Moodle), practicals, and a learning diary. While the basics of programming are necessary, we do not assume prior knowledge of speech or machine learning. The primary programming language used is Python (+ libraries and toolkits, including numpy, pyTorch). The practicals are carried out in the Google Colab environment. 
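For orientation, a minimal NumPy sketch of the kind of exercise such practicals often include, computing the short-time energy of a synthetic waveform (a hypothetical example, not taken from the actual course material):

```python
# Minimal sketch: short-time (frame-wise) energy of a synthetic waveform.
import numpy as np

sr = 16000
t = np.arange(0, 1.0, 1 / sr)
wave = np.sin(2 * np.pi * 220 * t) * np.exp(-3 * t)   # decaying 220 Hz tone

frame_len, hop = 400, 160                              # 25 ms frames, 10 ms hop
energies = [
    float(np.sum(wave[i:i + frame_len] ** 2))
    for i in range(0, len(wave) - frame_len, hop)
]
print(f"{len(energies)} frames, max energy {max(energies):.1f}")
```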
 
The course is taught in English and amounts to either 3 or 5 ECTS credits. The number of credits depends on whether or not the participant wishes to undertake 2 ECTS credits' worth of project work, which must be submitted no later than 2 weeks after contact teaching ends. The course will be assessed as pass/fail. Students who pass the course will receive a course certificate.
 
SOCIAL PROGRAMME
 
The participants may participate in the social activities organized by the UEF. Please refer to https://www.uef.fi/en/uef-summer-school for updates.
 
MORE INFORMATION
 
Course-related matters:
Prof. Tomi H. Kinnunen, tomi.kinnunen@uef.fi 
 
General summer school matters (registration, social programme, etc)
Back  Top

3-3-17(2024-09-06) 4th SPSC Symposium with 3rd VoicePrivacy Challenge Workshop (satellite event of Interspeech)

4th SPSC Symposium

with

3rd VoicePrivacy Challenge Workshop

Call for Papers


Speech is becoming an increasingly important means of human-machine interaction, with numerous deployments in biometrics, forensics and, above all, information access via virtual voice assistants. Alongside these developments, the need for robust and secure algorithms and applications that protect user security and privacy has come to the forefront of speech-based research and development.

The fourth edition of the Symposium on Security and Privacy in Speech Communication, combined this year with the VoicePrivacy Challenge, focuses on speech and voice as the media through which we express ourselves. Since speech communication can be used to command virtual assistants, to convey emotions, or to identify oneself, the symposium addresses the question of how to strengthen security and privacy for these forms of voice representation in user-centred human/machine interaction. The symposium therefore sees a strong demand for interdisciplinary exchange and aims to bring together researchers and practitioners from several disciplines, specifically: signal processing, cryptography, security, human-computer interaction, law and anthropology.

The VoicePrivacy initiative spearheads efforts to develop privacy-preserving solutions for speech technology. It aims to consolidate the newly formed community, to develop the task and metrics, and to evaluate progress in anonymisation solutions using common datasets, protocols and metrics. VoicePrivacy takes the form of a competitive challenge. In line with previous editions of the VoicePrivacy Challenge, the current edition focuses on voice anonymisation. Participants must develop anonymisation systems that remove the speaker's identity while keeping the linguistic content and paralinguistic attributes intact. This edition focuses on preserving emotional state, which is the key paralinguistic attribute in many real-world applications of voice anonymisation. All participants are encouraged to submit papers related to their challenge participation to the SPSC Symposium, as well as other scientific papers related to speaker anonymisation and voice privacy. More details can be found on the VoicePrivacy Challenge web page: https://www.voiceprivacychallenge.org/

To strengthen the efforts behind both events, facilitate joint discussions and extend interdisciplinary exchange, we have decided to join our teams and organise a common event. For the general symposium, we accept contributions on related topics, as well as progress reports, project dissemination, theoretical discussions and 'work in progress'. In addition, guests from academia, industry and public institutions, as well as interested students, are invited to attend the conference without having to bring their own contribution. All accepted submissions will appear in the symposium proceedings published in the ISCA Archive.


SPSC TOPICS

Technical perspectives include (but are not limited to):

  • Privacy-preserving speech communication

    • Speech recognition and processing

    • Speech perception, production and acquisition

    • Speech synthesis

    • Speech coding and enhancement

    • Speaker and language identification

    • Phonetics, phonology and prosody

    • Paralinguistics

  • Cybersecurity

    • Privacy engineering and secure computation

    • Network security and adversarial robustness

    • Mobile security

    • Cryptography

    • Biometrics

  • Machine learning

    • Federated learning

    • Disentangled representations

    • Differential privacy

    • Distributed learning

  • Natural language processing

    • The Web as corpus and resources

    • Tagging, parsing and document analysis

    • Discourse and pragmatics

    • Machine translation

    • Linguistic theories and psycholinguistics

    • Semantic inference and information extraction

Humanities and social perspectives include (but are not limited to):

  • Human-machine interfaces (speech as a medium)

    • Usable security and privacy

    • Ubiquitous computing

    • Pervasive computing and communication

    • Cognitive science

  • Ethics and law

    • Privacy and data protection

    • Media and communication

    • Identity management

    • Mobile e-commerce

    • Data in digital media

  • Digital humanities

    • Acceptance and trust studies

    • User experience research on practice

    • Interdisciplinary co-development

    • Data citizenship

    • Futures studies

    • Situated ethics

    • STS perspectives


Submission:

Papers for the SPSC Symposium may contain up to eight pages of text. The length should be chosen appropriately to present the topic to an interdisciplinary community. Paper submissions must conform to the format defined in the paper preparation guidelines and detailed in the author's kit. Papers must be submitted via the online paper submission system through the link on the SPSC website. The working language of the conference is English, and papers must be written in English. All accepted papers will be published in the ISCA Archive alongside Interspeech papers and those of associated ISCA workshops.

Reviews:

At least three double-blind reviews will be carried out, and we aim to obtain feedback from interdisciplinary experts for each submission. For contributions to the VoicePrivacy Challenge, the review will focus on the system descriptions and results.

Important dates:

Long paper submission deadline (up to 8 pages, excluding references): 15 June 2024

Short paper submission deadline (up to 4 pages, including references): 15 June 2024

VoicePrivacy Challenge paper submission deadline (4 to 6 pages, excluding references): 15 June 2024

VoicePrivacy Challenge results and system descriptions: 15 June 2024

Author notification (challenge papers): 5 July 2024

Author notification (long and short papers): 30 July 2024

Final (camera-ready) paper submission: 15 August 2024

Symposium: 6 September 2024

Venue:

The Symposium venue will be announced soon; we plan to co-locate it with Interspeech 2024. Hybrid participation is possible.


Back  Top

3-3-18(2024-09-06) VoicePrivacy 2024 Challenge, Kos Island, Greece

*******************************************

VoicePrivacy 2024 Challenge

http://www.voiceprivacychallenge.org

  • Paper and results submission deadline: 15th June 2024

  • Workshop (Kos Island, Greece in conjunction with INTERSPEECH 2024): 6th September 2024

*******************************************

Dear colleagues,

The challenge task is to develop a voice anonymization system for speech data which conceals the speaker’s voice identity while protecting linguistic content and emotional states.

Registration is still open. We have released 4 new baselines that offer greater privacy protection, and the final list of data and pretrained models allowed to build and train your own anonymization system.

Please find more information in the updated VoicePrivacy 2024 Challenge Evaluation Plan: https://www.voiceprivacychallenge.org/docs/VoicePrivacy_2024_Eval_Plan_v2.0.pdf

VoicePrivacy 2024 is the third edition, which will culminate in a joint workshop held in Kos Island, Greece in conjunction with INTERSPEECH 2024 and in cooperation with The Fourth ISCA Symposium on Security and Privacy in Speech Communication.

Registration:

Participants are requested to register for the evaluation. Registration should be performed once only for each participating entity using the following form: Registration. You will receive a confirmation email within ~24 hours after successful registration, otherwise or in case of any questions please contact the organizers: organisers@lists.voiceprivacychallenge.org

Subscription:

To stay up to date with VoicePrivacy, please join

VoicePrivacy - Google Groups and VoicePrivacy (@VoicePrivacy) on X.

Sponsor:

Nijta

----------- 

Best regards,

The VoicePrivacy 2024 Challenge Organizers,

Pierre Champion - Inria, France

Nicholas Evans - EURECOM, France

Sarina Meyer - University of Stuttgart, Germany

Xiaoxiao Miao - Singapore Institute of Technology, Singapore

Michele Panariello - EURECOM, France

Massimiliano Todisco - EURECOM, France

Natalia Tomashenko - Inria, France

Emmanuel Vincent - Inria, France

Xin Wang - NII, Japan

Junichi Yamagishi - NII, Japan





 

Back  Top

3-3-19(2024-09-09) Cf Labs Proposals @CLEF 2024, Grenoble, France

Call for Labs Proposals @CLEF 2024

At its 25th edition, the Conference and Labs of the Evaluation Forum (CLEF) is a continuation of the very successful series of evaluation campaigns of the Cross Language Evaluation Forum (CLEF) which ran between 2000 and 2009 and established a framework for the systematic evaluation of information access systems, primarily through experimentation on shared tasks. As a leading annual international conference, CLEF uniquely combines evaluation laboratories and workshops with research presentations, panels, posters and demo sessions. In 2024, CLEF takes place on 9-12 September at the University of Grenoble Alpes, France.

Researchers and practitioners from all areas of information access and related communities are invited to submit proposals for running evaluation labs as part of CLEF 2024. Proposals will be reviewed by a lab selection committee, composed of researchers with extensive experience in evaluating information retrieval and extraction systems. Organisers of selected proposals will be invited to include their lab in the CLEF 2024 labs programme, possibly subject to suggested modifications to their proposal to better suit the CLEF lab workflow or timeline.

Background

The CLEF Initiative (http://www.clef-initiative.eu/) is a self-organised body whose main mission is to promote research, innovation, and development of information access systems with an emphasis on multilingual information in different modalities - including text and multimedia - with various levels of structure. CLEF promotes research and development by providing an infrastructure for:

  1. independent evaluation of information access systems;

  2. investigation of the use of unstructured, semi-structured, highly-structured, and semantically enriched data in information access; 

  3. creation of reusable test collections for benchmarking; 

  4. exploration of new evaluation methodologies and innovative ways of using experimental data; 

  5. discussion of results, comparison of approaches, exchange of ideas, and transfer of knowledge.

Scope of CLEF Labs

We invite submission of proposals for two types of labs:

  1. “Campaign-style” Evaluation Labs for specific information access problems (during the twelve months period preceding the conference), similar in nature to the traditional CLEF campaign “tracks”. Topics covered by campaign-style labs can be inspired by any information access-related domain or task.

  2. Labs that follow a more classical “workshop” pattern, exploring evaluation methodology, metrics, processes, etc. in information access and closely related fields, such as natural language processing, machine translation, and human-computer interaction.

We highly recommend organisers new to the CLEF format of shared task evaluation campaigns to first consider organising a lab workshop to discuss the format of their proposed task, the problem space and practicalities of the shared task. The CLEF 2024 programme will reserve about half of the conference schedule for lab sessions. During the conference, the lab organisers will present their overall results in overview presentations during the plenary scientific paper sessions to give non-participants insights into where the research frontiers are moving. During the conference, lab organisers are expected to organise separate sessions for their lab with ample time for general discussion and engagement with all participants - not just those presenting campaign results and papers. Organisers should plan time in their sessions for activities such as panels, demos, poster sessions, etc. as appropriate. CLEF is always interested in receiving and facilitating innovative lab proposals. 

Potential task proposers unsure of the suitability of their task proposal or its format for inclusion at CLEF are encouraged to contact the CLEF 2024 Lab Organizing Committee Chairs to discuss its suitability or design at an early stage.

Proposal Submission

Lab proposals must provide sufficient information to judge the relevance, timeliness, scientific quality, benefits for the research community, and the competence of the proposers to coordinate the lab. Each lab proposal should identify one or more organisers as responsible for ensuring the timely execution of the lab. Proposals should be 3 to 4 pages long and should provide the following information:

  1. Title of the proposed lab.
     

  2. A brief description of the lab topic and goals, its relevance to CLEF and the significance for the field.
     

  3. A brief and clear statement on usage scenarios and domain to which the activity is intended to contribute, including the evaluation setup and metrics.
     

  4. Details on the lab organiser(s), including identifying the task chair(s) responsible for ensuring the running of the task. This should include details of any previous involvement in organising or participating in evaluation tasks at CLEF or similar campaigns.
     

  5. The planned format of the lab, i.e., campaign-style (“track”) or workshop.
     

  6. Is the lab a continuation of an activity from previous year(s) or a new activity?  

  a. For activities continued from previous year(s): Statistics from previous years (number of participants/runs for each task), a clear statement on why another edition is needed, an explicit listing of the changes proposed, and a discussion of lessons to be learned or insights to be gained.

  b. For new activities: A statement on why a new evaluation campaign is needed and how the community would benefit from the activity.
     

  7. Details of the expected target audience, i.e., who do you expect to participate in the task(s), and how do you propose to reach them.
     

  8. Brief details of tasks to be carried out in the lab. The proposal should clearly motivate the need for each of the proposed tasks and provide evidence of its capability of attracting enough participation. The dataset which will be adopted by the lab needs to be described and motivated with respect to the goals of the lab; indications on how the dataset will be shared are also useful. It is fine for a lab to have a single task, but labs often contain multiple closely related tasks; more than 3 tasks need a strong motivation, to avoid useless fragmentation.
     

  9. Expected length of the lab session at the conference: half-day, one day, two days. This should include high-level details of the planned structure of the session, e.g. participant presentations, invited speaker(s), panels, etc., to justify the requested session length.
     

  10. Arrangements for the organisation of the lab campaign: who will be responsible for activities within the task; how will data be acquired or created, what tools or methods will be used, e.g., how will necessary queries be created or relevance assessment carried out; any other information which is relevant to the conduct of your lab.
     

  11. If the lab proposes to set up a steering committee to oversee and advise its activities, include names, addresses, and homepage links of people you propose to be involved.

Lab proposals must be submitted at the following address:

https://easychair.org/conferences/?conf=clef2024

choosing the “CLEF 2024 Lab Proposals” track.

Reviewing Process

Each submitted proposal will be reviewed by the CLEF 2024 Lab Organizing Committee. The acceptance decision will be sent by email to the responsible organiser by 28 July 2023. The final length of the lab session at the conference will be determined based on the overall organisation of the conference and the number of participant submissions received by a lab.

 

Advertising Labs at CLEF 2023 and ECIR 2024

Organisers of accepted labs are expected to advertise their labs at both CLEF 2023 (18-21 September 2023, Thessaloniki, Greece) and ECIR 2024 (24-28 March 2024, Glasgow, Scotland). So, at least one lab representative should attend these events.

Advertising at CLEF 2023 will consist of displaying a poster describing the new lab, running a break-out session to discuss the lab with prospective participants, and advertising/announcing it during the closing session.

Advertising at ECIR 2024 will consist of submitting a lab description to be included in ECIR 2024 proceedings (11 October 2023) and advertising the lab in a booster session during ECIR 2024.

Mentorship Program for Lab Proposals from newcomers

CLEF 2019 introduced a mentorship program to support the preparation of lab proposals for newcomers to CLEF. The program will be continued at CLEF 2024 and we encourage newcomers to refer to Friedberg et al. (2015) for initial guidance on preparing their proposal:

Friedberg I, Wass MN, Mooney SD, Radivojac P. Ten simple rules for a community computational challenge. PLoS Comput Biol. 2015 Apr 23;11(4):e1004150.

The CLEF newcomers mentoring program offers help, guidance, and feedback on the writing of your draft lab proposal by assigning a mentor to you, who help you in preparing and maturing the lab proposal for submission. If your lab proposal falls into the scope of an already existing CLEF lab, the mentor will help you to get in touch with those lab organisers and team up forces.

Lab proposals for mentorship must be submitted at the following address:

https://easychair.org/conferences/?conf=clef2024

choosing the “CLEF 2024 Lab Mentorship” track.

Important Dates

  • 29 May 2023: Requests for mentorship submission (only newcomers)

  • 29 May 2023 - 16 June 2023: Mentorship period

  • 7 July 2023: Lab proposals submission (newcomers and veterans)

  • 28 July 2023: Notification of lab acceptance

  • 18-21 Sep 2023: Advertising Accepted Labs at CLEF 2023, Thessaloniki, Greece

  • 11 October 2023: Submission of short lab description for ECIR 2024

  • 13 November 2023: Lab registration opens

  • 24-28 March 2024: Advertising labs at ECIR 2024, Glasgow, UK

CLEF 2024 Lab Chairs

  • Petra Galuscakova, University of Stavanger, Norway

  • Alba García Seco de Herrera, University of Essex, UK

CLEF 2024 Lab Mentorship Chair

  • Liana Ermakova, Université de Bretagne Occidentale, France

  • Florina Piroi, TU Wien, Austria

Back  Top

3-3-20(2024-09-09) The CLEF Cross Language Image Retrieval Track, Grenoble, France
** Call for Participation **
 
As part of the ImageCLEF2024 Lab - https://www.imageclef.org/ (The CLEF Cross Language Image Retrieval Track), which is a part of the 15th edition of CLEF 2024 (https://clef2024.imag.fr/), scheduled to take place from September 9 to 12, 2024, in Grenoble, we are pleased to introduce the first edition of the ToPicto task.
 
The goal of ToPicto is to bring together the scientific community (linguists, computer scientists, translators, etc.) to develop new translation methods to translate either speech or text into a corresponding sequence of pictograms.
 
We propose two distinct tasks:
- Text-to-Picto focuses on the automatic generation of a sequence of terms (each associated with an ARASAAC pictogram - https://arasaac.org/) from a French text. This challenge can be seen as a translation problem, where the source language is French, and the target language corresponds to the terms associated with each French pictogram.
- Speech-to-Picto aims to translate an audio segment into a sequence of terms, each associated with an ARASAAC pictogram. The challenge here lies in the absence of using textual data as input.
 
More information is available here: https://www.imageclef.org/2023/topicto
The training data has just been made public; it's your turn to engage!
 
To participate, follow the instructions provided here: https://www.imageclef.org/2024#registration.
 
Registrations for the tasks are now open:
- Text-to-Picto: https://ai4media-bench.aimultimedialab.ro/competitions/18/
- Speech-to-Picto: https://ai4media-bench.aimultimedialab.ro/competitions/19/
 
Important dates:
- 22.04.2024 registration closes for all ImageCLEF tasks
- 01.04.2024 Test data release starts
- 01.05.2024 Deadline for submitting the participants runs
- 13.05.2024 Release of the processed results by the task organizers
- 31.05.2024 Deadline for submission of working notes papers by the participants
- 21.06.2024 Notification of acceptance of the working notes papers
- 08.07.2024 Camera ready working notes papers
- 09-12.09.2024 CLEF 2024, Grenoble, France
Back  Top

3-3-21(2024-09-18) CfDemonstrations for the 21st International Conference on Content-based Multimedia Indexing (CBMI) , Reykjavík, Iceland

Call for Demonstrations for the 21st International Conference on Content-based Multimedia Indexing (CBMI)

September 18 – 20, 2024 in Reykjavík, Iceland

 

Conference website: https://cbmi2024.org/

 

(Apologies if you receive multiple copies of this call)

 

The 21st International Conference on Content-based Multimedia Indexing (CBMI) welcomes the submission of demonstration papers. We invite authors to report on and showcase novel and compelling demonstrations (software, methods and experiences) in all topic areas relevant to CBMI.
September 18 – 20, 2024 in Reykjavík, Iceland Multimedia Indexing (CBMI) welcomes the submission of demonstration papers. We invite authors to report on and showcase novel and compelling demonstrations (software, methods and experiences) in all topic areas relevant to CBMI

 

Submission Guidelines

The length of the papers should be up to 4 pages, in IEEE conference format, plus 1 page for references. One or two additional page(s) should be appended to illustrate what the demo involves and how it will be conducted on-site. This additional content will not be published in the conference proceedings, should the submission be accepted!  If possible, we also invite you to include a URL linking to a short video (max. 3 min) that shows the demonstration in action. 

 

Demonstration papers are subject to peer review in a single-blind process according to criteria such as novelty, interestingness, applications of or enhancements to state-of-the-art, and potential impact.

Submission Deadline

The extended submission deadline is May 6th, 2024 (AoE). To submit your paper, follow the instructions in the submission guidelines.

 

Infrastructure on Site

The conference will provide a table, power outlet, screen, wireless (shared) internet and a poster board. Presenters are expected to bring the necessary equipment (computers, etc.) themselves. If you have special needs (e.g., more space), please include a related note in the appendix of your submission (“Special Needs” section).

 

Should you have any questions regarding submissions, please contact the chairs at demo-chairs@cbmi2024.org


Back  Top

3-3-22(2024-09-18) CfP Special Session on 'Multimedia Indexing for eXtended Reality' at CBMI 2024, Reykjavik, Iceland

Call for Papers: Special Session on 'Multimedia Indexing for eXtended Reality' at CBMI 2024

https://cbmi2024.org/?page_id=100#MmIXR

21st International Conference on Content-based Multimedia Indexing (CBMI 2024).
18-20 September 2024, Reykjavik, Iceland - https://cbmi2024.org/

DESCRIPTION:
Extended Reality (XR) applications rely not only on computer vision for navigation and object placement but also require a range of multimodal methods to understand the scene or assign semantics to objects being captured and reconstructed. Multimedia indexing for XR thus encompasses methods for processes during XR authoring, such as indexing content to be used for scene and object reconstruction, as well as during the immersive experience, such as object detection and scene segmentation.
The intrinsic multimodality of XR applications involves new challenges like the analysis of egocentric data (video, depth, gaze, head/hand motion) and their interplay. XR is also applied in diverse domains, e.g., manufacturing, medicine, education, and entertainment, each with distinct requirements and data. Thus, multimedia indexing methods must be capable of adapting to the relevant semantics of the particular application domain.

TOPICS OF INTEREST:

  • Multimedia analysis for media mining, adaptation (to scene requirements), and description for use in XR experiences (including but not limited to AI-based approaches)

  • Processing of egocentric multimedia datasets and streams for XR (e.g., egocentric video and gaze analysis, active object detection, video diarization/summarization/captioning)

  • Cross- and multi-modal integration of XR modalities (video, depth, audio, gaze, hand/head movements, etc.)

  • Approaches for adapting multimedia analysis and indexing methods to new application domains (e.g., open-world/open-vocabulary recognition/detection/segmentation, few-shot learning)

  • Large-scale analysis and retrieval of 3D asset collections (e.g., objects, scenes, avatars, motion capture recordings)

  • Multimodal datasets for scene understanding for XR

  • Generative AI and foundation models for multimedia indexing and/or synthetic data generation

  • Combining synthetic and real data for improving scene understanding

  • Optimized multimedia content processing for real-time and low-latency XR applications

  • Privacy and security aspects and mitigations for XR multimedia content

     

IMPORTANT DATES:
Submission of papers: 22 March 2024
Notification of acceptance: 3 June 2024
CBMI conference: 18-20 September 2024

SUBMISSION:
The session will be organized as an oral presentation session. Contributions may be long papers describing novel methods or their adaptation to specific applications, or short papers describing emerging work or open challenges.

SPECIAL SESSION ORGANISERS:
Fabio Carrara, Artificial Intelligence for Multimedia and Humanities Laboratory, ISTI-CNR, Pisa, Italy

Werner Bailer, Intelligent Vision Applications Group, JOANNEUM RESEARCH, Graz, Austria

Lyndon J. B. Nixon, MODUL Technology GmbH and Applied Data Science School at MODUL University, Vienna, Austria

Vasileios Mezaris, Information Technologies Institute / Centre for Research and Technology Hellas, Thessaloniki, Greece


3-3-23(2024-09-18) CfP Special Session on 'Multimodal Insights for Disaster Risk Management and Applications, (MIDRA)' at CBMI 2024, Reykjavik, Iceland

Call for Papers: Special Session on 'Multimodal Insights for Disaster Risk Management and Applications (MIDRA)' at CBMI 2024

https://cbmi2024.org/?page_id=100#MIDRA

21st International Conference on Content-based Multimedia Indexing (CBMI 2024).
18-20 September 2024, Reykjavik, Iceland - 
https://cbmi2024.org/

Disaster management in all its phases, from preparedness and prevention to response and recovery, has an abundance of multimedia data at its disposal, including valuable assets like satellite images, videos from UAVs or static cameras, and social media streams. Such multimedia data is operationally valuable not only for civil protection agencies but also for the private sector that quantifies risk. Indexing data from crisis events presents Big Data challenges for effective analysis and retrieval due to its variety, velocity, volume, and veracity.

The advent of deep learning and multimodal data fusion offers an unprecedented opportunity to overcome these challenges and fully unlock the potential of disaster event multimedia data. Through the strategic utilization of different data modalities, researchers can significantly enhance the value of these datasets, uncovering insights that were previously beyond reach, yielding actionable information, and supporting real-life decision-making.

This special session actively seeks research papers in the domain of multimodal analytics and their applications in the context of crisis event monitoring through knowledge extraction and multimedia understanding. Emphasis is placed on recognizing the intrinsic value of spatial information when integrated with other data modalities.

The special session serves as a collaborative platform for communities focused on specific crisis events, such as forest fires, volcanic unrest or eruptions, earthquakes, floods, tsunamis, and extreme weather events, which have increased significantly due to the climate crisis. It fosters the exchange of ideas, methodologies, and software tailored to address challenges in these domains, aiming to encourage fruitful collaborations and the mutual enrichment of insights and expertise among diverse communities.

This special session includes presentation of novel research within the following domains:

  • Lifelog computing
  • Urban computing
  • Satellite computing and earth observation
  • Multimodal data fusion
  • Social media

Within these domains, the topics of interest include (but are not restricted to):

  • Multimodal analytics and retrieval techniques for crisis event multimedia data.
  • Deep learning and neural networks for interpretability, understanding, and explainability in artificial intelligence applied to natural disasters.
  • Satellite image analysis and fusion with in-situ data for crisis management.
  • Integration of multimodal data for comprehensive risk assessment.
  • Application of deep learning techniques to derive insights for risk mitigation.
  • Development of interpretative models for better understanding of risk factors.
  • Utilization of diverse data modalities (text, images, sensors) for risk management.
  • Implementation of multimodal analytics in predicting and managing natural disasters.
  • Application of multimodal insights in insurance risk assessment.
  • Enhanced decision-making through the fusion of geospatial and multimedia data.

Important Dates:
Submission of papers: 22 March 2024
Notification of acceptance: 3 June 2024
CBMI conference: 18-20 September 2024

Organisers:

  • Maria Pegia, Information Technologies Institute / Centre for Research and Technology Hellas, Greece.
  • Ilias Gialampoukidis, Information Technologies Institute / Centre for Research and Technology Hellas, Greece.
  • Ioannis Papoutsis, National Observatory of Athens & National Technical University of Athens, Greece.
  • Krishna Chandramouli, Venaka Treleaf GbR, Germany.
  • Stefanos Vrochidis, Information Technologies Institute / Centre for Research and Technology Hellas, Greece.

Please direct correspondence to midra@cbmi2024.org


3-3-24(2024-09-18) Special Session on 'Explainability in Multimedia Analysis' (ExMA) @ CBMI 2024, Reykjavik, Iceland

The 21st International Conference on Content-based Multimedia Indexing (CBMI 2024) will be held in Reykjavik, Iceland, on September 18-20, 2024: https://cbmi2024.org/

The conference will bring together leading experts from academia and industry interested in the broad field of content-based multimedia indexing and applications.

The Special Session on 'Explainability in Multimedia Analysis' (ExMA) addresses multimedia analysis applications, such as person detection/tracking, face recognition, or lifelog analysis, which may affect sensitive personal information. This raises both legal issues, e.g. concerning data protection and the ongoing European AI regulation, and ethical issues related to potential bias in the systems or misuse of these technologies. This special session focuses on AI-based explainability technologies in multimedia analysis.

CBMI 2024 is supported by ACM SIGMM, and the proceedings will be available in the ACM Digital Library.

We would like to invite you to consider contributing a paper to this special session.

CBMI's important dates: https://cbmi2024.org/?page_id=211

Looking forward to seeing you at CBMI 2024.
With best regards,
Chiara Galdi

Special session organisers: Chiara Galdi, Martin Winter, Romain Giot, Romain Bourqui


3-3-25(2024-09-18) Special Session on 'Content-Based Indexing for Audio and Music: From Analysis to Synthesis' @ CBMI 2024, Reykjavik, Iceland

The 21st International Conference on Content-based Multimedia Indexing (CBMI 2024) takes place September 18-20 in Reykjavik, Iceland.


We are delighted to have, as part of the conference, a Special Session on Audio entitled 'Content-Based Indexing for Audio and Music: From Analysis to Synthesis'.


Abstract: Audio has long been a key component of multimedia research. As far as indexing is concerned, the research and industrial context has changed drastically in the last 20 years or so. Today, applications of audio indexing range from karaoke to singing voice synthesis and creative audio design. This special session aims to bring together researchers proposing new tools or paradigms for audio and music processing in the context of indexing and corpus-based generation.


You are kindly encouraged to submit a paper related to the topic of the special session according to the CBMI guidelines:

  • Regular full papers: 6 pages, plus additional pages for the list of references

  • Regular short papers: 4 pages, plus additional pages for the list of references


Important dates

  • March 22: Regular and special session paper submissions

  • June 3: Notification of acceptance 

  • Early July: Camera ready version of accepted papers



As of now, we already have three invited talks on the following topics:

  • Cynthia C. S. Liem, Doğa Taşcılar, and Andrew M. Demetriou: A quest through interconnected datasets: lessons from highly-cited ICASSP papers

  • Rémi Mignot and Geoffroy Peeters: Learning invariance to sound modifications for music indexing and alignment

  • Cyrus Vahidi: Large-scale music indexing for multimodal similarity search


Please join us in Reykjavik!


Kindly yours,

François Pachet and Mathieu Lagrange

contact us: mathieu lagrange ls2n fr



3-3-26(2024-09-18) The 21st International Conference on Content-Based Multimedia Indexing — CBMI 2024, Reykjavik, Iceland

 

Last Call for Papers (with Final Deadline Extension) for the

21st International Conference on Content-Based Multimedia Indexing — CBMI 2024

September 18 – 20, 2024 in Reykjavik, Iceland

 

**** The CBMI 2024 submission deadline has been extended to April 12, 2024

**** The conference proceedings will be published by IEEE

 

After successful editions across Europe in France, Austria, Italy, the UK, the Czech Republic, and Hungary, the Content-Based Multimedia Indexing (CBMI) conference will take place in Reykjavík, Iceland, in September 2024. CBMI aims at bringing together the various communities involved in all aspects of content-based multimedia indexing for retrieval, browsing, management, visualisation, and analytics. We encourage contributions both on theoretical aspects and on applications of CBMI in the new era of Artificial Intelligence. Authors are invited to submit previously unpublished research papers highlighting significant contributions addressing these topics. In addition, special sessions on specific technical aspects or application domains are planned.

 

Conference Website: http://cbmi2024.org/

 

The conference proceedings will be published by IEEE. Authors can submit full papers (6 pages + references), short papers (4 pages + references), special session papers (6 pages + references), and demonstration proposals (4 pages + 1 page demonstration description + references). Authors of high-quality papers accepted to the conference may be invited to submit extended versions of their contributions to a special journal issue of MTAP. Submissions to CBMI are peer reviewed in a single-blind process. All types of papers must use the IEEE templates at https://www.ieee.org/conferences/publishing/templates.html. The language of the conference is English.
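For authors who have not worked with the IEEE conference format before, a minimal LaTeX skeleton along the lines below may be a useful starting point. It is an illustrative sketch only: it assumes the standard IEEEtran conference class distributed with the IEEE templates linked above, and the title, author details, and section content are placeholders rather than CBMI-specific requirements.

\documentclass[conference]{IEEEtran}
\usepackage{graphicx}  % for figures
\usepackage{cite}      % IEEE-style citation handling

\title{Placeholder Title of a CBMI 2024 Submission}
\author{\IEEEauthorblockN{First Author}
\IEEEauthorblockA{Institution, City, Country\\first.author@example.org}}

\begin{document}
\maketitle

\begin{abstract}
One-paragraph summary of the contribution.
\end{abstract}

\section{Introduction}
Body text; full and special session papers may use up to 6 pages plus
references, short papers up to 4 pages plus references.

% Assumes a references.bib file next to the .tex source.
\bibliographystyle{IEEEtran}
\bibliography{references}
\end{document}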

 

CBMI 2024 proposes eight special sessions:

  • AIMHDA: Advances in AI-Driven Medical and Health Data Analysis
  • Content-Based Indexing for Audio and Music: From Analysis to Synthesis
  • ExMA: Explainability in Multimedia Analysis
  • IVR4B: Interactive Video Retrieval for Beginners
  • MIDRA: Multimodal Insights for Disaster Risk Management and Applications
  • MmIXR: Multimedia Indexing for XR
  • Multimedia Analysis and Simulations for Digital Twins in the Construction Domain
  • Multimodal Data Analysis for Understanding of Human Behaviour, Emotions and their Reasons

 

Submission Deadlines

  • Full and short research papers are due April 12, 2024
  • Special session papers are due April 12, 2024
  • Demonstration submissions are due April 26, 2024

 

CBMI 2024 seeks contributions on the following research topics:

 

Multimedia Content Analysis and Indexing:

  • Media content analysis and mining
  • AI/ML approaches for content understanding
  • Multimodal and cross-modal indexing
  • Activity recognition and event-based multimedia indexing and retrieval 
  • Multimedia information retrieval (image, audio, video, text)
  • Conversational search and question-answering systems
  • Multimedia recommendation
  • Multimodal analytics, summarization, visualisation, organisation and browsing of multimedia content
  • Multimedia verification (e.g., multimodal fact-checking, deep fake analysis)
  • Large multimedia models, large language models and vision language models
  • Explainability in multimedia learning
  • Large scale multimedia database management
  • Evaluation and benchmarking of multimedia retrieval systems

 

Multimedia User Experiences:

  • Extended reality (AR/VR/MR) interfaces
  • Mobile interfaces
  • Presentation and visualisation tools
  • Affective adaptation and personalization
  • Relevance feedback and interactive learning

 

Applications of Multimedia Indexing and Retrieval:

  • Multimedia and sustainability
  • Healthcare and medical applications
  • Cultural heritage and entertainment applications
  • Educational and social applications
  • Egocentric, wearable and personal multimedia
  • Applications to forensics, surveillance and security
  • Environmental and urban multimedia applications
  • Earth observation and astrophysics

 

On behalf of the CBMI 2024 organisers,

Björn



—————— 
Björn Þór Jónsson (bjorn@ru.is)
Professor
Department of Computer Science
Reykjavik University (http://www.ru.is/)
Iceland


3-3-27(2024-09-20) 6th International Workshop on the History of Speech Communication Research, Budapest, Hungary

Sixth International Workshop on the History of Speech Communication Research

September 20–21, 2024, Budapest

 

After highly popular sessions at ICPhS in Prague and an exceptional workshop 'Lacerda 120' in Porto, we are happy to announce that the next HSCR workshop will take place in Budapest on September 20 and 21, 2024, organised by Judit Bóna and Mária Gósy of the Department of Applied Linguistics and Phonetics of ELTE University. The manuscript submission deadline is May 15, 2024. All details can be found on the workshop website: https://hscr2024.elte.hu/

The aim of this workshop is to bring together scholars who study the history of speech science, to learn more about the methods, findings, and results of our predecessors, and to better understand the speech research community’s present achievements.

Speech has been investigated from different perspectives, which necessitates a range of approaches and scientific methods. Previous contributions have analysed the contextual background of individual researchers, investigated how specific research practices developed over time, and examined researchers’ various approaches to their material and the link between form and meaning in speech communication research.

The special focus of the 6th HSCR workshop will be on the development of specific fields of speech communication, such as emerging phonology, progress in the analysis of both speech sounds and prosody, speech technology, the growing body of psycholinguistics, sociophonetics, clinical phonetics, etc. Researchers are encouraged to dig deep into history to trace the early steps and advancement of these specific fields of speech communication. The knowledge of our predecessors is frequently unknown, forgotten, or ignored for various reasons, and thus past achievements are not appropriately integrated into our shared understanding of speech science.

As always, contributions on other topics from the history of speech communication research will also be welcome. The facts uncovered about phonetic endeavours in the history of speech science may strongly inspire present research.

Manuscripts should be sent to the email address of the workshop: hscr2024@gmail.com. Please use the provided templates for your paper.

The proceedings will be published in the book series Studientexte zur Sprachkommunikation at TUDpress (Technical University Dresden). The HSCR proceedings will be published in print and also stored electronically in the ISCA archive.

For any inquiries, please use the workshop email address: hscr2024@gmail.com


3-3-28(2024-09-25) Second International Multimodal Communication Symposium (MMSYM 2024), Goethe University, Frankfurt, Germany

 

We are pleased to announce that the Second International Multimodal Communication Symposium (MMSYM 2024) will take place at Goethe University Frankfurt, Germany, on September 25-27, 2024!
Check the MMSYM website for more information and to stay up-to-date: http://mmsym.org
 
We are attaching the Call for Papers for MMSYM 2024 to this email and invite you to submit abstracts of your multimodal work to the conference. MMSYM 2024 emphasizes the following three main research themes: (1) gesture-speech integration, in particular the prosody-gesture link; (2) formal, automatic, and machine-learning approaches to multimodality; and (3) psycholinguistic approaches in multimodal settings.
 
Abstracts can be submitted until March 8, 2024 via OpenReview. Please find more information about abstract submission, templates and guidelines on the MMSYM website.
 

3-3-29(2024-10-17) Colloque des Jeunes Chercheurs de Praxiling (UMR 5267), Montpellier, France

We are organising the 13th edition of the Colloque des Jeunes Chercheurs de Praxiling (UMR 5267), which will take place in Montpellier on October 17-18, 2024. The theme of the colloquium is 'Vulnerability and language: languages, speakers, discourse'.

This colloquium is aimed at young researchers interested in the theme of vulnerability from various angles: vulnerable languages, vulnerabilised speakers, and discourse on vulnerability, whether or not it originates from speakers in situations of vulnerability.

The attached call for papers contains the rationale as well as all writing requirements and participation details. The call closes on June 15, 2024, and proposals should be sent to the following address: cjc.praxiling.2024@gmail.com.

All further information is available on the website: https://cjc-praxiling2024.www.univ-montp3.fr

 

Best regards,

 

The organising committee: Lou BRUN, Myriam CASALONE, Elora DANJEAN, Ahamada KASSIME (Praxiling UMR 5267, Université Paul-Valéry Montpellier 3)


3-3-30(2024-10-28) CfP 7th International Workshop on Multimedia Content Analysis in Sports (MMSports'24) @ ACM Multimedia, Melbourne, Australia

7th International Workshop on Multimedia Content Analysis in Sports (MMSports'24) @ ACM Multimedia, Oct 28 – Nov 1, 2024, Melbourne, Australia

 

We'd like to invite you to submit paper proposals for the 7th International Workshop on Multimedia Content Analysis in Sports, to be held in Melbourne, Australia, together with ACM Multimedia 2024. The ambition of this workshop is to bring together researchers and practitioners from many different disciplines to share ideas and methods on current multimedia/multimodal content analysis research in sports. We welcome multimodal-based research contributions as well as best-practice contributions on the following and similar topics (the list is not exhaustive):

- annotation and indexing in sports 

- tracking people/athletes and objects in sports

- activity recognition, classification, and evaluation in sports

- 3D scene and motion reconstruction in sports

- event detection and indexing in sports

- performance assessment in sports

- injury analysis and prevention in sports

- data driven analysis in sports

- graphical augmentation and visualization in sports

- automated training assistance in sports

- camera pose and motion tracking in sports

- brave new ideas / extraordinary multimodal solutions in sports

- personal virtual (home) trainers/coaches in sports

- datasets in sports

- graphical effects in sports

- alternative sensing in sports (beyond the visible spectrum)

- multimodal perception in sports

- exploiting physical knowledge in learning systems for sports

- sports knowledge discovery

- narrative generation and narrative analysis in sports

- mobile sports application

- multimedia in sports beyond video, including 3D data and sensor data

 

Submissions can vary in length from 4 to 8 pages, plus additional pages for references. There is no distinction between long and short papers; authors may themselves decide on the appropriate length of their paper. All papers will undergo the same review process and review period.

 

Please refer to the workshop website for further information: 

http://mmsports.multimedia-computing.de/mmsports2024/index.html

 

IMPORTANT DATES

Submission Due: 19 July 2024
Acceptance Notification: 5 August 2024
Camera Ready Submission: 19 August 2024
Workshop Date: TBA; either Oct 28 or Nov 1, 2024

 

ACM MMSports’24 Chairs: Thomas Moeslund, Rainer Lienhart and Hideo Saito

 


3-3-31(2024-11-04) CfP Workshops, Special Sessions and Grand Challenge @ ICMI, Costa Rica
We are delighted to inform you that ICMI 2024 will be hosted in Latin America, specifically in Costa Rica. The International Conference on Multimodal Interaction (ICMI) is the premier global platform for multidisciplinary research on multimodal human-human and human-computer interaction, interfaces, and system development. We invite teams to submit proposals for the following components:
 
- Workshops, deadline February 5th, 2024.
- Special Sessions, deadline February 2nd, 2024.
- Grand Challenge, deadline February 5th, 2024.
 
Workshops
=========
ICMI has established a tradition of hosting workshops concurrently with the main conference to facilitate discourse on new research, technologies, social science models, and applications. Recent workshops include themes like Media Analytics for Societal Trends, International Workshop on Automated Assessment of Pain (AAP), Face and Gesture Analysis for Health Informatics, Generation and Evaluation of Non-verbal Behaviour for Embodied Agents, Bridging Social Sciences and AI for Understanding Child Behavior, and more.
 
Interested parties are invited to submit a 3-page workshop proposal for evaluation. Workshops may span half or a full day; accepted papers will be indexed by the ACM Digital Library in an adjunct proceedings, and a brief workshop summary will be published in the main conference proceedings. The workshop submission deadline is February 5th, 2024. Proposals should be emailed to the workshop chairs, Naveen Kumar and Hendrik Buschmeier, at icmi2024-workshop-chairs@acm.org. For additional details, please visit the conference website: https://icmi.acm.org/2024/call-for-workshops/
 
 
Special Sessions
================
Special Sessions are vital in exploring emerging topics within multimodal interaction, contributing significantly to this year's conference program. We invite proposals to enrich the conference's diversity and provide valuable insights into the overarching theme, 'Equitability and Environmental Sustainability in Multimodal Interaction Technologies.' Interested teams are requested to submit the following:
 
- Title of the special session: the title should appeal to the ICMI community and be self-explanatory.
- Aims and scope, elucidating why the ICMI community should engage with this session.
- Tentative Speakers, comprising a list of potential contributing authors with provisional presentation titles. Special sessions typically include 4 to 6 peer-reviewed papers.
- Organizers and bios, emphasizing the relevance and experience of the speakers.
 
The deadline for Special Sessions submissions is February 2nd, 2024. Prospective organizers are encouraged to submit proposals via icmi2024-specialsession-chairs@acm.org. Further details can be found on the conference website: https://icmi.acm.org/2024/special-sessions/
 
 
Grand Challenge
=============== 
The ICMI community is keen on identifying optimal algorithms and their failure modes, which are crucial for developing systems capable of reliably interpreting human-human communication or responding to human input. We invite the ICMI community to define and address scientific Grand Challenges in our field, offering perspectives over the next five years as a collective. The ICMI Multimodal Grand Challenges aim to inspire innovative ideas and foster future collaborative endeavors in tasks such as analysis, synthesis, and interaction.
 
To participate, submit a 5-page proposal for expert evaluation, considering originality, ambition, feasibility, and implementation plans. Accepted proposals will be published in the conference's main proceedings. The Grand Challenge submission deadline is February 5th, 2024. Proposals should be emailed to both ICMI 2024 Multimodal Grand Challenge Chairs, Dr. Ronald Böck (Genie Enterprise) and Dr. Dinesh Babu JAYAGOPI (IIIT Bangalore), at icmi2024-challenge-chairs@acm.org. Additional information is available on the conference website: https://icmi.acm.org/2024/call-for-grand-challenge/
 
We look forward to your valuable contributions and participation in ICMI 2024.
 
On behalf of the Organizers of ICMI 2024!

3-3-32(2024-11-05) The 26th International Conference on Multimodal Interaction (ICMI 2024), San Jose, Costa Rica
We cordially invite you to submit papers for the main track of the 26th International Conference on Multimodal Interaction (ICMI 2024), which will be held in San José, Costa Rica. ICMI is the premier international forum that brings together multimodal artificial intelligence (AI) and social interaction research. Multimodal AI encompasses technical challenges in machine learning and computational modeling such as representations, fusion, data, and systems. The study of social interactions encompasses both human-human interactions and human-computer interactions. A unique aspect of ICMI is its multidisciplinary nature, which values both scientific discoveries and technical modeling achievements, with an eye towards impactful applications for the good of people and society.
 

 

https://icmi.acm.org/2024/call-for-papers/

https://new.precisionconference.com/submissions/icmi24a


 


Important Dates
  • Abstract deadline: April 26th, 2024
  • Paper submission: May 3rd, 2024
  • Rebuttal period: June 16th-23rd, 2024
  • Paper notification: July 18th, 2024
  • Camera-ready paper: August 16th, 2024
  • Presenting at main conference: November 5th-7th, 2024
 

Novelty will be assessed along two dimensions: scientific novelty and technical novelty. Accepted papers at ICMI 2024 will need to be novel along one of the two dimensions:

  • Scientific Novelty: Papers should bring new scientific knowledge about human social interactions, including human-computer interactions. For example, discovering new behavioral markers that are predictive of mental health or how new behavioral patterns relate to children’s interactions during learning. It is the responsibility of the authors to perform a proper literature review and clearly discuss the novelty in the scientific discoveries made in their paper.
  • Technical Novelty: Papers should propose novelty in their computational approach for recognizing, generating or modeling multimodal data. Examples include: novelty in the learning and prediction algorithms, in the neural architecture, or in the data representation. Novelty can also be associated with new usages of an existing approach.

Commitment to ethical conduct is required and submissions must adhere to ethical standards in particular when human-derived data are employed. Authors are encouraged to read the ACM Code of Ethics and Professional Conduct (https://ethics.acm.org/).

 
Theme
 

The theme of this year’s ICMI conference revolves around “Equitability and environmental sustainability in multimodal interaction technologies.” The focus is on exploring how multimodal systems and multimodal interactive applications can serve as tools to bridge the digital divide, particularly in underserved communities and countries, with a specific emphasis on those in Latin America and the Caribbean. The conference aims to delve into the design principles that can render multimodal systems more equitable and sustainable in applications such as health and education, thereby catalyzing positive transformations in development for historically marginalized groups, including racial/ethnic minorities and indigenous peoples. Moreover, there is a crucial exploration of the intersection between multimodal interaction technologies and environmental sustainability. This involves examining how these technologies can be crafted to comprehend, disseminate, and mitigate the adverse impacts of climate change, especially in the Latin America and Caribbean region. The conference endeavors to explore the potential of multimodal systems in fostering community resilience, raising awareness, and facilitating education related to climate change, thereby contributing to a holistic approach that encompasses both social and environmental dimensions.


Additional topics of interest include but are not limited to:

  • Affective computing and interaction
  • Cognitive modeling and multimodal interaction
  • Gesture, touch and haptics
  • Healthcare, assistive technologies
  • Human communication dynamics
  • Human-robot/agent multimodal interaction
  • Human-centered A.I. and ethics
  • Interaction with smart environment
  • Machine learning for multimodal interaction
  • Mobile multimodal systems
  • Multimodal behaviour generation
  • Multimodal datasets and validation
  • Multimodal dialogue modeling
  • Multimodal fusion and representation
  • Multimodal interactive applications
  • Novel multimodal datasets
  • Speech behaviours in social interaction
  • System components and multimodal platforms
  • Visual behaviours in social interaction
  • Virtual/augmented reality and multimodal interaction

3-3-33(2024-11-25) 26th International Conference on Speech and Computer (SPECOM-2024), Belgrade, Serbia

*******************************************************

SPECOM-2024 – FIRST CALL FOR PAPERS

*******************************************************

 

26th International Conference on Speech and Computer (SPECOM-2024)

November 25-28, 2024

Crowne Plaza hotel, Belgrade, Serbia

Web: https://specom2024.ftn.uns.ac.rs/

 

ORGANIZERS

The conference SPECOM-2024 is organized by the Faculty of Technical Sciences University of Novi Sad and the School of Electrical Engineering University of Belgrade in cooperation with the Telecommunications Society of Serbia

 

FOUNDERS

SPECOM series was founded by St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences (SPIIRAS) of the St. Petersburg Federal Research Center of the Russian Academy of Sciences (SPC RAS)

 

CONFERENCE TOPICS

SPECOM attracts researchers, linguists, and engineers working in the following areas of speech science, speech technology, natural language processing, and human-computer interaction:

  • Affective computing

  • Audio-visual speech processing

  • Corpus linguistics

  • Computational paralinguistics

  • Deep learning for audio processing

  • Feature extraction

  • Forensic speech investigations

  • Human-machine interaction

  • Language identification

  • Large language models

  • Multichannel signal processing

  • Multilingual speech technology

  • Multimedia processing

  • Multimodal analysis and synthesis

  • Natural language generation

  • Natural language understanding

  • Sign language processing

  • Speaker diarization

  • Speaker identification and verification

  • Speech and language resources

  • Speech analytics and audio mining

  • Speech and voice disorders

  • Speech-based applications

  • Speech driving systems in robotics

  • Speech enhancement

  • Speech perception

  • Speech recognition and understanding

  • Speech synthesis

  • Speech translation systems

  • Spoken dialogue systems

  • Spoken language processing

  • Text mining and sentiment analysis

  • Virtual and augmented reality

  • Voice assistants

 

SATELLITE EVENTS

26th International Conference SPECOM will be organized together with the 32nd Telecommunications Forum TELFOR-2024: https://www.telfor.rs/en/

 

OFFICIAL LANGUAGE

The official language of the event is English. However, papers on processing of languages other than English are strongly encouraged.

 

FORMAT OF THE CONFERENCE

The conference program will include presentation of invited talks, oral presentations, and poster/demonstration sessions.

 

SUBMISSION OF PAPERS

Authors are invited to submit full papers of 10-15 pages formatted in the Springer LNCS style. Each paper will be reviewed by at least three independent reviewers (single-blind), and accepted papers will be presented either orally or as posters. Papers submitted to SPECOM must not be under review by any other conference or publication during the SPECOM review cycle, and must not be previously published or accepted for publication elsewhere. The authors are invited to submit their papers using the on-line submission system: https://easychair.org/conferences/?conf=specom2024
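For authors unfamiliar with the Springer LNCS format, a minimal skeleton along the following lines may be helpful. It is an illustrative sketch only: it assumes the standard llncs document class and the splncs04 bibliography style shipped with Springer's LNCS author templates, and all titles, names, and keywords are placeholders rather than SPECOM requirements.

\documentclass{llncs}
\usepackage{graphicx}  % for figures

\title{Placeholder Title of a SPECOM 2024 Submission}
\author{First Author\inst{1} \and Second Author\inst{2}}
\institute{Institution One, City, Country \and
Institution Two, City, Country}

\begin{document}
\maketitle

\begin{abstract}
Brief summary of the paper; full papers are 10--15 pages in this style.
\keywords{Speech recognition \and Speech synthesis \and Spoken dialogue systems}
\end{abstract}

\section{Introduction}
Body text formatted in the Springer LNCS style.

% Assumes a references.bib file next to the .tex source.
\bibliographystyle{splncs04}
\bibliography{references}
\end{document}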

 

DEADLINES

July 01, 2024 ....................... Submission of full papers

September 03, 2024 ........... Notification of acceptance/rejection

September 15, 2024 ........... Camera-ready papers

October 01, 2024 ................ Early registration

 

PROCEEDINGS

SPECOM Proceedings will be published by Springer as a book in the Lecture Notes in Artificial Intelligence (LNAI/LNCS) series listed in all major international citation databases.

 

GENERAL CHAIRS

Vlado DELIĆ – Faculty of Technical Sciences University of Novi Sad, Novi Sad, Serbia

Alexey KARPOV – SPIIRAS, SPC RAS, St. Petersburg, Russia

 

CONTACTS

All correspondence regarding the conference should be addressed to SPECOM-2024 Secretariat

E-mail: specom2024@uns.ac.rs

Web: https://specom2024.ftn.uns.ac.rs


3-3-34(2024-11-26) The 2nd International Conference on Foundation and Large Language Models (FLLM2024), Dubai, UAE

The 2nd International Conference on Foundation and Large Language Models (FLLM2024)

 

Hybrid Event

https://fllm2024.fllm-conference.org/index.php

26-29 November, 2024 | Dubai, UAE

Technically Co-Sponsored by IEEE UAE Section

FLLM 2024 CFP:

With the emergence of foundation models (FMs) and large language models (LLMs) that are trained on large amounts of data at scale and are adaptable to a wide range of downstream applications, artificial intelligence is experiencing a paradigm shift. BERT, T5, ChatGPT, GPT-4, Falcon 180B, Codex, DALL-E, Whisper, and CLIP are now the foundation for new applications ranging from computer vision to protein sequence study and from speech recognition to coding, whereas earlier models typically had to be built from scratch for each new challenge. The capacity to experiment with, examine, and comprehend the capabilities and potential of next-generation FMs is critical to undertaking this research and guiding its path. Nevertheless, these models are currently largely inaccessible, as the resources required to train them are highly concentrated in industry, and even the assets (data, code) required to replicate their training are frequently not released because of their commercial value. At the moment, mostly large tech companies such as OpenAI, Google, Facebook, and Baidu can afford to construct FMs and LLMs. Despite the widely publicized use of FMs and LLMs, we still lack a comprehensive understanding of how they operate, why they underperform, and what they are even capable of, because of their emergent qualities. To address these problems, we believe that much critical research on FMs and LLMs will necessitate extensive multidisciplinary collaboration, given their essentially social and technical structure.

The International Conference on Foundation and Large Language Models (FLLM) addresses the architectures, applications, challenges, approaches, and future directions. We invite the submission of original papers on all topics related to FLLMs, with special interest in but not limited to:

  •     Architectures and Systems
    • Transformers and Attention
    • Bidirectional Encoding
    • Autoregressive Models
    • Massive GPU Systems
    • Prompt Engineering
    • Multimodal LLMs
    • Fine-tuning
  •     Challenges
    • Hallucination
    • Cost of Creation and Training
    • Energy and Sustainability Issues
    • Integration
    • Safety and Trustworthiness
    • Interpretability
    • Fairness
    • Social Impact
  •     Future Directions
    • Generative AI
    • Explainability and EXplainable AI
    • Retrieval Augmented Generation (RAG)
    • Federated Learning for FLLM
    • Large Language Models Fine-Tuning on Graphs
    • Data Augmentation
  •     Natural Language Processing Applications
    • Generation
    • Summarization
    • Rewrite
    • Search
    • Question Answering
    • Language Comprehension and Complex Reasoning
    • Clustering and Classification
  •     Applications
    • Natural Language Processing
    • Communication Systems
    • Security and Privacy
    • Image Processing and Computer Vision
    • Life Sciences
    • Financial Systems

Submissions Guidelines and Proceedings

Manuscripts should be prepared in 10-point font using the IEEE 8.5' x 11' two-column format. All papers should be in PDF format and submitted electronically via the paper submission link. A full paper can be up to 8 pages, including all figures, tables, and references, and must follow the IEEE paper format. Submitted papers must present original unpublished work that is not currently under review for any other conference or journal. Papers not following these guidelines may be rejected without review; submissions received after the due date, exceeding the length limit, or not appropriately structured may also not be considered. Authors may contact the Program Chair for further information or clarification. All submissions are peer-reviewed by at least three reviewers. Accepted papers will appear in the FLLM proceedings, published by the IEEE Computer Society Conference Publishing Services, and will be submitted to IEEE Xplore for inclusion. Please include up to 7 keywords and the complete postal address, email address, and fax and phone numbers of the corresponding author. Authors of accepted papers are expected to present their work at the conference. Submitted papers that are deemed of good quality but that could not be accepted as regular papers will be accepted as short papers.

Important Dates:

  • Paper submission deadline: June 30, 2024
  • Notification of acceptance: September 15, 2024
  • Camera-ready Submission: October 10, 2024

 

Contact:

Please send any inquiry on FLLM to: info@fllm-conference.org

 

 


3-3-35(2024-xx-xx) Fearless Steps APOLLO Workshop.

We are pleased to extend an invitation to you to participate in the upcoming Fearless Steps APOLLO Workshop. Our workshop explores speech communication, speech technology, and the extensive audio of the historic NASA Apollo program.

 

The Fearless Steps APOLLO Community Resource, supported by NSF, is a unique and massive naturalistic communications resource. Derived from the Apollo missions, it offers a rare glimpse into team-based problem-solving in high-stakes environments, with a rich variety of speech and language data that is invaluable for researchers, scientists, historians, and technologists.

 

The Fearless Steps APOLLO corpus contains 30 time-synchronized channels, which capture all NASA Apollo team communications. The PAO (Public Affairs Officer) channel reflects all live public broadcast TV/radio content streamed by NASA during the missions; this channel is comparable to broadcast news corpora.

 

Our workshop will showcase featured speakers and panel discussions and present the latest findings in speech and language processing. We will explore facets of the Fearless Steps APOLLO corpus, the largest publicly available naturalistic team-based historical audio and metadata resource.

 

 

Topics Covered:

 

We will be exploring several key areas, including:

 

1. Big Data Recovery and Deployment in the Fearless Steps APOLLO initiative.

2. Applications in Education, History, and Archival efforts.

3. Insights into Communication Science and Psychology, particularly in Group Dynamics and Team Cohesion.

4. Speech and Language Technology (SLT) development, including ASR, SAD, speaker recognition, and conversational topic detection. 

 

Workshop Structure:

 

1. Discuss advancements in digitizing Apollo audio and machine learning solutions for audio diarization.

2. Explore team communication dynamics through speech processing.

3. Explore the utility of Fearless Steps APOLLO resource for: SpchTech (Speech & Language Technology), CommSciPsychTeam (Communication Sciences & Team-based Psychology), & EducArchHist (Education, History, & Archival) communities.

4. The FEARLESS STEPS Challenge, a community engagement and data generation initiative.

The workshop will feature oral talks, including an overview of the Fearless Steps APOLLO resource and team presentations on systems evaluated on the Fearless Steps Challenge dataset.

 

 

Instructions for Authors:

 

We invite authors to submit a short 1-page research overview that involves the Fearless Steps APOLLO resource. Please submit your Abstracts through our dedicated portal.

The workshop format will include oral presentations for accepted abstracts, which will be announced after the submission deadline. Submissions in the form of 1-page abstracts (and an optional additional page for references, figures, or preliminary results) are encouraged. Detailed formatting instructions and sample PDFs are available on our website. The complete Fearless Steps Challenge (Phase-1 to Phase-4) corpora and the naturalistic (Apollo-11 & Apollo-13) corpora can be accessed by filling out a short survey form here: FS-APOLLO Corpora Download Access

 

 

The deadline for workshop abstract submission is March 1, 2024, and acceptance of abstracts will be announced on March 15, 2024. Both in-person and remote participation options will be available, with a focus on fostering a collaborative environment. Papers accepted to ICASSP 2024 are welcome as abstract submissions, as is original research following our format guidelines.

 

We believe this workshop will be a pivotal step in advancing speech technology and research. We look forward to your participation in enriching the potential of the Apollo Resource and inspiring new approaches in collaborative problem-solving.

 

For more details, please visit our workshop website.


3-3-36(2025-04-06) Call for ICASSP 2025 Grand Challenge, Hyderabad, India
 

Call for ICASSP 2025 SP Grand Challenges! 

Submit your Proposal by 8 July 2024.

The 2025 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) invites proposals for the Signal Processing Grand Challenge Program (SPGC)! The 50th ICASSP will be held in Hyderabad, India, from 6-11 April 2025.

 

ICASSP 2025 will feature high technical quality, many novel scientific activities, excellent networking opportunities, enjoyable social events, and unforgettable touristic possibilities. This year's conference theme will be “Celebrating Signal Processing.”

 

Submit your SP Grand Challenge proposals by 8 July 2024 to be considered. Learn more about the submission requirements and guidelines below.

 

Signal Processing Grand Challenge Guidelines 

Proposal

Prospective SPGC organizers should include the following items in their proposal (please limit it to 4 pages):

  • One-page call for participation
  • Signal Processing Grand Challenge description
  • Description of the dataset provided for training and evaluation, evaluation criteria and methodology, guidelines for participants, and the full challenge schedule including the submission deadline
  • List of potential participants (indicate confirmed participants if applicable)

All SPGC proposals should be submitted online here.

 

Important Dates 

  • Proposal Submission Deadline: 8 July 2024
  • Proposal Acceptance Notification: 19 July 2024
  • 2-Page Papers Due (Invitation Only): 9 December 2024
  • 2-Page Paper Acceptance Notification: 30 December 2024
  • Camera Ready Papers Due: 13 January 2025
  • OJ-SP Papers Due (Invitation Only): 11 June 2025


