ISCA - International Speech Communication Association



ISCApad #310

Tuesday, April 09, 2024 by Chris Wellekens

3-3 Other Events
3-3-1(2024-04-14) Cf Tutorials, ICASSP 2024, Seoul, Korea

ICASSP 2024 Call for Tutorials

Submit your Proposals by 6 September 2023

The International Conference on Acoustics, Speech, & Signal Processing (ICASSP) invites proposals for Tutorials. The 49th IEEE International Conference on Acoustics, Speech, & Signal Processing (ICASSP) will be held in Seoul, Korea, from April 14 to April 19, 2024, at COEX.

 

Tutorial proposals in all areas of signal processing and its applications, as listed in the conference topics, are warmly invited and encouraged, especially those related to the theme of the conference and to new and emerging topics.

 

ICASSP 2024 will be an in-person conference, so the proposer(s) of each accepted tutorial will have to present it in person in Seoul.

 

Please submit your proposals by 6 September 2023. Learn more about the ICASSP 2024 conference topics and the Tutorials submission guidelines here.

Call for Tutorial Proposals

Guidelines 

Tutorials will have a duration of 3 hours, including a 20-minute break, and will take place before the main technical program. The proposer(s) of each accepted tutorial will have to present it in person in Seoul.

 

Tutorial proposals should include the following essential information:
  • Title of the tutorial.
  • Presenter name(s), contact information, short biography (maximum of 1000 characters), and five recent related publications.
  • A summary of the presenters' previous tutorial delivery experience.
  • The rationale for the tutorial, including its importance, timeliness, novelty, and how it can introduce new ideas/topics/tools to the SP community.
  • A detailed description of the tutorial outlining the topics and subtopics covered.
  • A statement of any previous or related versions of this tutorial.

Please carefully read the guidelines outlined above before submitting your tutorial proposal via the submission link.

 

 Important Dates
  • Proposal Submission Deadline: 6 September 2023 
  • Acceptance Notification: 18 October 2023
     

ICASSP 2024 is a flagship event of the IEEE Signal Processing Society, a global network of signal processing and data science professionals. A membership to IEEE SPS connects you with more than 18,000 researchers, academics, industry practitioners, and students advancing and disseminating the latest breakthroughs and technology. By joining, you’ll receive significant savings on registration to future events, including ICASSP 2024, as well as access to highly-ranked journals, continuing education materials, and a robust technical community. Learn more about how you can save and grow with us! 

 

3-3-2(2024-04-14) CfP ICASSP 2024, Seoul, Korea

Announcing the ICASSP 2024 Call for Papers! 

Submit your Papers by 6 September 2023.

The Call for Papers for ICASSP 2024 is now open! The 49th IEEE International Conference on Acoustics, Speech, & Signal Processing (ICASSP) will be held in Seoul, Korea, from April 14 to April 19, 2024, at COEX.

 

ICASSP is the world’s largest and most comprehensive technical conference focused on signal processing and its applications. It offers a technical program presenting the latest developments in research and technology in the industry, attracting thousands of professionals annually. We hope you will engage in sessions filled with valuable lectures and cutting-edge keynotes from world-renowned speakers, along with great opportunities to network with industry pioneers and leading researchers.

 

Please submit your papers by 6 September 2023. Learn more about the ICASSP 2024 Call for Papers and submission guidelines here.

Submit a Conference Paper

Authors are invited to submit papers of up to four pages of technical content, including figures and references, plus one optional fifth page containing only references. The submission website will be available soon.

 

SP Society Journal Paper Presentations

Authors of papers published or accepted in IEEE SPS journals may present their work at ICASSP 2024 in appropriate tracks. These papers will neither be reviewed nor included in the proceedings. In addition, the IEEE Open Journal of Signal Processing (OJSP) will provide a special track for longer submissions with the same processing timeline as ICASSP. Accepted papers will be published in OJSP and presented at the conference but will not be included in the conference proceedings.

 

IEEE Open Journal of Signal Processing (OJSP) Submission Track

Following the same timeline as the conference papers, authors have the option to submit their paper for publication with the Open Journal of Signal Processing instead of in the conference proceedings.

 

IEEE OJSP has introduced a Short Papers submission category and review track, with a limit of eight pages plus an additional page for references (8+1). This is intended as an alternative publication venue for authors who would like to present at ICASSP 2024, but who prefer Open Access or a longer paper format than the traditional ICASSP 4+1 format.

 

Open Preview

Conference proceedings will be available in IEEE Xplore, free of charge, to all registered attendees/authors, from 30 days prior to the conference start date through the conference end date.

 

Important Dates

  • Paper Submission Deadline: 6 September 2023 
  • Reviews Available to Authors: 9 November 2023
  • Author Response Period: 9-15 November 2023 
  • Paper Acceptance Notification: 13 December 2023 
  • Camera Ready Paper Deadline: 11 January 2024 
     


3-3-3(2024-04-14) CfP Industry Talk and Industry Colloquium Proposals @ICASSP 2024, Seoul, Korea
Submit your Industry Talk and Industry Colloquium Proposals by February 8.

Call for ICASSP 2024 Industry Program Participation!

Proposals for Spotlight Talks and Industry Colloquiums are due February 8.

The Organizing Committee of ICASSP 2024 invites proposals for the Industry Spotlight Talks and Industry Colloquiums to be held in conjunction with the 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing, taking place in Seoul, Korea, 14-19 April 2024. 

 

Submissions are due by 8 February 2024. Acceptance notifications will be sent out on 15 February 2024. 

     

Call for Spotlight Talk Proposals

The primary objective of the Spotlight Talks is to present academic theories in various fields of signal processing and to illustrate how they are interconnected with industry. The aim is to demonstrate how these theories manifest in industry through presentations and demos, providing the audience with insights into the integration of academic theories with industrial applications.

 

Presentations will focus specifically on the path to industrialization, covering standardization within industries, prototypes, industrial patent analysis, and technological entrepreneurship.


The Spotlight Talks will be a platform for industry professionals, researchers, and experts to share their thoughts on various aspects of the industry program. Although the selection of presentation topics and styles is up to the speakers, there are strict restrictions on promoting companies, products, and services during the presentations.

 

Learn more about the submission guidelines here.

Call for Industry Colloquiums

The ICASSP industrial colloquiums can be organized by an IEEE volunteer from either a sponsoring or a non-sponsoring organization.

 

Colloquium participants will have the opportunity to explore special topics, and the colloquiums provide international forums for scientists, engineers, and researchers to exchange and share their experiences, new ideas, and research results on topics of current interest. The format of each colloquium will be determined by its organizer. There are strict restrictions on promoting company products and services during the presentations.

 

Learn more about the submission guidelines and proposal requirements here.

     


3-3-4(2024-04-14) Grand Challenge @ICASSP 2024, Seoul, Korea

Announcing the Grand Challenges for ICASSP 2024!

Participate in a Grand Challenge at ICASSP 2024! The 49th IEEE International Conference on Acoustics, Speech, & Signal Processing (ICASSP) will be held in Seoul, Korea, from April 14 to April 19, 2024, at COEX.

 

View all 11 official ICASSP Grand Challenges below. To learn more about how to participate and important dates, please visit the challenges' individual websites listed on the ICASSP website.

     

3-3-5(2024-04-14) ICASSP 2024 Call for short courses, Seoul, Korea

ICASSP 2024 Call for Short Course Proposals

Submit your Proposals by 18 September 2023

ICASSP 2024, in collaboration with the IEEE Signal Processing Society (IEEE SPS) Education Board, is planning to offer educational short courses for in-person attendance at the conference.

 

The 49th IEEE International Conference on Acoustics, Speech, & Signal Processing (ICASSP) will be held in Seoul, Korea, from April 14 to April 19, 2024, at COEX.

 

The education-oriented short courses will offer Professional Development Hour (PDH) and Continuing Education Unit (CEU) certificates to those who complete each course.

 

Given that students, academics, and industry researchers and practitioners worldwide have a broad diversity of interests and areas of expertise, the IEEE SPS goal is to develop meaningful methods of offering beneficial and relevant courses in support of our members’ educational needs.

 

Learn more about the Short Course proposal requirements here.

General Information 

Duration

Each course should have a total duration of 10 hours, distributed over 4 days, at 2.5 hours per day, or over 2 days at 5 hours per day during the conference.

 

Coverage

Short Courses should differ from tutorials and aim for a broader view, covering a wide spectrum of ideas and results in their area rather than focusing only on research results from a specific individual or group. Both established and emerging domains are welcome, and we also encourage experiential, hands-on components that introduce methods and tools.

 

Target Audience

  • Students
  • Researchers from universities or research labs/centers and industry
  • Signal processing engineers and practitioners from industry
  • Hybrid combinations of the above

Important Dates
  • Proposal Submission Deadline: 18 September 2023 
  • Acceptance Notification: 20 November 2023

Learn more about the Short Course submission guidelines and instructions here.


3-3-6(2024-04-14) ICASSP 2024 Satellite Workshops

View all 15 official ICASSP Satellite Workshops below. Learn more about the individual workshops and participation guidelines here.

     

ICASSP 2024 Satellite Workshops

  • WS-1: Deep Neural Network Model Compression
  • WS-2: Trustworthy Speech Processing (TSP) - Still accepting papers 
  • WS-3: Self-supervision in Audio, Speech and Beyond (SASB)
  • WS-4: ICASSP 2024 Workshop on Explainable AI for Speech and Audio - Still accepting papers
  • WS-5: Workshop on Computational Imaging Using Synthetic Apertures
  • WS-6: Timely and Private Machine Learning over Networks
  • WS-7: Second Workshop on Signal Processing for Autonomous Systems (SPAS)
  • WS-8: Revolutionizing Interaction: Embodied Intelligence and the New Era of Human-Robot Collaboration
  • WS-9: SPID-CPS: Signal Processing for Intrusion Detection in Cyber-Physical Systems
  • WS-10: 1st Workshop on Integration of Sensing, Communication, and Computation (ISCC)
  • WS-11: Signal Processing and Machine Learning Advances in Automotive Radars
  • WS-12: Workshop on Radio Maps and Their Applications (RMA)
  • WS-13: Super-resolution integrated communications, localization, vision and radio mapping (SUPER-CLAM) - Still accepting papers
  • WS-14: Fearless Steps APOLLO: A Naturalistic Team based Speech Communications Community Resource (FS-APOLLO) - Still accepting papers
  • WS-15: Hands-free Speech Communication and Microphone Arrays (HSCMA 2024): Efficient and Personalized Speech Processing through Data Science

3-3-7(2024-04-14) Open Preview of ICASSP 2024 on IEEEXplore

 

IEEE International Conference on Acoustics, Speech and Signal Processing

14-19 April 2024 | COEX, Seoul, Korea

The ICASSP 2024 Open Preview is now available on IEEE Xplore®!

To maximize visibility and impact as early as possible, all papers accepted for the 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) are now published in the IEEE Xplore® digital library via Open Preview.

Available until 19 April 2024, all papers accepted at ICASSP 2024 can now be browsed free of charge.

ICASSP 2024 subject areas include:

  • Applied Signal Processing Systems
  • Audio and Acoustic Signal Processing
  • Biomedical Imaging and Signal Processing
  • Compressive Sensing and Sparse Modeling
  • Computational Imaging
  • Computer Vision
  • Deep Learning/Machine Learning for Signal Processing
  • Image, Video, and Multidimensional Signal Processing
  • Industrial Signal Processing
  • Information Forensics and Security
  • Internet of Things
  • Multimedia Signal Processing
  • Quantum Signal Processing
  • Remote Sensing and Signal Processing
  • Sensor Array and Multichannel Signal Processing
  • Signal Processing for Big Data
  • Signal Processing for Communications
  • Signal Processing for Cybersecurity
  • Signal Processing for Education
  • Signal Processing for Robotics
  • Signal Processing over Graphs
  • Signal Processing Theory and Methods
  • Speech and Language Processing

     



3-3-8(2024-04-14) Registration ICASSP 2024, Seoul, Korea
Register by February 22 to save with the advance rate.

Registration for ICASSP 2024 is now Open!

Register by 22 February 2024 to save with the advance rate.

Registration for the 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) is now open! Join us in Seoul, Korea, on 14-19 April 2024.

 

ICASSP 2024 will be an in-person conference. The in-person experience brings our community together in one location, supporting the vibrant exchange of ideas, networking, and social interaction.

 

The Organizing Committee looks forward to welcoming everyone to this flagship conference and hopes each attendee will engage in technical sessions filled with valuable lectures, cutting-edge keynotes from world-renowned speakers, workshops, tutorials, and industry programs, along with great opportunities to network with industry pioneers and leading researchers.

 

Important Registration Dates: 

  • Author Registration Deadline: 30 January 2024
  • Non-Author Advance Registration Deadline: 22 February 2024
     

Plan your Trip to Seoul! 

The Republic of Korea is visited by approximately ten million international travelers every year. With its long history of culture and tradition, the country has a lot to offer to travelers.

 

Seoul, the capital city of the Republic of Korea, has been the center of the country throughout its long history, from the prehistoric era to the present day. Seoul has preserved its unique, beautiful cultural heritage while evolving into a highly advanced city through spectacular economic growth.

 

Some top attractions include the Starfield COEX Mall, Famille Station, the Banpo Hangang Park, and more! 

 

Learn more here and start planning your trip today.


3-3-9(2024-04-23) Journée thématique 2024 du GAP sur le thème 'Geste, Parole Synthèse', Paris, France

The GAP 2024 thematic day on 'Gesture, Speech, Synthesis', organized around the ANR GEPETO project, will take place on Tuesday 23 April 2024 from 9:45 to 12:15 at the Institut Jean Le Rond D’Alembert, Sorbonne Université, Campus Jussieu.
All information is available on the SFA website: https://sfa.asso.fr/manifestations-sfa/journee-geste-parole-synthese/
This page will be updated regularly; the detailed program, with presentation abstracts and speaker biographies, will be posted soon.

Registration is mandatory, as the number of places is limited. Here is the link to the registration form: https://framaforms.org/inscription-a-la-journee-gap-sfa-du-mardi-23-avril-2024-1676538860


3-3-10(2024-04-26) Journée d'étude sur la didactique de la phonétique en FLE , Université d'Aix Marseille, France

The Laboratoire Parole et Langage, the FLE department, and the Service universitaire des langues of Aix-Marseille Université are organizing a study day on the didactics of phonetics in FLE (French as a foreign language) on Friday 26 April 2024. The focus will be on FLE, but the teaching of phonetics in language didactics in general will also be addressed.

The program will be available soon; the content will be accessible to non-specialists as well.

Admission is free, but we ask participants to register in advance: https://bit.ly/49cV2qF

The event will be held in person only.


3-3-11(2024-05-13) 13th International Seminar on Speech Production (ISSP2024), Autrans, France

As previously announced, the 13th International Seminar on Speech Production (ISSP2024) will be held from 13 to 17 May 2024 in Autrans, France, with the support of several French laboratories working on speech production research.

!CONFERENCE ORGANIZATION: PRIORITY TO IN-PERSON INTERACTION!

In the spirit of most previous editions, we have chosen a single venue for both scientific exchanges and accommodation, a conference center in the mountains near Grenoble. Virtual participation will be possible but with strong limitations: (a) remote participants will only be able to submit a poster presentation (without live interaction), (b) only on-site oral presentations will be streamed live, offering opportunities for interaction with remote participants, (c) on-site poster presentations will not be streamed.

All accepted abstracts will have the option of being extended into a 4-page paper (to be published in the conference proceedings). This optional 4-page paper will not be reviewed, but it will be considered for a later selection of works to be gathered in a special issue of a journal (to be announced).

!IMPORTANT DATES!

15 December 2023: deadline for submission of 2-page abstracts (template available at https://issp24.sciencesconf.org/)

1 February 2024: notification of acceptance

15 April 2024: optional deadline for submission of full 4-page papers

13-17 May 2024: ISSP2024 in Autrans, France

!KEYNOTE SPEAKERS AND MAIN TOPICS!

The topics of interest of this conference cover different aspects of speech production, including articulation, acoustics, neural substrates, motor control, disorders, and their links with perception, communication, development, and language.

Six keynotes will illustrate the diversity of research topics within - and beyond - the field of speech production: María Florencia Assaneo (UNAM, Mexico), Adrien Meguerditchian (Aix-Marseille U., France), Doris Mücke (U. of Cologne, Germany), Caroline Niziolek (University of Wisconsin-Madison, USA), Sophie Scott (UCL, UK), Jason Shaw (Yale University, USA).

For up-to-date information, please visit the conference website regularly: https://issp24.sciencesconf.org/ and follow us on Twitter/X @issp2024!

The organizing committee,
Cécile Fougeron & Pascal Perrier (chairs)
with Jalal Al-Tamimi, Pierre Baraduc, Véronique Boulenger, Mélanie Canault, Maëva Garnier, Fanny Guitard-Ivent, Anne Hermes, Fabrice Hirsch, Leonardo Lancia, Yves Laprie, Yohann Meynadier, Slim Ouni, Rudolph Sock, Béatrice Vaxelaire

 


3-3-12(2024-05-13) Workshop “Speech production models and empirical evidence from typical and pathological speech” , Grenoble, France

*** Workshop “Speech production models and empirical evidence from typical and pathological speech” ***

We are pleased to announce the workshop “Speech production models and empirical evidence from typical and pathological speech”, which will take place on Monday 13 May 2024 in Grenoble, from 10:00 to 16:00.

The workshop is organized within the SNSF Sinergia project ChaSpeePro and aims to debate, in a friendly and constructive atmosphere, theoretical positions and empirical evidence from both typical and pathological speech on three major questions:

  1. Planning/programming/execution or phonological/phonetic/motor encoding (or other encoding/computational distinctions): How should the different processes of speech (motor) production be defined?
  2. Encoding units/representations in speech production models: which ones, how many different units, and how are they selected and combined into larger units?
  3. How are the different speech modulations (whispered, loud, fast, clear, ...) encoded/parameterized for production?

The day will be organized with four talks in the morning and round tables in the afternoon to debate these questions. We are pleased to announce the following invited participants:

Morning talks:

Frank Guenther, Boston University
Ben Parrell, University of Wisconsin-Madison
Antje Mefferd, Vanderbilt University
Marina Laganaro, Cécile Fougeron & the ChaSpeePro team

Round-table moderators/speakers:

Louis Goldstein, University of Southern California
Monica Lancheros Pompeyo, University of Geneva
Hélène Lœvenbruck, Université Grenoble Alpes
Doris Mücke, University of Cologne
Caroline Niziolek, University of Wisconsin-Madison
Pascal Perrier, Université Grenoble Alpes
Wolfram Ziegler, LMU Munich

Registration will be free but places are limited; registration will open in February. For updates and registration, see the workshop website.

Note that transport from Grenoble to Autrans will be organized after the workshop for participants in ISSP2024, which starts in Autrans on the evening of 13 May.

Looking forward to fruitful discussions with many enthusiastic participants!

The organizing committee,
Marina Laganaro, Cécile Fougeron, Maëva Garnier, Anne Hermes & Pascal Perrier
and the ChaSpeePro team

 


3-3-13(2024-05-20) The 3rd Annual Meeting of the ELRA-ISCA Special Interest Group on Under-resourced Languages (SIGUL2024), Torino, Italy

1st Call for Papers

The 3rd Annual Meeting of the ELRA-ISCA Special Interest Group on Under-resourced Languages (SIGUL2024)

A Satellite Workshop of LREC-COLING 2024

Monday and Tuesday, May 20th-21st, 2024

Torino, Italy (co-located with LREC-COLING 2024)

Workshop website: https://sigul-2024.ilc.cnr.it (under construction)

 

The 3rd Annual Meeting of the ELRA/ISCA Special Interest Group on Under-Resourced Languages (SIGUL2024) will provide a forum for the presentation and discussion of cutting-edge research in language processing for under-resourced languages by academic and industry researchers. Following the long-standing series of previous meetings, the SIGUL workshop will also offer a venue where researchers in different disciplines and from varied backgrounds can fruitfully explore new areas of intellectual and practical development while honoring their common interest of sustaining less-resourced languages.

Topics

We invite contributions (regular long papers of 8 pages or short papers of 4 pages) targeting any of the following - non-exhaustive - list of topics:

  • Processing any under-resourced languages (covering less-resourced, under-resourced, endangered, minority, and minoritized languages)
  • Cognitive and linguistic studies of under-resourced languages
  • Fast resources acquisition: text and speech corpora, parallel texts, dictionaries, grammars, and language models
  • Zero and few-shot methodologies and self-supervised learning in language and speech technologies
  • Cross-lingual and multilingual acoustic and lexical modeling
  • Speech recognition and synthesis for under-resourced languages and dialects
  • Machine translation and speech-to-speech translation
  • Spoken dialogue systems
  • Applications of language technologies for under-resourced languages
  • Large language models and under-resourced languages

Special Topic

  • Text and speech resources and technologies for the languages of Italy

Special Session on languages of Italy and language technologies

Italy is known for its linguistic diversity that reflects its long and varied history. To celebrate it, SIGUL2024 will provide a special session or forum for researchers interested in developing language resources and technologies for the many languages of Italy (regional, minority, or heritage languages, including those of the neighboring countries).

Submissions

Authors can choose among three paper categories:

  • Regular long papers – up to eight (8) pages maximum*, presenting substantial, original, completed, and unpublished work.
  • Short papers – up to four (4) pages*, describing work-in-progress projects in the early stage of development, new resources, negative results, system demonstrations, and early-career/student work.
  • Position papers – up to eight (8) pages*, for reflective considerations of methodological, best practice, and institutional issues (e.g., ethics, data ownership, speakers’ community involvement, de-colonizing approaches).

The above page limits exclude any number of additional pages that may be needed for references.

The form of the presentation may be oral or poster, whereas in the proceedings there is no difference between the accepted papers. Submission is NOT anonymous, and the official LREC-COLING 2024 format must be adopted. Each paper will be reviewed by three independent reviewers.

Invited speakers

TBA

Important Dates

  • 26 February 2024: submission due
  • 18 March 2024: reviews due
  • 22 March 2024: notifications to authors
  • 5 April 2024: camera-ready (PDF) due

Identify, Describe and Share your LRs!

When submitting a paper from the START page, authors will be asked to provide essential information about resources (in a broad sense, i.e. also technologies, standards, evaluation kits, etc.) that have been used for the work described in the paper or are a new result of your research. Moreover, ELRA encourages all LREC-COLING authors to share the described LRs (data, tools, services, etc.) to enable their reuse and replicability of experiments (including evaluation ones).

Workshop Organizers

Maite Melero, Sakriani Sakti, Claudia Soria

Program Committee

  • Mohammad A. M. Abushariah (The University of Jordan, Jordan)
  • Manex Agirrezabal (University of Copenhagen – Center for Sprogteknologi | Center for Language Technology, Denmark)
  • Shyam S. Agrawal (KIIT, Gurugram, India)
  • Begoña Altuna (HiTZ Center - Ixa, Euskal Herriko Unibertsitatea | University of the Basque Country, Spain)
  • Antti Arppe (University of Alberta, Canada)
  • Martin Benjamin (Kamusi Project International)
  • Delphine Bernhard (Université de Strasbourg, LiLPa, France)
  • Steven Bird (Charles Darwin University, Australia)
  • Claudia Borg (University of Malta)
  • Matt Coler (University of Groningen, Campus Fryslân, The Netherlands)
  • Dan Cristea (Romanian Academy, Romania)
  • Pradip Kumar Das (IIT Guwahati, India)
  • Seza Doğruöz (Universiteit Gent, België | Ghent University, Belgium)
  • Stefano Ghazzali (Language Technologies Unit, Prifysgol Bangor | Bangor University, Bangor, Gwynedd)
  • Itziar Gonzalez-Dios (HiTZ Basque Center for Language Technologies -  Ixa, University of the Basque Country UPV/EHU)
  • Lars Hellan (Norwegian University of Science and Technology, Norway)
  • Mélanie Jouitteau (IKER, CNRS, France)
  • Richard Littauer (unaffiliated)
  • Teresa Lynn (Mohamed bin Zayed University of Artificial Intelligence, United Arab Emirates)
  • Nina Markl (University of Essex, UK)
  • Maite Melero (Barcelona Supercomputing Center, Espanya | Spain)
  • Peter Mihajlik (Budapest University of Technology and Economics, Hungary)
  • Win Pa Pa (UCS Yangon, Myanmar)
  • Sandy Ritchie (Google Research)
  • Sakriani Sakti (JAIST, Japan)
  • Claudia Soria (CNR-ILC, Italia | Italy)
  • Daan Van Esch (Google Research)
  • Menno van Zaanen (South African Centre for Digital Language   Resources, South Africa)
  • Jenifer Vega Rodriguez (GIPSA-lab, Université Grenoble Alpes, France)
  • Marcely Zanon Boito (NAVER Labs Europe, France)

Contact

claudia.soria@ilc.cnr.it

Please, write “SIGUL2024” in the subject of your e-mail.


3-3-14(2024-05-20) The 8th Workshop on Cognitive Aspects of the Lexicon (CogALex-VIII), Torino, Italy

The 8th Workshop on Cognitive Aspects of the Lexicon (CogALex-VIII)

 

co-located with LREC-COLING 2024
https://lrec-coling-2024.org/about-lrec-coling/

 

location: Torino, Italy

date of the workshop: May 20, 2024

 

website : https://sites.google.com/view/cogalex-viii-2024

submission:  https://softconf.com/lrec-coling2024/cogalex2024/

 

 

1. Goal

The way we look at the lexicon has changed dramatically over the last few decades. While in the past it was considered an appendix to grammar, the lexicon has now moved to center stage. Indeed, there is hardly any task in NLP that can be conducted without it. Also, many new proposals have emerged during the last few years. Living in a fast-moving world, it is hard for anyone to stay on top of the wave. Hence the reason for organizing an event like this.

The goal of this workshop is to provide builders and users of lexical resources (researchers in NLP, psychologists, computational lexicographers) a forum to share their knowledge and needs concerning the construction, organization, and use of a lexicon by people (lexical access) and machines (NLP, IR, data mining).

Like in the past, we invite researchers to address unsolved problems concerning the lexicon, this time, however, also considering Large Language Models (LLMs). More precisely, we would like to explore their potential for building and using lexical resources as well as their ability to deal with the cognitive aspects of the lexicon.

We solicit contributions including, but not limited to, the topics listed below, which can be considered from any of the following points of view:

  • traditional-, computational- or corpus linguistics,
  • neuro- or psycholinguistics (tip of the tongue problem, word associations), 
  • mathematics (vector-based approaches, graph theory, small-world problems), etc.

 

2. Possible Topics

  • The potential of Large Language Models for the creation and use of lexical resources;
  • Organization, i.e., structure of the lexicon;
  • The meaning of words and how to reveal it;
  • Analysis of the conceptual input given by a dictionary user;
  • Methods for crafting dictionaries or indexes;
  • Creation of new types of dictionaries;
  • Dictionary access (navigation and search strategies), interface issues

For more details see: https://sites.google.com/view/cogalex-viii-2024

 

3. Important dates:

  • Submission deadline:          February 23, 2024
  • Date of notification:         March 20, 2024
  • Camera-ready deadline:        March 29, 2024
  • COGALEX workshop:             May 20, 2024

 

4. Submissions

Two types of submissions are invited: 

  • Full papers:  should not exceed eight (8) pages of text, plus unlimited references. These are intended to be reports of original research.
  • Short papers:  may consist of up to four (4) pages of content, plus unlimited references. Appropriate short paper topics include preliminary results, application notes, descriptions of work in progress, etc.

  

Dual submission policy: papers may NOT be submitted to the workshop if they are or will be concurrently submitted to another meeting or publication.

Submissions must be anonymous, electronic, and in PDF format. They must be made via SOFTCONF: https://softconf.com/lrec-coling2024/cogalex2024/.

 

To create your document, please follow the guidelines defined by COLING using their style sheets (https://lrec-coling-2024.org/authors-kit/).

 

5. Invited Speaker


Gilles-Maurice de Schryver (Ghent University, Belgium, https://tshwanedje.com/members/gmds/cv.html)

Tentative title: ‘Fine-tuning LLMs for lexicography’

 

6. Workshop Organizers

 




3-3-15(2024-05-20)CfP LREC-COLING 2024 - The 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation, Torino, Italy

LREC-COLING 2024
The 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation
Lingotto Conference Centre - Turin (Italy)

20-25 May 2024

https://lrec-coling-2024.lrec-conf.org

 

Twitter: @LrecColing2024

Online registration for the LREC-COLING 2024 main conference, workshops, and tutorials is now open @ https://cvent.me/egYlAz

Early-Bird Deadline: April 15, 2024 (23:59 AoE)
Fees and Registration Policy: https://lrec-coling-2024.org/registration/

LREC-COLING 2024 Contacts:

    General contact: contact@lrec-coling-2024.org

    Invitation and visa letters: lreccoling24-local-chairs-l@ufal.mff.cuni.cz

    ELRA membership, payment, invoices: ELRASecretariat@lrec-coling-2024.org

    Scientific programme and Main conference papers:
lreccoling24-program-chairs-l@ufal.mff.cuni.cz /
lreccoling24-general-chairs-l@ufal.mff.cuni.cz

    Workshops and Tutorials:
    lreccoling24-all-workshop-chairs-l@ufal.mff.cuni.cz
    lreccoling24-tutorial-chairs-l@ufal.mff.cuni.cz


https://lrec-coling-2024.org/
Follow us on Twitter: @LrecColing2024

First Call for Papers

Two key international players in the field of computational linguistics, the ELRA Language Resources Association (ELRA) and the International Committee on Computational Linguistics (ICCL), are joining forces to organize the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), to be held in Turin, Italy, on 20-25 May 2024.

IMPORTANT DATES

(All deadlines are 23:59 UTC-12:00, 'Anywhere on Earth')

  • 22 September 2023: start of the paper anonymity period
  • 13 October 2023: final submissions (long, short, and position papers)
  • 13 October 2023: workshop/tutorial proposal submission deadline
  • 22-29 January 2024: author rebuttal period
  • 5 February 2024: final reviews
  • 19 February 2024: notification of acceptance
  • 25 March 2024: camera-ready papers due
  • 20-25 May 2024: LREC-COLING 2024 conference

SUBMISSION TOPICS

LREC-COLING 2024 invites the submission of long and short papers presenting substantial, original, and unpublished research on all aspects of natural language and computation, language resources (LRs), and evaluation, including spoken and sign language and multimodal interaction. Submissions are invited in five broad categories: (i) theories, algorithms, and models, (ii) NLP applications, (iii) language resources, (iv) NLP evaluation, and (v) topics of general interest. Submissions spanning multiple categories are particularly welcome.

(i) Theories, algorithms, and models

  • Discourse and pragmatics
  • Explainability and interpretability of large language models
  • Language modeling
  • CL/NLP and linguistic theories
  • CL/NLP for cognitive modeling and psycholinguistics
  • Machine learning for CL/NLP
  • Morphology and word segmentation
  • Semantics
  • Tagging, chunking, syntax, and parsing
  • Textual inference

(ii) NLP applications

  • Applications (including BioNLP and eHealth, NLP for legal purposes, NLP for social media and journalism, etc.)
  • Dialogue and interactive systems
  • Document classification, topic modeling, information retrieval, and cross-lingual retrieval
  • Information extraction, text mining, and knowledge graph derivation from text
  • Machine translation of spoken/written/sign languages and translation aids
  • Sentiment analysis, opinion mining, and argument mining
  • Speech recognition/synthesis and spoken language understanding
  • Natural language generation, summarization, and simplification
  • Question answering
  • Offensive speech detection and analysis
  • Vision, robotics, multimodal and grounded language acquisition

(iii) Language resource design, creation, and use: text, speech, sign, gesture, image, in single or multimodal/multimedia data

  • Guidelines, standards, best practices, and models for LRs, interoperability
  • Methodologies and tools for LR construction, annotation, and acquisition
  • Ontologies, terminology, and knowledge representation
  • LRs and the Semantic Web (including Linked Data, knowledge graphs, etc.)
  • LRs and crowdsourcing
  • Metadata for LRs and semantic/content markup
  • LRs in systems and applications such as information extraction, information retrieval, audio-visual and multimedia search, speech dictation, meeting transcription, computer-assisted language learning, training and education, mobile communication, machine translation, speech translation, summarization, semantic search, text mining, inference, reasoning, sentiment analysis/opinion mining, (speech-based) dialogue systems, natural language and multimodal/multisensory interactions, chatbots, voice-controlled services, etc.
  • Use of (multilingual) LRs in various application fields such as e-government, e-participation, e-culture, e-health, mobile applications, digital humanities, social sciences, etc.
  • LRs in the era of deep neural networks
  • Open, linked, and shared data and tools, open and collaborative architectures
  • Bias in language resources
  • User needs, LT for accessibility

(iv) NLP evaluation methodologies

  • NLP evaluation methodologies, protocols, and measures
  • Benchmarking of systems and products
  • Evaluation metrics in machine learning
  • Usability evaluation of HLT-based user interfaces and dialogue systems
  • User satisfaction evaluation

(v) Topics of general interest

  • Multilingual issues, language coverage and diversity, less-resourced languages
  • Replicability and reproducibility issues
  • Organizational, economic, ethical, and legal issues
  • Priorities, perspectives, and strategies in national and international policies
  • International and national activities, projects, and initiatives

LREC-COLING 2024 solicits high-quality submissions written in English. Three forms of papers will be considered:

A. Regular long papers - up to eight (8) pages maximum*, presenting substantial, original, completed, and unpublished work.

B. Short papers - up to four (4) pages*, describing a small focused contribution, negative results, system demonstrations, etc.

C. Position papers - up to eight (8) pages*, discussing key hot topics, challenges, and open issues, as well as cross-fertilization between computational linguistics and other disciplines.

* Excluding any number of additional pages for references, ethical considerations, conflict-of-interest statements, and data and code availability statements.

Appendices or supplementary materials will be allowed ONLY in the final camera-ready version, not at submission time, as papers must be reviewable without reference to supplementary materials.

Linguistic examples, if any, should be presented in the original language but also translated into English to ensure accessibility for a broader audience.

Note that paper types are decided orthogonally to the final form of presentation (i.e., oral or poster).

AUTHOR RESPONSIBILITIES

Papers must be original, unpublished work. Papers must be anonymized to allow double-blind reviewing; submissions must therefore not include the authors' names and affiliations. Submissions must also avoid links to non-anonymized repositories: code should be submitted either as supplementary material in the final version of the paper, or as a link to an anonymized repository (e.g., Anonymous GitHub or Anonym Share). Papers that do not conform to these requirements will be rejected without review.

If the paper is available as a preprint, this should be indicated on the submission form but not in the paper itself. In addition, LREC-COLING 2024 will follow the same policy as ACL conferences in establishing an anonymity period during which non-anonymous posting of preprints is not allowed.

More specifically, direct submissions to LREC-COLING 2024 may not be made available online (e.g., via a preprint server) in a non-anonymized form after 22 September, 23:59 UTC-12:00 (for arXiv, note that this refers to the submission time).

This policy also includes instructions for reviewers not to penalize papers for failing to cite recent preprints. Authors are asked to cite, where possible, published versions of papers rather than preprint versions.

Papers that have been or will be under consideration for other venues at the same time must be declared at submission time. If a paper is accepted for publication at LREC-COLING 2024, it must be withdrawn from other venues immediately. If a paper under review at LREC-COLING 2024 is accepted elsewhere and the authors intend to proceed there, the LREC-COLING 2024 committee must be notified immediately.

ETHICS STATEMENT

We encourage all authors submitting to LREC-COLING 2024 to include an explicit ethics statement on the broader impact of their work, or on other ethical considerations, after the conclusion but before the references. The ethics statement will not count toward the page limit (8 pages for long papers, 4 pages for short papers).

PRESENTATION REQUIREMENT

All papers accepted to the main conference track must be presented at the conference to appear in the proceedings, and at least one author must register for LREC-COLING 2024.

All papers accepted to the main conference will be required to submit a presentation video. The conference will be hybrid, with an emphasis on encouraging interaction between the online and in-person modalities, so presentations may be either on-site or virtual.

 

 


3-3-16(2024-05-22) The Industry Day@LREC-COLING 2024 conference week, Turin, Italy.

The Industry Day will take place on May 22, 2024, during the LREC-COLING 2024 conference week in Turin (Italy).

As a joint conference, LREC and COLING wish to continue to provide a unique forum for researchers, industry, and funding agencies from across a wide spectrum of areas to discuss issues and opportunities, find new synergies, and promote initiatives for international cooperation, in support of investigations in language sciences, progress and innovation in language technologies, and development of corresponding products, services, applications, and standards.

LREC-COLING 2024 invites proposals for the Industry Day to be held in conjunction with the Main Conference.

The objective of the Industry Day is to devote time to industrial achievements and perspectives, with presentations by industry of their applications and innovations in the fields of AI, NLP, and speech processing. This dedicated day/track is also designed to bridge the gap between academic research and real-world industry practices, including evaluation methodologies, and to better understand the challenges, including ethics and data protection, and opportunities in the current industrial landscape. Finally, this day is also meant as a networking platform for conference participants, experts, and professionals to foster collaborations.

Topics of interest

  •  (Large) Language Modelling
  • Integrated Systems and Applications
  • Dialogue, Conversational Systems, Chatbots, Human-Robot Interaction
  • Machine Learning Models and Techniques for Language Technologies
  • Applications of language technologies for less-resourced languages or in times of crisis and emergency
  • Importance of language resources and building blocks
  • Policy issues, Ethics, Legal Issues, Bias Analysis
  • Evaluation and Validation Methodologies


Proposal Format

Please submit the following information:

  • Presentation of the company / short biography
  • Title and brief abstract of the talk (150-200 words)
  • Motivation (e.g., start-up company, standardization, general technology, specific technology associated with products)
  • Any specific requirements or considerations

Submission link: https://docs.google.com/forms/d/1wJnERzsTqucjKVAqCXKNm-u_piaed24gSj4H_NfWwhs/
 
Submission Deadline: February 29, 2024
Notification of acceptance: March 29, 2024

Contact: choukri@elda.org
LREC-COLING 2024: https://lrec-coling-2024.org/


3-3-17(2024-05-25)1st CfP LEGAL 2024 Legal and Ethical Issues in Human Language Technologies Workshop at LREC-COLING 2024, Turin, Italy

1st CfP LEGAL 2024
Legal and Ethical Issues in Human Language Technologies
Workshop at LREC-COLING 2024, Turin, Italy

https://legal2024.mobileds.de/

May 25, 2024


About the Workshop

2023 is likely to be remembered as a year dominated by discussions about Artificial Intelligence (AI) and Large Language Models (LLMs). These technologies require data to be collected and utilized in unprecedented amounts. Large sets of language data are owned by stakeholders that are not necessarily involved in the development of such technologies. To use these sets for AI and LLMs, it is essential to repackage and repurpose them for such an endeavor. Language data, despite their intangible nature, are often subject to legal constraints which need to be addressed in order to guarantee lawful access to and re-use of these data. In recent years, considerable efforts have been made to adapt legal frameworks to the advancements in technology while taking into account the interests of various stakeholders. From the technological perspective, the strict consideration of legal aspects raises further questions beyond pure recording technology and participant consent. Several key questions arise:

- What is the Intellectual Property status of large language datasets, the corresponding Large Language Models, and their potential outputs?
- How can identifying information used in deep learning be removed or anonymized (and is this mandatory), and how reliable are predictions/models based on anonymized data?
- What impact does this have on usability and computational costs?


The purpose of this full-day workshop is to build bridges between technology and legal frameworks, and to discuss current legal and ethical issues in the human language technology sector.

Important Dates

Submission Deadline: March 4, 2024
Notification of Acceptance: March 30, 2024
Camera ready: April 5, 2024
Workshop Day: May 25, 2024

Topics

- Impact of statutory exceptions on text and speech data mining
practices in the field of Human Language Technologies.
- Impact of the regulatory environment at the international level (e.g.
EU Data Act, Digital Governance Act, Digital Services Act, AI Act; the
Chinese “2023 draft rules on generative AI”, the USA Blueprint for an AI
Bill of Rights and other international or national regulations) on the
circulation and use of language data.
- Legal issues related to the production and use of Large Language
Models (Intellectual Property, Data Governance and Data Protection aspects).
- Concrete applications as to how language technologies can help resolve
legal issues related to data collection, data sharing and data reuse.
- Ethical considerations related to personal data collection and re-use
- Trust and transparency in language and speech technologies
- Efficient anonymization techniques, and the related responsibility,
and their impact on usability and performance
- Re-identification issues/De-anonymization approaches and techniques
- Harmonizing differing perspectives of data scientists and legal
experts, worldwide


Submission

Extended abstracts of 1500-2000 words are required for the initial submission. The full papers will be published as workshop proceedings along with the LREC-COLING main conference. For these, the instructions of the main conference must be followed: https://lrec-coling-2024.org/authors-kit/

START Submission Page: https://softconf.com/lrec-coling2024/legal2024/

Identify, Describe and Share your LRs!

When submitting a paper from the START page, authors will be asked to
provide essential information about resources (in a broad sense, i.e.
also technologies, standards, evaluation kits, etc.) that have been used
for the work described in the paper or are a new result of your
research. Moreover, ELRA encourages all LREC-COLING authors to share the
described LRs (data, tools, services, etc.) to enable their reuse and
replicability of experiments (including evaluation ones).

Organizers

Ingo Siegert, OvG University Magdeburg (Germany)
Khalid Choukri, ELRA/ELDA (France)
Pawel Kamocki, IDS Mannheim (Germany)
Kossay Talmoudi, ELDA (France)


3-3-18(2024-06-03) 27èmes Rencontres Jeunes Chercheurs (RJC 2024), Paris, France

27th Rencontres Jeunes Chercheurs (RJC 2024)

Passage(s)

3-4 June 2024

4, rue des Irlandais, 75005 Paris

Dear colleagues,

We are pleased to share the call for papers for the 27th Rencontres Jeunes Chercheurs, which will take place on 3 and 4 June 2024 at Université Sorbonne Nouvelle - Paris 3 (Maison de la Recherche), 4, rue des Irlandais - 75005 Paris.

The theme selected this year is: 'Passage(s)'.

Presentations will be given in French.

Oral presentations will be 20 minutes long, followed by 10 minutes of discussion.

Submission of proposals:

Anyone wishing to give a presentation is invited to submit an abstract of at most 3,000 characters including spaces (excluding figures and bibliography), in French, by 6 February 2024 at 7 p.m. (Paris time). Proposals must be submitted at https://rjc27.sciencesconf.org/. By choosing the 'Nouveau dépôt' (new submission) option, you can enter your personal details (surname, first name, affiliation). Proposals are reviewed anonymously, so please do not include your name or academic affiliation in the PDF file attached to your proposal.

Calendar

Submission deadline: 14 February 2024 (extended)

Notification to participants: April 2024

Conference dates: 3 and 4 June 2024

The RJC 2024 Organizing Committee

PASSAGE(S) — 3 and 4 June 2024, Paris. Created in 1998, the Rencontres Jeunes Chercheurs et Chercheuses en Sciences du Langage of ED 622 (Université Paris Cité and Université Sorbonne Nouvelle) give young researchers enrolled in a doctoral or research master's programme the opportunity to present their work as oral communications. The theme chosen for the call for papers of this 27th edition is 'Passage(s)'. With this formulation we wish to draw attention to the continuous or discrete changes that can affect languages, speech and language practices. In the plural, passages treat these transfers as a source of borrowings and reciprocal challenges at the interdisciplinary level, but also of exchanges between the scientific world and the public sphere. The RJC 2024 theme is not tied to any particular critical or theoretical framework: it is intended to leave free rein to participants' own interpretations, through several lines of inquiry described below as examples.

1. Passage(s) of time
In diachrony, passages may refer to the successive stages of a language's evolution, but also to pivotal moments of transition, shift and transformation. One may thus reflect on language geneses, states of a language, period styles, and language policies and their impact on the revitalisation of endangered languages (Grinevald & Costa, 2010; Bennett, 2020). The concept of emergence (Adam, 2012) may also be brought into the debate, as may phenomena of persistence. At the level of speakers, the passage of time affects every aspect of language and speech, from acquisition to decline. We invite reflection on the transitions speakers may experience: first acquisition of one or several languages, evolution of their language repertoires (CEFR, 2000), pathologies (Busto-Crespo et al., 2016) and healthy ageing (Stathopoulos, 2011; Tremblay, 2019).

2. Sociodiscursive passage(s)
At the sociolinguistic level, we invite reflection on code-switching (Hall & Nilep, 2015) and code-mixing (Auer, 1999), on language-contact phenomena (Léglise & Alby, 2013), and on translation processes. At the discursive level, one may consider effects of dialogism (Bakhtin, 1929) or the representation of other discourse (Authier-Revuz, 2020). Finally, attention may also turn to the passage of a word's meaning from one sense to another over time and in discourse (Lecolle, 2007), to definitional conflict, or to resignification.

3. Didactic passage(s)
The act of teaching and learning may be considered as a passage or transmission of knowledge. Focusing on the subjects involved, one may also consider, for learners, the passage from one level of proficiency to another (progression) or from the status of learner to that of speaker, and, for teachers, the passage from the status of speaker to that of teacher. Science popularisation may also be considered a transformative passage (Véron, 2021).

4. Passage(s) between production and perception
We call here for reflection on the passages between the different linguistic and phonetic levels involved in speech production and perception: from the cognitive and neurological phenomena at play in producing and uttering a message to its perception (Levelt, 2001; Drager, 2010). One may also study the notion of change of state of the articulators and alterations of voice and speech, with the vocal tract as the physical passage of the airflow.

5. Methodological issues of passage(s)
This perspective considers passage as a key element of the scientific process: from theory to fieldwork (Candea, 2017), from data to abstraction, from experimentation to theoretical modelling, from hypothesis to result. Any theoretical modelling may then be regarded as a translation in itself, that is, a passage from one state to another. One may examine the issues raised by the passage from one discourse medium to another: from spoken to written language (transcription and grammatisation), from draft to final text in the linguistics of writing, and from offline to digital (Paveau, 2017). The passage from a marked to an unmarked object of study (Cameron, 2014; Bucholtz, 1999; Cesbron, 2022) leads to constructing socially unmarked linguistic identities as objects of analysis. NLP has contributed to many such passages, for instance from manual to automatic processing with machine translation or automatic corpus annotation (Balakrishnan & Lloyd-Yemoh, 2014). The digitisation of handwritten corpora (OCR, HTR), synthesised data (speech, text) and language or speaker recognition are also concerned by these passages (from analogue to digital, from signal/text to vector, etc.).

Bibliography:
Adam, J. (2012). Le modèle émergentiste en linguistique textuelle. L'information grammaticale, 134, Paris, Peeters, p. 30-37.
Auer, P. (1999). From codeswitching via language mixing to fused lects: Toward a dynamic typology of bilingual speech. International Journal of Bilingualism, 3(4), 309-332.
Authier-Revuz, J. (2020). La Représentation du Discours Autre. Berlin, Boston: De Gruyter.
Bakhtine, M. ([1929] 1970). Problèmes de la poétique de Dostoïevski. Paris: Seuil.
Balakrishnan, V., & Lloyd-Yemoh, E. (2014). Stemming and lemmatization: A comparison of retrieval performances. Lecture Notes on Software Engineering, 2(3), 262-267.
Bennett, J. (2020). Mothering through language: gender, class, and education in language revitalization among Kaqchikel Maya women in Guatemala. Journal of Linguistic Anthropology, 30(2), 196-212.
Bucholtz, M. (1999). You da man: Narrating the racial other in the production of white masculinity. Journal of Sociolinguistics, 3(4), 443-460.
Busto-Crespo, O., Uzcanga-Lacabe, M., Abad-Marco, A., Berasategui, I., García, L., Maraví, E., Aguilera-Albesa, S., Fernández-Montero, A., & Fernández-González, S. (2016). Longitudinal Voice Outcomes After Voice Therapy in Unilateral Vocal Fold Paralysis. Journal of Voice, 30(6), 767.e9-767.e15.
Cameron, D. (2014). Straight talking: the sociolinguistics of heterosexuality. Langage et société, 148(2), 75-93.
Candea, M. (2017). La notion d'« accent de banlieue » à l'épreuve du terrain. GlottopoL, (29), 13-26.
Cesbron, A. (2022). Are the straights ok? Analyse multimodale de la resignification discursive de l'hétérosexualité sur Twitter et Instagram : défis et limites de la construction et préparation d'un corpus de données numériques. Revista Heterotópica, 4, 70-94.
Drager, K. (2010). Sociophonetic variation in speech perception. Language and Linguistics Compass, 4(7), 473-480.
Grinevald, C., & Costa, J. (2010). Langues en danger : le phénomène et la réponse des linguistes. Faits de langues, 35(1), 23-37.
Hall, K., & Nilep, C. (2015). Code-Switching, Identity, and Globalization. The Handbook of Discourse Analysis, 597-619.
Lecolle, M. (2007). Polysignifiance du toponyme, historicité du sens et interprétation en corpus. Le cas de Outreau. Corpus, (6), 101-125.
Léglise, I., & Alby, S. (2013). Les corpus plurilingues, entre linguistique de corpus et linguistique de contact : réflexions et méthodes issues du projet CLAPOTY. Faits de langues, 41(1), 97-124.
Levelt, W. J. M. (2001). Relations between Speech Production and Speech Perception: Some Behavioral and Neurological Observations. In E. Dupoux (Ed.), Language, Brain, and Cognitive Development. Cambridge, MA: MIT Press.
Paveau, M. A. (2017). L'analyse du discours numérique. Dictionnaire des formes et des pratiques. Hermann.
Stathopoulos, E. T., Huber, J. E., & Sussman, J. E. (2011). Changes in Acoustic Characteristics of the Voice Across the Life Span: Measures from Individuals 4-93 Years of Age. Journal of Speech, Language, and Hearing Research, 54, 1011-1021.
Tremblay, P., Poulin, J., Martel-Sauvageau, V., & Denis, C. (2019). Age-related deficits in speech production: from phonological planning to motor implementation. Experimental Gerontology, 126, 110695.
Véron, L. (2021). « Twitta », « influenceuse », « intellectuelle », « communicante » ? Être enseignante-chercheuse sur Twitter. Tracés. Revue de Sciences humaines, (21), 29-50.

Additional references:
Aguilar, J., Brudermann, C., & Leclère, M. (2014). Langues, cultures et pratiques en contexte : interrogations didactiques. Paris: Riveneuve.
Alegria, R., Vaz Freitas, S., & Manso, M. C. (2021). Efficacy of speech language therapy intervention in unilateral vocal fold paralysis – a systematic review and a meta-analysis of visual-perceptual outcome measures. Logopedics Phoniatrics Vocology, 46(2), 86-98.
Angouri, J., & Baxter, J. (Eds.). (2021). The Routledge Handbook of Language, Gender and Sexuality. Routledge.
Arnold, A. (2015). Voix et transidentité : changer de voix pour changer de genre ? Langage et société, 151(1), 87-105.
Baese-Berk, M. M. (2019). Interactions between speech perception and production during learning of novel phonemic categories. Attention, Perception & Psychophysics, 81, 981-1005.
Burke, D. M., & Mackay, D. G. (1997). Memory, language and ageing. Phil. Trans. R. Soc. Lond. B, 352, 1845-1856.
Chen, X., Dronjic, V., & Helms-Park, R. (2016). Reading in a Second Language: Cognitive and Psycholinguistic Issues. New York: Routledge.
Coetzee, A. W., Beddor, P. S., Styler, W., Tobin, S., Bekker, I., & Wissing, D. (2022). Producing and perceiving socially indexed coarticulation in Afrikaans. Laboratory Phonology, 13(1), 215-219.
Costa, J. (2017). Revitalising Language in Provence: A Critical Approach. John Wiley & Sons.
Doury, M., & Micheli, R. (2016). Enjeux argumentatifs de la définition : l'exemple des débats sur l'ouverture du mariage aux couples de même sexe. Langages, (204), 121-138.
Eckert, P. (2002). Constructing meaning in sociolinguistic variation. (Un)Imaginable Futures: Anthropology Faces the Next 100 Years. The Annual Meeting of the American Anthropological Association, New Orleans, November 20-24.
Hatzidaki, A. (2013). A cognitive approach to translation: The psycholinguistic perspective. In A. Rojo & I. Ibarretxe-Antuñano (Eds.), Cognitive Linguistics and Translation: Advances in Some Theoretical Models and Applications (pp. 395-414). Berlin, Boston: De Gruyter Mouton.
Leroy, S. (2004). De l'identification à la catégorisation : l'antonomase du nom propre en français (Vol. 57). Peeters Publishers.
Li, C., Su, Y., & Liu, W. (2018, July). Text-to-text generative adversarial networks. In 2018 International Joint Conference on Neural Networks (IJCNN) (pp. 1-7). IEEE.
Ramscar, M. (2022). Psycholinguistics and Aging. Oxford Research Encyclopedias, Linguistics.
Molinié, M. (2023). Autobiographie, réflexivité et construction des savoirs en didactique des langues. Paris: L'Harmattan.
Paveau, M. A. (2019). La blessure et la salamandre. Théorie de la resignification discursive.
Samy, A. H., Rickford, J. R., & Ball, A. F. (Eds.). (2016). Raciolinguistics: How Language Shapes Our Ideas About Race. Oxford University Press.
Teston, B. (2001). L'évaluation objective des dysfonctionnements de la voix et de la parole ; 2e partie : Les dysphonies. Travaux interdisciplinaires du Laboratoire Parole et Langage, 20, 169-232.
Véronique, G. D. (2013). Émergence des langues créoles et rapports de domination dans les situations créolophones. In Situ. Revue des patrimoines, (20).
Weiss, R. J., Skerry-Ryan, R. J., Battenberg, E., Mariooryad, S., & Kingma, D. P. (2021, June). Wave-Tacotron: Spectrogram-free end-to-end text-to-speech synthesis. In ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) (pp. 5679-5683). IEEE.
Xue, S. A., & Hao, G. J. (2003). Changes in the human vocal tract due to aging and the acoustic correlates of speech production: a pilot study. Journal of Speech, Language, and Hearing Research, 46(3), 689-701.

Organising committee: Léa Robin, Louise Wohmann-Bruzzo, Jean-Claude Mapendano Byamungu, Noémie Trovato, Carole Millot, Hélène Massis, Justin Jacobs, Jules Bouton, Manon Boutin-Charles, Anaïs Ligner

Scientific committee: M. Adda-Decker, J. Aguilar-Rio, A. Amelot, N. Audibert, M. Auzanneau, W. Ayres-Bennett, C. Badiou-Monferran, E. Beaumatin, I. Behr, T. Bertin, P. Boula de Mareuil, C. Brudermann, M. Candea, D. Capin, M. Causa, J-L. Chiss, I. Chitoran, J. Costa, L. Crevier-Buchman, J. David, M. DeChiara, C. Doquet, F. El Qasem, A. Elalouf, P. Faure, C. Fauth, M. Favriaud, S. Fedden, C. Fougeron, J-M. Fournier, I. Galleron, C. Gendrot, D. Gile, L. Greco, P. Halle, F. Isel, A. Lahaussois, M. Lammert, L. Lansari, B. Leclercq, F. Lefeuvre, C. Leguy, R. Mahrer, N. Marignier, C. Masson, M. Molinié, A. Morgenstern, C. Muller, F. Neveu, G. Parussa, M-A. Paveau, C. Pillot-Loiseau, C. Pradeau, S. Prevost, N. Quint, S. Reboul-Touré, R. Ridouane, A. Salazar-Orvig, D. Savatovsky, L. Schmoll, G. Siouffi, V. Spaëth, S. Stratilaki, I. Taravella, P-Y. Testenoire, A. Valentini, C. Van Den Avenne, D. Van Raemdonck, H. Vassiliadou, M. B. Villar Diaz, P. Von Münchow, N. Yamaguchi, H-Y. Yoo


Back  Top

3-3-19(2024-06-10) 3rd ACM International Workshop on Multimedia AI against Disinformation (MAD’24), Phuket, Thailand,

3rd ACM International Workshop on Multimedia AI against Disinformation (MAD'24)
ACM International Conference on Multimedia Retrieval (ICMR'24)
Phuket, Thailand, 10-13 June 2024
https://www.mad2024.aimultimedialab.ro/
https://easychair.org/my/conference?conf=mad2024

*** Call for Papers ***

* Paper submission: 17 March 2024
* Acceptance notification: 7 April 2024
* Camera-ready papers: 25 April 2024
* Workshop @ ACM ICMR 2024: 10 June 2024

Modern communication no longer relies solely on classical media such as newspapers or television; it increasingly takes place on social networks, in real time and with live interactions between users. However, the growth in the amount of available information has also led to an increase in the quantity and quality of misleading content, disinformation and propaganda. Conversely, the fight against disinformation, in which news agencies and NGOs (among others) take part daily to avoid the risk of distorting citizens' opinions, has become even more crucial and demanding, especially on sensitive topics such as politics, health and religion. Disinformation campaigns exploit, among other things, AI-based tools for content generation and manipulation: hyper-realistic visual, speech, text and video content has emerged under the collective name of 'deepfakes', and more recently through the use of large language models (LLMs) and large multimodal models (LMMs), undermining the perceived credibility of media content. It is therefore all the more important to counter these advances by designing new analysis tools capable of detecting the presence of synthetic and manipulated content, accessible to journalists and fact-checkers, robust and reliable, and possibly AI-based in order to reach higher performance. Future multimedia research on disinformation detection relies on combining different modalities and on adopting the latest advances in deep learning approaches and architectures. This raises new challenges and questions that must be addressed in order to reduce the effects of disinformation campaigns.

The workshop, in its third edition, welcomes contributions related to different aspects of AI-based disinformation detection, analysis and mitigation. Topics of interest include, but are not limited to:

- Disinformation detection in multimedia content (e.g., video, audio, texts, images)
- Multimodal verification methods
- Synthetic and manipulated media detection
- Multimedia forensics
- Diffusion and effects of disinformation in social media
- Analysis of disinformation campaigns in socially sensitive domains
- Robustness of media verification against adversarial attacks and real-world complexities
- Fairness and non-discrimination in disinformation detection in multimedia content
- Explaining disinformation and disinformation-detection technologies to non-expert users
- Temporal and cultural aspects of disinformation
- Dataset sharing and governance in AI against disinformation
- Datasets for disinformation detection and multimedia verification
- Open resources, e.g., datasets and software tools
- Large language models for analysing and mitigating disinformation campaigns
- Large multimodal models for media verification
- Multimedia verification systems and applications
- Fusion, ensembling and late-fusion techniques
- Benchmarking and evaluation frameworks


*** Submission Guidelines ***

When preparing your submission, please strictly follow the ACM ICMR 2024 instructions, in order to guarantee the soundness of the review process and inclusion in the ACM Digital Library proceedings. The instructions are available here: https://mad2024.aimultimedialab.ro/submissions/



*** Organising Committee ***

Cristian Stanciu, University Politehnica of Bucharest, Romania
Luca Cuccovillo, Fraunhofer IDMT, Germany
Bogdan Ionescu, University Politehnica of Bucharest, Romania
Giorgos Kordopatis-Zilos, Czech Technical University in Prague, Czechia
Symeon Papadopoulos, Centre for Research and Technology Hellas, Thessaloniki, Greece
Adrian Popescu, CEA LIST, Saclay, France
Roberto Caldelli, CNIT and Mercatorum University, Italy

The workshop is supported by the H2020 project AI4Media - A European Excellence Centre for Media, Society and Democracy (https://www.ai4media.eu/), the Horizon Europe project vera.ai - VERification Assisted by Artificial Intelligence (https://www.veraai.eu/), and the Horizon Europe project AI4Debunk - participative AI-powered assistive tools to support trustworthy online activity of citizens and debunk disinformation (https://ai4debunk.eu/).


On behalf of the organisers,

Cristian Stanciu
https://www.aimultimedialab.ro/

Back  Top

3-3-20(2024-06-10) ACM International Conference on Multimedia Retrieval, Dusit Thani Laguna Phuket, Phuket Island, Thailand,
Effectively and efficiently retrieving information based on user needs
is one of the most exciting areas in multimedia research. The Annual
ACM International Conference on Multimedia Retrieval (ICMR) offers a
great opportunity for exchanging leading-edge multimedia retrieval
ideas among researchers, practitioners and other potential users of
multimedia retrieval systems. ACM ICMR 2024 will take place in Phuket,
Thailand, from 10 to 13 June 2024. The conference venue is the Dusit
Thani Laguna Phuket, on Phuket Island.

ACM ICMR 2024 is calling for high-quality original papers addressing
innovative research in multimedia retrieval and its related broad
fields. The main scope of the conference is not only the search and
retrieval of multimedia data but also analysis and understanding of
multimedia contents, including community-contributed social data,
lifelogging data and automatically generated sensor data, integration
of diverse multimodal data, deep learning-based methodology and
practical multimedia applications.


Topics of Interest

-Multimedia content-based search and retrieval,
-Multimedia-content-based (or hybrid) recommender systems,
-Large-scale and Web-scale multimedia retrieval,
-Multimedia content extraction, analysis, and indexing,
-Multimedia analytics and knowledge discovery,
-Multimedia machine learning, deep learning, and neural networks,
-Relevance feedback, active learning, and transfer learning,
-Fine-grained retrieval for multimedia,
-Event-based indexing and multimedia understanding,
-Semantic descriptors and novel high- or mid-level features,
-Crowdsourcing, community contributions, and social multimedia,
-Multimedia retrieval leveraging quality, production cues, style, framing, and affect,
-Synthetic media generation and detection,
-Narrative generation and narrative analysis,
-User intent and human perception in multimedia retrieval,
-Query processing and relevance feedback,
-Multimedia browsing, summarization, and visualization,
-Multimedia beyond video, including 3D data and sensor data,
-Mobile multimedia browsing and search,
-Multimedia analysis/search acceleration, e.g., GPU, FPGA,
-Benchmarks and evaluation methodologies for multimedia analysis/search,
-Privacy-aware multimedia retrieval methods and systems,
-Fairness and explainability in multimedia analysis/search,
-Legal, ethical, and societal impact of multimedia retrieval research,
-Applications of multimedia retrieval, e.g., news/journalism, media, medicine, sports, commerce, lifelogs, travel, security, and environment.


Important Dates

Regular Paper submission: 01.02.2024
Demo Paper submission: 17.02.2024
Notification of Acceptance: 31.03.2024
Camera-Ready Due: 25.04.2024
Conference: 10 - 13.06.2024

 
Back  Top

3-3-21(2024-06-17) 'Madrid UPM Machine Learning and Advanced Statistics' summer school@Boadilla del Monte (Madrid), Spain

The Technical University of Madrid (UPM) will once more organize the 'Madrid UPM Machine Learning and Advanced Statistics' summer school. The summer school will be held in Boadilla del Monte, near Madrid, from June 17th to June 28th. This year's edition comprises 12 week-long courses (15 lecture hours each), given during two weeks (six courses each week). Attendees may register in each course independently. No restrictions, besides those imposed by timetables, apply on the number or choice of courses.

Early registration is now *OPEN*. Extended information on course programmes, price, venue, accommodation and transport is available at the school's website:

http://www.dia.fi.upm.es/MLAS

There is a 25% discount for members of Spanish AEPIA and SEIO societies. 

Please, forward this information to your colleagues, students, and whomever you think may find it interesting.

Best regards,

Pedro Larrañaga, Concha Bielza, Bojan Mihaljević and Laura Gonzalez Veiga.
-- School coordinators.

*** List of courses and brief description ***

* Week 1 (June 17th - June 23rd, 2024) *

1st session: 9:45-12:45
Course 1: Bayesian Networks (15 h)
      Basics of Bayesian networks. Inference in Bayesian networks. Learning Bayesian networks from data. Real applications. Practical demonstration: R.

Course 2: Time Series (15 h)
      Basic concepts in time series. Linear models for time series. Time series clustering. Practical demonstration: R.
     
2nd session: 13:45-16:45
Course 3: Supervised Classification (15 h)
      Introduction. Assessing the performance of supervised classification algorithms. Preprocessing. Classification techniques. Combining multiple classifiers. Comparing supervised classification algorithms. Practical demonstration: python.

Course 4: Statistical Inference (15 h)
      Introduction. Some basic statistical tests. Multiple testing. Introduction to bootstrap methods. Introduction to Robust Statistics. Practical demonstration: R. 

3rd session: 17:00 - 20:00
Course 5: Deep Learning (15 h)
      Introduction. Learning algorithms. Learning in deep networks. Deep Learning for Computer Vision. Deep Learning for Language. Practical session: Python notebooks with Google Colab with keras, Pytorch and Hugging Face Transformers.

Course 6: Bayesian Inference (15 h)
      Introduction: Bayesian basics. Conjugate models. MCMC and other simulation methods. Regression and Hierarchical models. Model selection. Practical demonstration: R and WinBugs.
     

* Week 2 (June 26th - June 28th, 2024) *

1st session: 9:45-12:45

Course 7: Feature Subset Selection (15 h)
      Introduction. Filter approaches. Embedded methods. Wrapper methods. Additional topics. Practical session: R and python.

Course 8: Clustering (15 h)
      Introduction to clustering. Data exploration and preparation. Prototype-based clustering. Density-based clustering. Graph-based clustering. Cluster evaluation. Miscellanea. Conclusions and final advice. Practical session: R.

2nd session: 13:45-16:45
Course 9: Gaussian Processes and Bayesian Optimization (15 h)
      Introduction to Gaussian processes. Sparse Gaussian processes. Deep Gaussian processes. Introduction to Bayesian optimization. Bayesian optimization in complex scenarios. Practical demonstration: python using GPytorch and BOTorch.
     
Course 10: Explainable Machine Learning (15 h)
      Introduction. Inherently interpretable models. Post-hoc interpretation of black box models. Basics of causal inference. Beyond tabular and i.i.d. data. Other topics. Practical demonstration: Python with Google Colab.
         
3rd session: 17:00-20:00
Course 11:  SVMs, Kernel Methods and Regularized Learning (15 h)
      Regularized learning. Kernel methods. SVM models. SVM learning algorithms. Practical session: Python Anaconda with scikit-learn.
     
Course 12: Hidden Markov Models (15 h)
      Introduction. Discrete Hidden Markov Models. Basic algorithms for Hidden Markov Models. Semicontinuous Hidden Markov Models. Continuous Hidden Markov Models. Unit selection and clustering. Speaker and Environment Adaptation for HMMs. Other applications of HMMs. Practical session: HTK.

Back  Top

3-3-22(2024-06-20) Colloque international 'Nouvelles perspectives d'analyse musicale de la voix', Université Lumière Lyon 2, France

                                            International Conference

           'Nouvelles Perspectives d'analyse musicale de la voix'
           (New Perspectives in the Musical Analysis of the Voice)

              Université Lumière Lyon 2, Lyon, 20-21 June 2024

                                    CALL FOR PAPERS

 

Suggested themes (non-exhaustive list):

• Structural analysis of the singing voice or of musicalised speech.

• Harmonic and melodic analysis techniques applied to the voice.

• Methods and techniques of voice analysis.

• New technological and computational perspectives on voice analysis.

• Stylistic or rhetorical approaches to voice analysis.

• Acoustic, physiological and interdisciplinary exploration of specific vocal techniques, interpretative effects, or varied ways of using the voice.

• Study of rhythm, vocal timbre, phrasing, etc.

 

 

Submission guidelines: We invite you to submit your proposal before 1 FEBRUARY 2024. Proposals, which must include an abstract (2,500 characters maximum, in French or English) and a short bio-bibliographical note, should be sent jointly to Antoine Petit (antoine.petit@univ-lyon2.fr) and Céline Chabot-Canet (celine.chabot-canet@univ-lyon2.fr). Responses will be communicated no later than 8 February 2024. The conference proceedings will be published. Scientific committee: Céline Chabot-Canet, Muriel Joubert, Antoine Petit, Axel Roebel, Catherine Rudent. Organising committee: Antoine Petit (PhD student), Céline Chabot-Canet (Associate Professor), Passages Arts & Littératures (XX-XXI), Université Lumière Lyon 2. Within the framework of the ANR project 'Analyse et tRansformation du Style de chant' (ANR-19-CE38-0001-03).

Back  Top

3-3-23(2024-07-01) CfAbstracts Workshop 'Prosodic features of language learners' fluency', Leiden, The Netherlands

Call for Abstracts for the workshop 'Prosodic features of language learners' fluency'

https://l2fluency.lst.uni-saarland.de/

 

This workshop is a satellite event of 'Speech Prosody' to be held in Leiden (The Netherlands) on 1st of July, 2024. Its aim is to bring together colleagues from two research communities to focus on speech fluency: spoken second/foreign language (L2) on the one hand and speech prosody on the other.

 

In the past, fluency was often ignored in speech prosody research (as reflected in the Handbook of Language Prosody (2022) and also in the Speech Prosody conferences). Moreover, fluency and timing are only rarely treated together with intonation-related aspects in L2 research. However, a broader-ranging view of L2 sentence prosody would benefit both the construction of theories of L2 prosody acquisition and applications such as assessment in teaching, exercises for individual learning, and automatic testing of spoken performances. Likewise, research on language learning does not seem to be well integrated into speech prosody research. This concerns theoretical and methodological aspects, but also the acquisition and annotation of learner data, e.g. in learner corpora.

 

Thus, the scope of the workshop includes topics like measuring fluency, assessment of fluency (human experts, non-experts, and machines), learner corpora and annotation of disfluencies, elements and combinations of disfluencies (e.g. filler particles, disfluent pauses, lengthenings, repetitions, repairs), varying degrees of fluency in different speech styles and tasks, fluency and L2 proficiency levels, intonational aspects of fluency, visual aspects of fluency (e.g. hand-arm gestures, eye-gazing, torso movement), teaching methods for fluency improvement in L2 speech production and perception.

 

Keynote speakers are Lieke van Maastricht (Radboud University Nijmegen) and Malte Belz (Humboldt University Berlin).

 

Interested colleagues are invited to submit a two-page abstract (first page for text, second page for illustrations, tables, and references) to be reviewed by an expert committee. Only oral presentations are planned. In addition to this workshop, we are discussing the possibility of editing a special (open) issue in a recognised journal (e.g. 'Journal of Second Language Pronunciation' or 'Studies in Second Language Acquisition') to which we would encourage presenters of workshop papers to contribute.

 

Important dates: abstract submission deadline: 8 April, notification of acceptance: 1 May, workshop day: 1 July 2024.

 

Organisers: Jürgen Trouvain, Bernd Möbius (both Saarland University) and Nivja de Jong (Leiden University)

 

Back  Top

3-3-24(2024-07-06) Speech Prosody Workshop -CROSSIN: Intonation at the Crossroads, Leiden, The Netherlands

Speech Prosody Workshop Announcement

 

CROSSIN: Intonation at the Crossroads

Speech Prosody Satellite Workshop, Leiden, Saturday 6 July 2024

 

WORKSHOP ANNOUNCEMENT AND CALL FOR POSTER PRESENTATIONS

Intonation is studied by different disciplines in which the research focus varies. One element these approaches have in common is that they must all address intonation meaning. This applies whether researchers are mostly interested in the phonological representation of intonation, its interaction with syntax, semantics, and pragmatics, or its role in communication and speech processing. These perspectives complement each other, yet it is often the case that research focusing on one does not give full consideration to the others: for instance, syntactic approaches to the role of intonation in expressing focus may overlook differences in phonological form in focus expression, while pragmatic approaches may assume that each meaning nuance is directly expressed by a different tune; conversely, studies on intonation phonetics and phonology do not always fully consider meaning. 

 

The aim of this workshop is to reach a more comprehensive view, by bringing together researchers working on intonation from different perspectives so they can enter into dialogue with and learn from each another. The main questions of the workshop are:

 

  1. What is the relationship between syntax, semantics, pragmatics, and intonation? Can we expect a one-to-one correspondence between intonation categories or tunes, on the one hand, and focus or other semantic or pragmatic functions, on the other?
  2. How can we best understand and model intonation meaning and intonation’s role in conversation and processing?

 

We invite abstracts addressing the questions above. The selected abstracts will be presented in a poster session. If there is sufficient interest, poster presentations will be published as a special issue or collection.

 

Keynote speakers: The workshop also includes invited talks by Stavros Skopeteas (Göttingen), Anja Arnhold (Alberta), and commentaries by James German (Aix-Marseille) and Claire Beyssade (Paris 8). The workshop will end with a general round-table discussion. For more information on the workshop, visit https://www.sprintproject.io/crossinworkshop  or http://tinyurl.com/y7zj8h5f .

 

Important dates: abstract submission deadline: 31 March; notification of acceptance: 30 April; workshop day: 6 July 2024

 

Abstract Guidelines

Abstracts should be written in English and should present original research not already submitted to Speech Prosody. The text should not exceed one A4 page, though an additional page for references, examples, and figures may also be added. The following formatting conventions apply: Times New Roman font, size 12, 2.54 cm (1 inch) margins, single spacing. Submissions should be sent as anonymized pdf files to sprintonation@gmail.com by 31 March 2024 at 24:00 AoE. Please provide author details in your email.

 

Organizers: Amalia Arvaniti, Stella Gryllia, Jiseung Kim, Riccardo Orrico, Alanna Tibbs (Radboud University)

 

Back  Top

3-3-25(2024-07-08) 35ème Journées d’Études sur la Parole, Toulouse, France

JEP-TALN 2024 Conference

8 to 12 July 2024

Toulouse, France

======================

 

The SAMoVA, MELODI and IRIS research teams of the Institut de Recherche en Informatique de Toulouse (IRIT, UMR 5505), the PLC team of the Cognition, Langues, Langage, Ergonomie laboratory (CLLE, UMR 5263) and the clinical language neurocognition, linguistics and phonetics axis of the NeuroPsychoLinguistique laboratory (LNPL, URI EA 4156) are jointly organising in Toulouse the 35th Journées d'Études sur la Parole (JEP), the 31st Conférence sur le Traitement Automatique des Langues Naturelles (TALN) and the 26th Rencontre des Étudiants Chercheurs en Informatique pour le Traitement Automatique des Langues (RECITAL).

 

https://jep-taln2024.sciencesconf.org/

 

----------------------------------

 

Important dates (JEP-TALN-RECITAL):

- Paper submission: *** February 2024 (final date) ***
- Notification to authors: 25 April 2024
- Conference dates: 8 to 12 July 2024
- Workshop proposals: *** 22 February 2024 (final date) ***

 

 

The conference topics fall within, but are not limited to, the following categories.

 

TALN-RECITAL

- Phonetics, phonology, morphology, part-of-speech tagging
- Syntax, grammars, parsing, chunking
- Semantics, pragmatics, discourse
- Lexical and distributional semantics
- Linguistic and psycholinguistic aspects of NLP
- Resources for NLP
- Evaluation methods for NLP
- NLP applications (information retrieval and extraction, question answering, machine translation, generation, summarisation, dialogue, opinion analysis, simplification, etc.)
- NLP and multimodality (speech, vision, etc.)
- NLP and multilingualism
- NLP for the Web and social networks
- NLP and under-resourced languages
- NLP and sign languages
- Social and ethical implications of NLP
- NLP and corpus linguistics
- NLP and digital humanities

 

JEP

- Speech acoustics
- Speech and language acquisition
- Speech analysis, coding and compression
- Applications with a spoken component (dialogue, indexing, etc.)
- Second language learning
- Multimodal communication
- Dialectology
- Evaluation, corpora and resources
- Endangered languages
- Language models
- Audio-visual speech
- Speech pathologies
- Phonetics and phonology
- Clinical phonetics
- Speech production / perception
- Prosody
- Psycholinguistics
- Speech recognition and understanding
- Language recognition
- Speaker recognition
- Social signals, sociophonetics
- Speech synthesis

 

The number of pages for JEP/TALN/RECITAL submissions is flexible, but must be between 6 and 10 pages (as detailed in the call, excluding references/appendices). The principle is that the length of a submission should be consistent with its content; reviewers will judge a paper on its quality and on this fit.

Style sheets and the detailed calls are available on the conference website: https://jep-taln2024.sciencesconf.org/

Submission link: https://easychair.org/conferences/?conf=jeptaln2024

Back  Top

3-3-26(2024-07-08) Appel à ateliers JEPTALN 2024, Toulouse, France

Call for workshops at JEP-TALN 2024

JEP-TALN 2024 Conference

8 - 12 July 2024

As part of the joint JEP-TALN 2024 conferences, we are soliciting workshop proposals. Workshops should focus on a specific topic in natural language or speech processing, in order to bring together a set of presentations that are more targeted than those of the plenary sessions.

Each workshop has its own chair and its own programme committee. The workshop chair is responsible for publicising the workshop, for its call for papers, and for coordinating its programme committee.

The JEP-TALN 2024 organisers will handle the logistics (e.g. rooms, coffee breaks and distribution of papers).

Workshops will run in parallel over one day or half a day (2 to 4 sessions of 1h30) on Monday 8 July 2024 on the campus of the Université Jean Jaurès in Toulouse.

Important dates

- Workshop proposal submission deadline: 15 February 2024
- Programme committee response: 29 February 2024

Proposal guidelines

Workshop proposals (1 to 2 A4 pages in PDF format) should include:

- the name and acronym of the workshop
- a brief description of the workshop topic
- the organising committee
- the provisional or expected scientific committee
- the website address
- the desired duration of the workshop (one day or half a day) and the expected audience

Workshop proposals should be sent electronically to jose.moreno@irit.fr and julie.mauclair@irit.fr with the email subject: [Atelier JEP TALN 2024].

Selection process

Workshop proposals will be reviewed by members of the JEP and TALN programme committees, by the AFCP and by the CPERM of ATALA. The following criteria will be considered for acceptance:

- relevance to the topics of either conference
- originality of the proposal

Format

Talks will be given in French (or in English for non-French speakers). Submitted papers must follow the JEP-TALN 2024 format (number of pages at the discretion of the workshop's programme committee). Submission of final versions must follow the main conference schedule.

Back  Top

3-3-27(2024-07-08) Atelier Parole Spontanée lors des JEP-TALN 2024
*Atelier Parole Spontanée (Spontaneous Speech Workshop) at JEP-TALN 2024*

Spontaneous speech is a type of speech characterised mainly by being unprepared, although there is currently no consensus on how to define it. Its specific properties constrain both perceptual and automatic analysis, notably the abundant presence of disfluencies and a greater variability than in constrained speech in articulation, prosody and linguistic levels. Automatic speech processing systems face this major challenge: hesitations, filled pauses, repetitions, corrections, false starts, the particular grammar and syntax of spoken language, language register, reduction phenomena and prosodic patterns are all challenges to be met in order to improve the precision and reliability of automatic speech processing systems. To reflect on these issues, this workshop aims to mobilise the knowledge and experiments of researchers working in this area, taking an interdisciplinary perspective. To this end, we propose to bring together expertise and experience from the varied application domains that involve this type of speech, such as pathological speech, learner speech (L1 or L2), meeting speech, or applications aimed at including people with disabilities.
   
 
*Workshop format and organisation*

The organisers propose a programme in three main stages:
- a presentation of the state of the art on spontaneous speech studies, given by the organisers and a specialist in language sciences [planned duration: 30 minutes]
- a poster session allowing participants to present, in turn, more specific scientific contributions (subject to acceptance of an abstract) [planned duration: 1h30]
- a discussion/wrap-up session [planned duration: 40 minutes]

 
 
 
 
 
*Calendar*
- Abstract submission by email: 6 May 2024
- Notification of acceptance: 13 May 2024
- Workshop: 8 July 2024, during the JEP-TALN 2024 conference in Toulouse
 
 
 
 
*Organising committee*
- Mathieu Balaguer (IRIT-Université Toulouse 3)
- Julie Mauclair (IRIT-Université Toulouse 3)
- Solène Evain (Laboratoire d'Informatique de Grenoble, Université Grenoble Alpes)
- Adrien Pupier (Laboratoire d'Informatique de Grenoble, Université Grenoble Alpes)
- Nicolas Audibert ( Laboratoire de Phonétique et Phonologie, Université Sorbonne Nouvelle)
Back  Top

3-3-28(2024-07-16) CfP 7th Laughter and Other Non-Verbal Vocalisations Workshop - Belfast, UK

Call for Papers: 7th Laughter and Other Non-Verbal Vocalisations Workshop - July 16-17, 2024

We are excited to announce the 7th Laughter and Other Non-Verbal Vocalisations Workshop (bit.ly/LaughterWorkshop2024) on July 16-17 at Queen's University Belfast. The workshop will be a pre-conference event, part of the 2024 Conference of the International Society for Research on Emotion (www.isre2024.org).

Non-verbal vocalisations in human-human and human-machine interactions play important roles in displaying social and affective behaviours and in managing the flow of interaction. Laughter, sighs, clicks, filled pauses, and short utterances such as feedback responses are among the non-verbal vocalisations that are being increasingly studied from various research fields. However, much is still unknown about the phonetic or visual characteristics of non-verbal vocalisations (production/encoding), their relations to the social actions they are part of, their perceived meanings (perception/decoding), and their ordering in interaction. Furthermore, with the increased interest in more naturalness in human-machine interaction, current times also invite exploring how these phenomena can be integrated in speech applications.

Research themes include, but are not restricted to, these aspects of laughter and other non-verbal vocalisations:

- Articulation, acoustics, and perception
- Interaction and pragmatics
- Affective and evaluative meanings
- Social perception and organisation
- Disfluency
- Technology applications

Researchers are invited to submit extended abstracts (2 pages long, including figures and references) describing their work, including work in progress. The deadline for submission is March 15th, 2024. More information about the submission process can be found on our website (bit.ly/LaughterWorkshop2024).

There will be two keynote presentations on the topics treated by the workshop, delivered by Prof. Carolyn McGettigan (University College London, UK) and Prof. Margaret Zellers (Kiel University, Germany).

Looking forward to receiving your contributions and welcoming you at the workshop in July!

Back  Top

3-3-29(2024-07-22) 13th International Conference on Voice Physiology and Biomechanics, Erlangen, Germany

13th International Conference

on Voice Physiology and Biomechanics

Erlangen, Germany 22nd-26th of July 2024

 

 

We cordially invite you to participate in the 13th International Conference on Voice Physiology and Biomechanics, July 22nd-26th, 2024!

After its successful hosting in 2012, we are pleased to welcome you back to Erlangen, Germany! There will be two days of workshops prior to the three days of conference and several social events in the beautiful Nuremberg Metropolitan Region.

The workshops (July 22nd-23rd) and the conference (July 24th-26th) will focus on voice physiology and biomechanics, including computational, numerical and experimental modeling, machine learning, tissue engineering, laryngeal pathologies and many more. Abstract submission and registration will be open from November 1st, 2023.

We are looking forward to your contributions and to seeing you in Erlangen, July 2024!

Back  Top

3-3-30(2024-07-29) 'Conversational Grounding in the Age of Large Language Models,' @ TheEuropean Summer School in Logic, Language, and Information (ESSLLI) 2024, Leuven, Belgium
We are excited to announce an upcoming workshop, 'Conversational Grounding in the Age of Large Language Models,' to be held as part of the European Summer School in Logic, Language, and Information (ESSLLI) 2024. This workshop is dedicated to exploring the intricate and often overlooked mechanism of Conversational Grounding within dialogue systems. It's a vital process through which dialogue participants create, exchange, and apply shared knowledge. This mechanism relies on the sophisticated interplay of multimodal signals, including visual and acoustic cues, combined with inferential reasoning and dynamic feedback, all essential for achieving mutual understanding. The workshop is open to researchers and practitioners - both senior scholars and graduate students - from a variety of disciplines, including linguistics, cognitive science, and computer science.

Details:

When: July 29th - August 2nd, 2024 (week one of ESSLLI)
Hosted by: the European Summer School in Logic, Language, and Information <https://2024.esslli.eu/>
Where: Leuven, Belgium

Participants will be chosen on the basis of a 2-page extended abstract. For more information on how to submit, as well as registration details, please visit the workshop website: https://articulab.hcii.cs.cmu.edu/conversational-grounding-in-the-age-of-large-language-models/
Back  Top

3-3-31(2024-08-07) The 7th IEEE International Conference on Multimedia Information Processing and Retrieval (MIPR 2024) , San Jose, CA, USA

The 7th IEEE International Conference on
Multimedia Information Processing and Retrieval (MIPR 2024)

August 7 – 9, 2024 San Jose, CA, USA

http://www.ieee-mipr.org
https://sites.google.com/view/mipr2024

Joint conference collocation with the IEEE International Conference on
Information Reuse and Integration for Data Science (IRI) 2024

A vast amount of multimedia data is becoming accessible, making the
understanding of spatial and/or temporal phenomena crucial for many
applications. This necessitates the utilization of techniques in the
processing, analysis, search, mining, and management of multimedia
data. The 7th IEEE International Conference on Multimedia Information
Processing and Retrieval (IEEE-MIPR 2024) will take place in San Jose,
CA, USA on August 7–9, 2024, to provide a forum for original research
contributions and practical system design, implementation, and
applications of multimedia information processing and retrieval. The
target audiences include university researchers, scientists, industry
professionals, software engineers and graduate students. The event
includes a main conference as well as multiple associated keynote
speeches, workshops, challenge contests, tutorials, and panels.

Topics

Generative and Foundation Models in Multimedia
- AI-generated Media
- Foundation Models in Vision
- Security of Large AI Models
- Multimodal Media Detection
- Generation and Detection with Diffusion Models
- Media Generation with Large Language Models
- Visual and Vision-Language Pre-training
- Generic Vision Interface
- Alignments in Text-to-image Generation
- Large Multimodal Models
- Multimodal Agents

Trustworthy AI in Multimedia
- AI Reliability for Multimedia Applications and Systems
- AI Fairness for Multimedia Applications and Systems
- AI Robustness for Multimedia Applications and Systems
- Attack and Defense for Multimedia Applications and Systems

Video/Audio in Multimedia
- Speech/Voice Synthesis
- Analysis of Conversation
- Speaker and Language Identification
- Audio Signal Analysis
- Spoken Language Generation
- Automatic Speech Recognition
- Spoken Dialogue and Conversational AI Systems

Vision and Content Understanding
- Multimedia Telepresence and Virtual/Augmented/Mixed Reality
- Visual Concept Detection
- Object Detection and Tracking
- 3D Modeling, Reconstruction, and Interactive Applications
- Multimodal/Multisensor Interfaces, Integration, and Analysis
- Effective and Scalable Solution for Big Data Integration
- Affective and Perceptual Multimedia

Multimedia Retrieval
- Multimedia Search and Recommendation
- Web-Scale Retrieval
- Relevance Feedback, Active/Transfer Learning
- 3D and Sensor Data Retrieval
- Multimodal Media (Images, Videos, Texts, Graph/Relationship) Retrieval
- High-Level Semantic Multimedia Features

Machine/Deep Learning/Data Mining
- Deep Learning in Multimedia Data and Multimodal Fusion
- Deep Cross-Learning for Novel Features and Feature Selection
- High-Performance Deep Learning (Theories and Infrastructures)
- Spatio-Temporal Data Mining
- Novel Dataset for Learning and Multimedia

Multimedia Systems and Infrastructures
- Multimedia Systems and Middleware
- Software Infrastructure for Data Analytics
- Distributed Multimedia Systems and Cloud Computing

Networking in Multimedia
- Internet Scale System Design
- Information Coding for Content Delivery

Data Management
- Multimedia Data Collection, Modeling, Indexing, or Storage
- Data Integrity, Security, Protection, Privacy
- Standards and Policies for Data Management

Novel Applications
- Multimedia Applications for Health and Sports
- Multimedia Applications for Culture and Education
- Multimedia Applications for Fashion and Living
- Multimedia Applications for Security and Safety

Internet of Multimedia Things
- Real-Time Data Processing
- Autonomous Systems (Driverless Cars, Robots, Drones, etc.)
- Mobile and Wearable Multimedia

User Experience and Engagement
- Quality of Experience
- User Engagement
- Emotional and Social Signals

Paper Submission: The conference will accept regular papers (6 pages),
short papers (4 pages), and demo papers (4 pages), including
references. Authors are encouraged to compare their approaches,
qualitatively or quantitatively, with existing work and explain the
strengths and weaknesses of the new approaches. The CMT online
submission site is at https://cmt3.research.microsoft.com/MIPR2024.
All accepted papers presented in MIPR 2024 will be published in the
conference proceedings which will also be available online at the IEEE
Xplore digital library. 

Important Dates:
- Paper (regular/short/demo) submission: April 15, 2024 Pacific Time
- Paper review available: May 8, 2024
- Notification of acceptance: May 20, 2024
- Camera-ready deadline: June 17, 2024

Back  Top

3-3-32(2024-09-06) 4th SPSC Symposium with 3rd VoicePrivacy Challenge Workshop (Satellite event of Interspeech)

4th SPSC Symposium

with

3rd VoicePrivacy Challenge Workshop

Call for Papers


Speech is becoming an increasingly important means of human-machine interaction, with many deployments in biometrics, forensics and, above all, access to information via virtual voice assistants. Alongside these developments, the need for robust and secure algorithms and applications that protect users' security and privacy has come to the forefront of speech-based research and development.


The fourth edition of the Symposium on Security and Privacy in Speech Communication, combined this year with the VoicePrivacy Challenge, focuses on the speech and voice through which we express ourselves. Since speech communication can be used to command virtual assistants, to convey emotions or to identify ourselves, the symposium addresses the question of how to strengthen the security and privacy of voice representations in user-centred human/machine interaction. The symposium therefore recognises that interdisciplinary exchange is in high demand and aims to bring together researchers and practitioners from several disciplines, specifically: signal processing, cryptography, security, human-computer interaction, law and anthropology.


The VoicePrivacy initiative is spearheading efforts to develop privacy-preserving solutions for speech technology. It aims to consolidate the newly formed community, to develop the task and metrics, and to evaluate progress in anonymisation solutions using common datasets, protocols and metrics. VoicePrivacy takes the form of a competitive challenge. As in previous editions of the VoicePrivacy Challenge, the current edition focuses on voice anonymisation. Participants must develop anonymisation systems that remove the speaker's identity while keeping the content and paralinguistic attributes intact. This edition focuses on preserving emotional state, which is the key paralinguistic attribute in many real-world applications of voice anonymisation. All participants are encouraged to submit papers related to their participation in the challenge to the SPSC symposium, as well as other scientific papers related to speaker anonymisation and voice privacy. More details can be found on the VoicePrivacy Challenge web page: https://www.voiceprivacychallenge.org/


In order to strengthen the efforts for both events, facilitate joint discussions and extend interdisciplinary exchanges, we have decided to join our teams and organise a common event. For the general symposium, we accept contributions on related topics, as well as progress reports, project dissemination, theoretical discussions and work in progress. In addition, guests from academia, industry and public institutions, as well as interested students, are invited to attend the conference without having to make their own contribution. All accepted submissions will appear in the symposium proceedings published in the ISCA archive.


SPSC TOPICS

Technical perspectives include (but are not limited to):

  • Privacy-preserving speech communication
    • Speech recognition and processing
    • Speech perception, production and acquisition
    • Speech synthesis
    • Speech coding and enhancement
    • Speaker and language identification
    • Phonetics, phonology and prosody
    • Paralinguistics
  • Cybersecurity
    • Privacy engineering and secure computation
    • Network security and adversarial robustness
    • Mobile security
    • Cryptography
    • Biometrics
  • Machine learning
    • Federated learning
    • Disentangled representations
    • Differential privacy
    • Distributed learning
  • Natural language processing
    • The Web as corpus and resources
    • Tagging, parsing and analysis of documents
    • Discourse and pragmatics
    • Machine translation
    • Linguistic theories and psycholinguistics
    • Semantic inference and information extraction
  • Human-machine interfaces (speech as a medium)
    • Usable security and privacy
    • Ubiquitous computing
    • Pervasive computing and communication
    • Cognitive science

Humanities and social perspectives include (but are not limited to):

  • Ethics and law
    • Privacy and data protection
    • Media and communication
    • Identity management
    • Mobile e-commerce
    • Data in digital media
  • Digital humanities
    • Acceptance and trust studies
    • User experience research in practice
    • Interdisciplinary co-development
    • Data citizenship
    • Future studies
    • Situated ethics
    • STS perspectives



Submission:

Papers for the SPSC Symposium should contain up to eight pages of text. The length should be chosen appropriately to present the topic to an interdisciplinary community. Paper submissions must conform to the format defined in the paper preparation guidelines and as detailed in the author's kit. Papers must be submitted via the online paper submission system through the link on the SPSC website. The working language of the conference is English, and papers must be written in English. All accepted papers will be published in the ISCA archive alongside Interspeech papers and associated ISCA workshops.


Reviews:

At least three double-blind reviews will be carried out, and we aim to obtain feedback from interdisciplinary experts for each submission. For contributions to the VoicePrivacy Challenge, the review will focus on system descriptions and results.


Important dates:

Long paper submission deadline (up to 8 pages, excluding references): 15 June 2024
Short paper submission deadline (up to 4 pages, including references): 15 June 2024
VoicePrivacy Challenge paper submission deadline (4-6 pages, excluding references): 15 June 2024
VoicePrivacy Challenge results and system descriptions: 15 June 2024
Author notification (challenge papers): 5 July 2024
Author notification (long and short papers): 30 July 2024
Final (camera-ready) paper submission: 15 August 2024
Symposium: 6 September 2024


Venue:

The Symposium venue will be announced soon; we plan to co-locate it with Interspeech 2024. Hybrid participation is possible.


Back  Top

3-3-33(2024-09-06) VoicePrivacy 2024 Challenge, Kos Island, Greece

*******************************************

VoicePrivacy 2024 Challenge

http://www.voiceprivacychallenge.org

  • Paper and results submission deadline: 15th June 2024

  • Workshop (Kos Island, Greece in conjunction with INTERSPEECH 2024): 6th September 2024

*******************************************

Dear colleagues,

The challenge task is to develop a voice anonymization system for speech data which conceals the speaker’s voice identity while protecting linguistic content and emotional states.

Registration is still open. We have released 4 new baselines that offer greater privacy protection, and the final list of data and pretrained models allowed to build and train your own anonymization system.

Please find more information in the updated VoicePrivacy 2024 Challenge Evaluation Plan: https://www.voiceprivacychallenge.org/docs/VoicePrivacy_2024_Eval_Plan_v2.0.pdf

VoicePrivacy 2024 is the third edition, which will culminate in a joint workshop held on Kos Island, Greece, in conjunction with INTERSPEECH 2024 and in cooperation with the Fourth ISCA Symposium on Security and Privacy in Speech Communication.

Registration:

Participants are requested to register for the evaluation. Registration should be performed once only for each participating entity using the following form: Registration. You will receive a confirmation email within ~24 hours after successful registration; otherwise, or in case of any questions, please contact the organizers: organisers@lists.voiceprivacychallenge.org

Subscription:

To stay up to date with VoicePrivacy, please join

VoicePrivacy - Google Groups and VoicePrivacy (@VoicePrivacy) on X.

Sponsor:

Nijta

----------- 

Best regards,

The VoicePrivacy 2024 Challenge Organizers,

Pierre Champion - Inria, France

Nicholas Evans - EURECOM, France

Sarina Meyer - University of Stuttgart, Germany

Xiaoxiao Miao - Singapore Institute of Technology, Singapore

Michele Panariello - EURECOM, France

Massimiliano Todisco - EURECOM, France

Natalia Tomashenko - Inria, France

Emmanuel Vincent - Inria, France

Xin Wang - NII, Japan

Junichi Yamagishi - NII, Japan





 

Back  Top

3-3-34(2024-09-09) Cf Labs Proposals @CLEF 2024, Grenoble, France

Call for Labs Proposals @CLEF 2024

Now in its 25th edition, the Conference and Labs of the Evaluation Forum (CLEF) continues the very successful series of evaluation campaigns of the Cross Language Evaluation Forum (CLEF), which ran between 2000 and 2009 and established a framework for the systematic evaluation of information access systems, primarily through experimentation on shared tasks. As a leading annual international conference, CLEF uniquely combines evaluation laboratories and workshops with research presentations, panels, posters and demo sessions. In 2024, CLEF takes place on September 9-12 at the University of Grenoble Alpes, France.

Researchers and practitioners from all areas of information access and related communities are invited to submit proposals for running evaluation labs as part of CLEF 2024. Proposals will be reviewed by a lab selection committee, composed of researchers with extensive experience in evaluating information retrieval and extraction systems. Organisers of selected proposals will be invited to include their lab in the CLEF 2024 labs programme, possibly subject to suggested modifications to their proposal to better suit the CLEF lab workflow or timeline.

Background

The CLEF Initiative (http://www.clef-initiative.eu/) is a self-organised body whose main mission is to promote research, innovation, and development of information access systems with an emphasis on multilingual information in different modalities - including text and multimedia - with various levels of structure. CLEF promotes research and development by providing an infrastructure for:

  1. independent evaluation of information access systems;

  2. investigation of the use of unstructured, semi-structured, highly-structured, and semantically enriched data in information access; 

  3. creation of reusable test collections for benchmarking; 

  4. exploration of new evaluation methodologies and innovative ways of using experimental data; 

  5. discussion of results, comparison of approaches, exchange of ideas, and transfer of knowledge.

Scope of CLEF Labs

We invite submission of proposals for two types of labs:

  1. “Campaign-style” Evaluation Labs for specific information access problems (during the twelve months period preceding the conference), similar in nature to the traditional CLEF campaign “tracks”. Topics covered by campaign-style labs can be inspired by any information access-related domain or task.

  2. Labs that follow a more classical “workshop” pattern, exploring evaluation methodology, metrics, processes, etc. in information access and closely related fields, such as natural language processing, machine translation, and human-computer interaction.

We highly recommend organisers new to the CLEF format of shared task evaluation campaigns to first consider organising a lab workshop to discuss the format of their proposed task, the problem space and practicalities of the shared task. The CLEF 2024 programme will reserve about half of the conference schedule for lab sessions. During the conference, the lab organisers will present their overall results in overview presentations during the plenary scientific paper sessions to give non-participants insights into where the research frontiers are moving. During the conference, lab organisers are expected to organise separate sessions for their lab with ample time for general discussion and engagement with all participants - not just those presenting campaign results and papers. Organisers should plan time in their sessions for activities such as panels, demos, poster sessions, etc. as appropriate. CLEF is always interested in receiving and facilitating innovative lab proposals. 

Potential task proposers unsure of the suitability of their task proposal or its format for inclusion at CLEF are encouraged to contact the CLEF 2024 Lab Organizing Committee Chairs to discuss its suitability or design at an early stage.

Proposal Submission

Lab proposals must provide sufficient information to judge the relevance, timeliness, scientific quality, benefits for the research community, and the competence of the proposers to coordinate the lab. Each lab proposal should identify one or more organisers as responsible for ensuring the timely execution of the lab. Proposals should be 3 to 4 pages long and should provide the following information:

  1. Title of the proposed lab.
     

  2. A brief description of the lab topic and goals, its relevance to CLEF and the significance for the field.
     

  3. A brief and clear statement on usage scenarios and domain to which the activity is intended to contribute, including the evaluation setup and metrics.
     

  4. Details on the lab organiser(s), including identifying the task chair(s) responsible for ensuring the running of the task. This should include details of any previous involvement in organising or participating in evaluation tasks at CLEF or similar campaigns.
     

  5. The planned format of the lab, i.e., campaign-style (“track”) or workshop.
     

  6. Is the lab a continuation of an activity from previous year(s) or a new activity?  

    a. For activities continued from previous year(s): Statistics from previous years (number of participants/runs for each task), a clear statement on why another edition is needed, an explicit listing of the changes proposed, and a discussion of lessons to be learned or insights to be made.

    b. For new activities: A statement on why a new evaluation campaign is needed and how the community would benefit from the activity.
     

  7. Details of the expected target audience, i.e., who do you expect to participate in the task(s), and how do you propose to reach them.
     

  8. Brief details of tasks to be carried out in the lab. The proposal should clearly motivate the need for each of the proposed tasks and provide evidence of its capability of attracting enough participation. The dataset which will be adopted by the Lab needs to be described and motivated in the perspective of the goals of the Labs; also indications on how the dataset will be shared are useful. It is fine for a lab to have a single task, but labs often contain multiple closely related tasks, needing a strong motivation for more than 3 tasks, to avoid useless fragmentation.
     

  9. Expected length of the lab session at the conference: half-day, one day, two days. This should include high-level details of planned structure of the session, e.g. participant presentations, invited speaker(s), panels, etc., to justify the requested session length.
     

  10. Arrangements for the organisation of the lab campaign: who will be responsible for activities within the task; how will data be acquired or created, what tools or methods will be used, e.g., how will necessary queries be created or relevance assessment carried out; any other information which is relevant to the conduct of your lab.
     

  11. If the lab proposes to set up a steering committee to oversee and advise its activities, include names, addresses, and homepage links of people you propose to be involved.

Lab proposals must be submitted at the following address:

https://easychair.org/conferences/?conf=clef2024

choosing the “CLEF 2024 Lab Proposals” track.

Reviewing Process

Each submitted proposal will be reviewed by the CLEF 2024 Lab Organizing Committee. The acceptance decision will be sent by email to the responsible organiser by 28 July 2023. The final length of the lab session at the conference will be determined based on the overall organisation of the conference and the number of participant submissions received by a lab.

 

Advertising Labs at CLEF 2023 and ECIR 2024

Organisers of accepted labs are expected to advertise their labs at both CLEF 2023 (18-21 September 2023, Thessaloniki, Greece) and ECIR 2024 (24-28 March 2024, Glasgow, Scotland). So, at least one lab representative should attend these events.

Advertising at CLEF 2023 will consist of displaying a poster describing the new lab, running a break-out session to discuss the lab with prospective participants, and advertising/announcing it during the closing session.

Advertising at ECIR 2024 will consist of submitting a lab description to be included in ECIR 2024 proceedings (11 October 2023) and advertising the lab in a booster session during ECIR 2024.

Mentorship Program for Lab Proposals from newcomers

CLEF 2019 introduced a mentorship program to support the preparation of lab proposals for newcomers to CLEF. The program will be continued at CLEF 2024 and we encourage newcomers to refer to Friedberg et al. (2015) for initial guidance on preparing their proposal:

Friedberg I, Wass MN, Mooney SD, Radivojac P. Ten simple rules for a community computational challenge. PLoS Comput Biol. 2015 Apr 23;11(4):e1004150.

The CLEF newcomers mentoring program offers help, guidance, and feedback on the writing of your draft lab proposal by assigning a mentor to you, who will help you prepare and mature the lab proposal for submission. If your lab proposal falls into the scope of an already existing CLEF lab, the mentor will help you get in touch with those lab organisers and join forces.

Lab proposals for mentorship must be submitted at the following address:

https://easychair.org/conferences/?conf=clef2024

choosing the “CLEF 2024 Lab Mentorship” track.

Important Dates

  • 29 May 2023: Requests for mentorship submission (only newcomers)

  • 29 May 2023 - 16 June 2023: Mentorship period

  • 7 July 2023: Lab proposals submission (newcomers and veterans)

  • 28 July 2023: Notification of lab acceptance

  • 18-21 Sep 2023: Advertising Accepted Labs at CLEF 2023, Thessaloniki, Greece

  • 11 October 2023: Submission of short lab description for ECIR 2024

  • 13 November 2023: Lab registration opens

  • 24-28 March 2024: Advertising labs at ECIR 2024, Glasgow, UK

CLEF 2024 Lab Chairs

  • Petra Galuscakova, University of Stavanger, Norway

  • Alba García Seco de Herrera, University of Essex, UK

CLEF 2024 Lab Mentorship Chair

  • Liana Ermakova, Université de Bretagne Occidentale, France

  • Florina Piroi, TU Wien, Austria

Back  Top

3-3-35(2024-09-09) The CLEF Cross Language Image Retrieval Track, Grenoble, France
** Call for Participation **
 
As part of the ImageCLEF2024 Lab - https://www.imageclef.org/ (The CLEF Cross Language Image Retrieval Track), which is a part of the 15th edition of CLEF 2024 (https://clef2024.imag.fr/), scheduled to take place from September 9 to 12, 2024, in Grenoble, we are pleased to introduce the first edition of the ToPicto task.
 
The goal of ToPicto is to bring together the scientific community (linguists, computer scientists, translators, etc.) to develop new methods for translating either speech or text into a corresponding sequence of pictograms.
 
We propose two distinct tasks:
- Text-to-Picto focuses on the automatic generation of a sequence of terms (each associated with an ARASAAC pictogram - https://arasaac.org/) from a French text. This challenge can be seen as a translation problem, where the source language is French, and the target language corresponds to the terms associated with each French pictogram.
- Speech-to-Picto aims to translate an audio segment into a sequence of terms, each associated with an ARASAAC pictogram. The challenge here lies in the absence of any textual data as input.
 
More information is available here: https://www.imageclef.org/2023/topicto
The training data has just been made public; it's your turn to engage!
 
To participate, follow the instructions provided here: https://www.imageclef.org/2024#registration.
 
Registrations for the tasks are now open:
- Text-to-Picto: https://ai4media-bench.aimultimedialab.ro/competitions/18/
- Speech-to-Picto: https://ai4media-bench.aimultimedialab.ro/competitions/19/
 
Important dates:
- 22.04.2024 registration closes for all ImageCLEF tasks
- 01.04.2024 Test data release starts
- 01.05.2024 Deadline for submitting the participants runs
- 13.05.2024 Release of the processed results by the task organizers
- 31.05.2024 Deadline for submission of working notes papers by the participants
- 21.06.2024 Notification of acceptance of the working notes papers
- 08.07.2024 Camera ready working notes papers
- 09-12.09.2024 CLEF 2024, Grenoble, France
Back  Top

3-3-36(2024-09-18) CfP Special Session on 'Interactive Video Retrieval for Beginners (IVR4B)' @ CBMI 2024, Reykjavik, Iceland.

Call for Papers: Special Session on 'Interactive Video Retrieval for Beginners (IVR4B)' at CBMI 2024

https://cbmi2024.org/?page_id=100#IVR4B

21st International Conference on Content-Based Multimedia Indexing (CBMI 2024).

18-20 September 2024, Reykjavik, Iceland - https://cbmi2024.org/

Despite advances in automated content description using deep learning and the emergence of joint image-text embedding models, many video retrieval tasks still require a human user in the loop. Interactive video retrieval (IVR) systems address these challenges. In order to evaluate their performance, multimedia retrieval benchmarks such as the Video Browser Showdown (VBS) or the Lifelog Search Challenge (LSC) have been established. These benchmarks provide large-scale datasets together with task settings and evaluation protocols, making it possible to measure progress in research on IVR systems. However, in order to obtain the best possible performance from the participating systems, they are usually operated by members of the development team. This special session aims to provide better insights into how such systems can be used by users with a solid background in computer science who are not familiar with the details of the system.
The submitted retrieval systems will be presented as demos (with an accompanying poster) and will take part in a competition for novices. Volunteer participants, who are not affiliated with the development team of any participating IVR system but who have seen the systems during the demo session, will use them to solve a small number of Video Browser Showdown tasks.
Important dates:

Paper submission: 5 April 2024
Notification of acceptance: 3 June 2024
CBMI conference: 18-20 September 2024

Organisers:
• Werner Bailer, JOANNEUM RESEARCH, Austria
• Cathal Gurrin, Dublin City University (DCU), Ireland
• Björn Þór Jónsson, Reykjavik University, Iceland
• Klaus Schöffmann, University of Klagenfurt, Austria

Back  Top

3-3-37(2024-09-18) CfP Special Session on 'Multimedia Indexing for eXtended Reality' at CBMI 2024, Reykjavik, Iceland

Call for Papers: Special Session on 'Multimedia Indexing for eXtended Reality' at CBMI 2024

https://cbmi2024.org/?page_id=100#MmIXR

21st International Conference on Content-based Multimedia Indexing (CBMI 2024).
18-20 September 2024, Reykjavik, Iceland - https://cbmi2024.org/

DESCRIPTION:
Extended Reality (XR) applications rely not only on computer vision for navigation and object placement but also require a range of multimodal methods to understand the scene or assign semantics to objects being captured and reconstructed. Multimedia indexing for XR thus encompasses methods for processes during XR authoring, such as indexing content to be used for scene and object reconstruction, as well as during the immersive experience, such as object detection and scene segmentation.
The intrinsic multimodality of XR applications involves new challenges like the analysis of egocentric data (video, depth, gaze, head/hand motion) and their interplay. XR is also applied in diverse domains, e.g., manufacturing, medicine, education, and entertainment, each with distinct requirements and data. Thus, multimedia indexing methods must be capable of adapting to the relevant semantics of the particular application domain.

TOPICS OF INTEREST:

  • Multimedia analysis for media mining, adaptation (to scene requirements), and description for use in XR experiences (including but not limited to AI-based approaches)

  • Processing of egocentric multimedia datasets and streams for XR (e.g., egocentric video and gaze analysis, active object detection, video diarization/summarization/captioning)

  • Cross- and multi-modal integration of XR modalities (video, depth, audio, gaze, hand/head movements, etc.)

  • Approaches for adapting multimedia analysis and indexing methods to new application domains (e.g., open-world/open-vocabulary recognition/detection/segmentation, few-shot learning)

  • Large-scale analysis and retrieval of 3D asset collections (e.g., objects, scenes, avatars, motion capture recordings)

  • Multimodal datasets for scene understanding for XR

  • Generative AI and foundation models for multimedia indexing and/or synthetic data generation

  • Combining synthetic and real data for improving scene understanding

  • Optimized multimedia content processing for real-time and low-latency XR applications

  • Privacy and security aspects and mitigations for XR multimedia content

     

IMPORTANT DATES:
Submission of papers: 22 March 2024
Notification of acceptance: 3 June 2024
CBMI conference: 18-20 September 2024

SUBMISSION:
The session will be organized as an oral presentation session. The contributions to this session will be long papers describing novel methods or their adaptation to specific applications or short papers describing emerging work or open challenges.

SPECIAL SESSION ORGANISERS:
Fabio Carrara, Artificial Intelligence for Multimedia and Humanities Laboratory, ISTI-CNR, Pisa, Italy

Werner Bailer, Intelligent Vision Applications Group, JOANNEUM RESEARCH, Graz, Austria

Lyndon J. B. Nixon, MODUL Technology GmbH and Applied Data Science School at MODUL University, Vienna, Austria

Vasileios Mezaris, Information Technologies Institute / Centre for Research and Technology Hellas, Thessaloniki, Greece

Back  Top

3-3-38(2024-09-18) CfP Special Session on 'Multimodal Insights for Disaster Risk Management and Applications, (MIDRA)' at CBMI 2024, Reykjavik, Iceland

Call for Papers: Special Session on 'Multimodal Insights for Disaster Risk Management and Applications (MIDRA)' at CBMI 2024

https://cbmi2024.org/?page_id=100#MIDRA

21st International Conference on Content-based Multimedia Indexing (CBMI 2024).
18-20 September 2024, Reykjavik, Iceland - 
https://cbmi2024.org/

Disaster management, in all its phases from preparedness and prevention to response and recovery, draws on an abundance of multimedia data, including valuable assets such as satellite images, videos from UAVs or static cameras, and social media streams. Such multimedia data is valuable for operational purposes not only to civil protection agencies but also to the private sector that quantifies risk. Indexing data from crisis events for effective analysis and retrieval presents Big Data challenges due to its variety, velocity, volume and veracity.

The advent of deep learning and multimodal data fusion offers an unprecedented opportunity to overcome these challenges and fully unlock the potential of disaster event multimedia data. Through the strategic utilization of different data modalities, researchers can significantly enhance the value of these datasets, uncovering insights that were previously beyond reach, giving actionable information and supporting real-life decision-making procedures.

This special session actively seeks research papers in the domain of multimodal analytics and their applications in the context of crisis event monitoring through knowledge extraction and multimedia understanding. Emphasis is placed on recognizing the intrinsic value of spatial information when integrated with other data modalities.

The special session serves as a collaborative platform for communities focused on specific crisis events, such as forest fires, volcano unrest or eruption, earthquakes, floods, tsunamis and extreme weather events, which have increased significantly due to the climate crisis in our era. It fosters the exchange of ideas, methodologies, and software tailored to address challenges in these domains, aiming to encourage fruitful collaborations and the mutual enrichment of insights and expertise among diverse communities.

This special session includes presentation of novel research within the following domains:

  • Lifelog computing
  • Urban computing
  • Satellite computing and earth observation
  • Multimodal data fusion
  • Social media

Within these domains, the topics of interest include (but are not restricted to):

  • Multimodal analytics and retrieval techniques for crisis event multimedia data.
  • Deep learning and neural networks for interpretability, understanding, and explainability in artificial intelligence applied to natural disasters.
  • Satellite image analysis and fusion with in-situ data for crisis management.
  • Integration of multimodal data for comprehensive risk assessment.
  • Application of deep learning techniques to derive insights for risk mitigation.
  • Development of interpretative models for better understanding of risk factors.
  • Utilization of diverse data modalities (text, images, sensors) for risk management.
  • Implementation of multimodal analytics in predicting and managing natural disasters.
  • Application of multimodal insights in insurance risk assessment.
  • Enhanced decision-making through the fusion of geospatial and multimedia data.

Important Dates:
Submission of papers: 22 March 2024
Notification of acceptance: 3 June 2024
CBMI conference: 18-20 September 2024

Organisers:

  • Maria Pegia, Information Technologies Institute / Centre for Research and Technology Hellas, Greece.
  • Ilias Gialampoukidis, Information Technologies Institute / Centre for Research and Technology Hellas, Greece.
  • Ioannis Papoutsis, National Observatory of Athens & National Technical University of Athens, Greece.
  • Krishna Chandramouli, Venaka Treleaf GbR, Germany.
  • Stefanos Vrochidis, Information Technologies Institute / Centre for Research and Technology Hellas, Greece.

Please direct correspondence to midra@cbmi2024.org

Back  Top

3-3-39(2024-09-18) Special Session on 'Explainability in Multimedia Analysis' (ExMA)@ CBMI 2024, Reykjavik, Iceland

The 21st International Conference on Content-based Multimedia Indexing (CBMI 2024) will be held in Reykjavik, Iceland next September 18-20: https://cbmi2024.org/

The conference will bring together leading experts from academia and industry interested in the broad field of content-based multimedia indexing and applications.

The Special Session on 'Explainability in Multimedia Analysis' (ExMA) addresses the analysis of multimedia in applications such as person detection/tracking, face recognition or lifelog analysis, which may involve sensitive personal information. This raises both legal issues, e.g. concerning data protection and the ongoing European AI regulation, and ethical issues related to potential bias in the systems or misuse of these technologies. This special session focuses on AI-based explainability technologies in multimedia analysis.

The conference CBMI’2024 is supported by ACM SIGMM and the proceedings will be available at ACM Digital Library.

We would like to invite you to consider contributing a paper to this special session.

CBMI's important dates: https://cbmi2024.org/?page_id=211

Looking forward to seeing you at CBMI 2024.
With best regards,
Chiara Galdi

Special session organisers: Chiara Galdi, Martin Winter, Romain Giot, Romain Bourqui

Back  Top

3-3-40(2024-09-18) Special Session on' Content based Indexing for audio and music: from analysis to synthesis' @ CBMI 2024 , Reykjavik, Iceland.

The 21st International Conference on Content-based Multimedia Indexing (CBMI 2024) takes place September 18-20 in Reykjavik, Iceland.


We are delighted to have, as part of the conference, a Special Session on Audio entitled: Content based Indexing for audio and music: from analysis to synthesis 


Abstract: Audio has long been a key component of multimedia research. As far as indexing is concerned, the research and industrial context has changed drastically over the last 20 years or so. Today, applications of audio indexing range from karaoke applications to singing voice synthesis and creative audio design. This special session aims to bring together researchers proposing new tools or paradigms for audio and music processing in the context of indexing and corpus-based generation.


You are kindly encouraged to submit a paper related to the topic of the special session according to the CBMI guidelines : 

  • Regular full papers: 6 pages, plus additional pages for the list of references

  • Regular short papers: 4 pages, plus additional pages for the list of references


Important dates

  • March 22: Regular and special session paper submissions

  • June 3: Notification of acceptance 

  • Early July: Camera ready version of accepted papers



As of now, we already have 3 invited talks addressing the following topics : 

  • Cynthia C. S. Liem, Doğa Taşcılar, and Andrew M. Demetriou A quest through interconnected datasets: lessons from highly-cited ICASSP papers

  • Rémi Mignot, Geoffroy Peeters Learning invariance to sound modifications for music indexing and alignment

  • Cyrus Vahidi Large-scale music indexing for multimodal similarity search


Please join us in Reykjavik !!


Kindly yours,

François Pachet and Mathieu Lagrange

contact us: mathieu lagrange ls2n fr


Back  Top

3-3-41(2024-09-18)The 21st International Conference on Content-Based Multimedia Indexing — CBMI 2024, Reykjavik, Iceland

 

Last Call for Papers (with Final Deadline Extension) for the

21st International Conference on Content-Based Multimedia Indexing — CBMI 2024

September 18 – 20, 2024 in Reykjavik, Iceland

 

**** The CBMI 2024 submission deadline has been extended to April 12, 2024

**** The conference proceedings will be published by IEEE

 

After successful editions across Europe in France, Austria, Italy, UK, Czech Republic, and Hungary, the Content-Based Multimedia Indexing (CBMI) conference will take place in Reykjavík, Iceland this coming September 2024. CBMI aims at bringing together the various communities involved in all aspects of content-based multimedia indexing for retrieval, browsing, management, visualisation and analytics. We encourage contributions both on theoretical aspects and applications of CBMI in the new era of Artificial Intelligence.  Authors are invited to submit previously unpublished research papers highlighting significant contributions addressing these topics. In addition, special sessions on specific technical aspects or application domains are planned. 

 

Conference Website: http://cbmi2024.org/

 

The conference proceedings will be published by IEEE. Authors can submit full papers (6 pages + references), short papers (4 pages + references), special session papers (6 pages + references) and demonstration proposals (4 pages + 1 page demonstration description + references). Authors of high-quality papers accepted to the conference may be invited to submit extended versions of their contributions to a special journal issue in MTAP. Submissions to CBMI are peer reviewed in a single blind process. All types of papers must use the IEEE templates at https://www.ieee.org/conferences/publishing/templates.html. The language of the conference is English.

 

CBMI 2024 proposes eight special sessions:

  • AIMHDA: Advances in AI-Driven Medical and Health Data Analysis
  • Content-Based Indexing for Audio and Music: From Analysis to Synthesis
  • ExMA: Explainability in Multimedia Analysis
  • IVR4B: Interactive Video Retrieval for Beginners
  • MIDRA: Multimodal Insights for Disaster Risk Management and Applications
  • MmIXR: Multimedia Indexing for XR
  • Multimedia Analysis and Simulations for Digital Twins in the Construction Domain
  • Multimodal Data Analysis for Understanding of Human Behaviour, Emotions and their Reasons

 

Submission Deadlines

  • Full and short research papers are due April 12, 2024
  • Special session papers are due April 12, 2024
  • Demonstration submissions are due April 26, 2024

 

CBMI 2024 seeks contributions on the following research topics:

 

Multimedia Content Analysis and Indexing:

  • Media content analysis and mining
  • AI/ML approaches for content understanding
  • Multimodal and cross-modal indexing
  • Activity recognition and event-based multimedia indexing and retrieval 
  • Multimedia information retrieval (image, audio, video, text)
  • Conversational search and question-answering systems
  • Multimedia recommendation
  • Multimodal analytics, summarization, visualisation, organisation and browsing of multimedia content
  • Multimedia verification (e.g., multimodal fact-checking, deep fake analysis)
  • Large multimedia models, large language models and vision language models
  • Explainability in multimedia learning
  • Large scale multimedia database management
  • Evaluation and benchmarking of multimedia retrieval systems

 

Multimedia User Experiences:

  • Extended reality (AR/VR/MR) interfaces
  • Mobile interfaces
  • Presentation and visualisation tools
  • Affective adaptation and personalization
  • Relevance feedback and interactive learning

 

Applications of Multimedia Indexing and Retrieval:

  • Multimedia and sustainability
  • Healthcare and medical applications
  • Cultural heritage and entertainment applications
  • Educational and social applications
  • Egocentric, wearable and personal multimedia
  • Applications to forensics, surveillance and security
  • Environmental and urban multimedia applications
  • Earth observation and astrophysics

 

On behalf of the CBMI 2024 organisers,

Björn



—————— 
Björn Þór Jónsson (bjorn@ru.is)
Professor
Department of Computer Science
Reykjavik University (http://www.ru.is/)
Iceland

Back  Top

3-3-42(2024-09-20) 6th Int. Wkshop on the History of Speech Communication Research, Budapest, Hungary

Sixth International Workshop on the History of Speech Communication Research

September 20–21, 2024, Budapest

 

After highly popular sessions at ICPhS in Prague this year and an exceptional workshop „Lacerda 120” in Porto last year, we are happy to announce that the next HSCR workshop will take place in Budapest next year on Sept 20 and 21, organised by Judit Bóna and Mária Gósy of the Department of Applied Linguistics and Phonetics of ELTE University. The manuscript submission deadline is May 15, 2024. All details can be found at the workshop website: https://hscr2024.elte.hu/

The aim of this workshop is to bring scholars together who study the history of speech science to learn more on the methods, findings and results of our predecessors and to better understand the speech research community’s present achievements.

Speech has been investigated from different perspectives, which necessitates a range of approaches and scientific methods. Previous contributions analyzed the contextual background of individual researchers, investigated how specific research practices developed over time, examined the various kinds of approach of researchers to their material and the link between the form and the meaning in speech communication research.

The special focus of the 6th HSCR workshop will be on the development of specific fields of speech communication, such as emerging phonology, progress in the analysis of both speech sounds and prosody, speech technology, the growing body of psycholinguistics, sociophonetics, clinical phonetics, etc. Researchers are encouraged to dig deep into history to trace the early steps and advancement of these specific fields of speech communication. The knowledge of our predecessors is frequently unknown, forgotten or ignored for various reasons, and thus past attainments are not appropriately integrated into our common consciousness regarding speech science.

As always, contributions on other topics from the history of speech communication research will also be welcome. The facts uncovered about the phonetic endeavour throughout the history of speech science may strongly inspire present-day research.

Manuscripts should be sent to the email address of the workshop: hscr2024@gmail.com. Please, use the templates for your paper.

The proceedings will be published in the book series Studientexte zur Sprachkommunikation at TUDpress (Technical University Dresden). The HSCR proceedings will be published in print and also stored electronically in the ISCA archive.

For any inquiries, please use the workshop email address: hscr2024@gmail.com

Back  Top

3-3-43(2024-09-25) Second international multimodal communication symposium (MMSYM 2024), Goethe University, Frankfurt, Germany,

 

We are pleased to announce that the Second International Multimodal Communication Symposium (MMSYM 2024) will take place at Goethe University Frankfurt, Germany, on September 25-27, 2024!
Check the MMSYM website for more information and to stay up-to-date: http://mmsym.org
 
The Call for Papers for MMSYM 2024 is attached to this email, and we invite you to submit abstracts of your multimodal work to the conference! MMSYM 2024 emphasizes the following three main research themes: (1) gesture-speech integration, in particular the prosody-gesture link; (2) formal, automatic and machine-learning approaches to multimodality; and (3) psycholinguistic approaches in multimodal settings.
 
Abstracts can be submitted until March 8, 2024 via OpenReview. Please find more information about abstract submission, templates and guidelines on the MMSYM website.
 
Back  Top

3-3-44(2024-11-04)) Cf Wkshps, Special sessions and Grand Challenge @ICMI, Costa Rica
We are delighted to inform you that ICMI 2024 will be hosted in Latin America, specifically Costa Rica. The International Conference on Multimodal Interaction (ICMI) is the premier global platform for multidisciplinary research about multimodal human-human and human-computer interaction, interfaces, and system development. We extend an invitation to teams for the submission of proposals for the following components: 
 
- Workshops: deadline February 5th, 2024.
- Special Sessions: deadline February 2nd, 2024.
- Grand Challenge: deadline February 5th, 2024.
 
Workshops
=========
ICMI has established a tradition of hosting workshops concurrently with the main conference to facilitate discourse on new research, technologies, social science models, and applications. Recent workshops include themes like Media Analytics for Societal Trends, International Workshop on Automated Assessment of Pain (AAP), Face and Gesture Analysis for Health Informatics, Generation and Evaluation of Non-verbal Behaviour for Embodied Agents, Bridging Social Sciences and AI for Understanding Child Behavior, and more.
 
Interested parties are invited to submit a 3-page workshop proposal for evaluation. Workshops may span half or a full day, with accepted papers indexed by the ACM Digital Library in an adjunct proceedings volume and a brief workshop summary published in the main conference proceedings. The workshop submission deadline is February 5th, 2024. Proposals should be emailed to the workshop chairs, Naveen Kumar and Hendrik Buschmeier, at icmi2024-workshop-chairs@acm.org. For additional details, please visit the conference website: https://icmi.acm.org/2024/call-for-workshops/ 
 
 
Special Sessions
================
Special Sessions are vital in exploring emerging topics within multimodal interaction, contributing significantly to this year's conference program. We invite proposals to enrich the conference's diversity and provide valuable insights into the overarching theme, 'Equitability and Environmental Sustainability in Multimodal Interaction Technologies.' Interested teams are requested to submit the following:
 
- Title of the special session: the title is designed to appeal to the ICMI community and be self-explanatory.
- Aims and scope, elucidating why the ICMI community should engage with this session.
- Tentative Speakers, comprising a list of potential contributing authors with provisional presentation titles. Special sessions typically include 4 to 6 peer-reviewed papers.
- Organizers and bios, emphasizing the relevance and experience of the speakers.
 
The deadline for Special Sessions submissions is February 2nd, 2024. Prospective organizers are encouraged to submit proposals via icmi2024-specialsession-chairs@acm.org. Further details can be found on the conference website: https://icmi.acm.org/2024/special-sessions/
 
 
Grand Challenge
=============== 
The ICMI community is keen on identifying optimal algorithms and their failure modes, which are crucial for developing systems capable of reliably interpreting human-human communication or responding to human input. We invite the ICMI community to define and address scientific Grand Challenges in our field, offering perspectives over the next five years as a collective. The ICMI Multimodal Grand Challenges aim to inspire innovative ideas and foster future collaborative endeavors in tasks such as analysis, synthesis, and interaction.
 
To participate, submit a 5-page proposal for expert evaluation, considering originality, ambition, feasibility, and implementation plans. Accepted proposals will be published in the conference's main proceedings. The Grand Challenge submission deadline is February 5th, 2024. Proposals should be emailed to both ICMI 2024 Multimodal Grand Challenge Chairs, Dr. Ronald Böck (Genie Enterprise) and Dr. Dinesh Babu JAYAGOPI (IIIT Bangalore), at icmi2024-challenge-chairs@acm.org. Additional information is available on the conference website: https://icmi.acm.org/2024/call-for-grand-challenge/ 
 
We look forward to your valuable contributions and participation in ICMI 2024.
 
On behalf of the Organizers of ICMI 2024!
Back  Top

3-3-45(2024-11-05) The 26th International Conference on Multimodal Interaction (ICMI 2024), San Jose, Costa Rica
We cordially invite you to submit papers for the main track of the 26th International Conference on Multimodal Interaction (ICMI 2024), which will be held in San José, Costa Rica. ICMI is the premier international forum that brings together multimodal artificial intelligence (AI) and social interaction research. Multimodal AI encompasses technical challenges in machine learning and computational modeling such as representations, fusion, data, and systems. The study of social interactions encompasses both human-human interactions and human-computer interactions. A unique aspect of ICMI is its multidisciplinary nature, which values both scientific discoveries and technical modeling achievements, with an eye towards impactful applications for the good of people and society.


https://icmi.acm.org/2024/call-for-papers/


Important Dates
  • Abstract deadline: April 26th, 2024
  • Paper submission: May 3rd, 2024
  • Rebuttal period: June 16th-23rd, 2024
  • Paper notification: July 18th, 2024
  • Camera-ready paper: August 16th, 2024
  • Presenting at main conference: November 5th-7th, 2024
 

Novelty will be assessed along two dimensions: scientific novelty and technical novelty. Accepted papers at ICMI 2024 will need to be novel along one of the two dimensions:

  • Scientific Novelty: Papers should bring new scientific knowledge about human social interactions, including human-computer interactions. For example, discovering new behavioral markers that are predictive of mental health or how new behavioral patterns relate to children’s interactions during learning. It is the responsibility of the authors to perform a proper literature review and clearly discuss the novelty in the scientific discoveries made in their paper.
  • Technical Novelty: Papers should propose novelty in their computational approach for recognizing, generating or modeling multimodal data. Examples include: novelty in the learning and prediction algorithms, in the neural architecture, or in the data representation. Novelty can also be associated with new usages of an existing approach.

Commitment to ethical conduct is required and submissions must adhere to ethical standards in particular when human-derived data are employed. Authors are encouraged to read the ACM Code of Ethics and Professional Conduct (https://ethics.acm.org/).

 
Theme
 

The theme of this year’s ICMI conference revolves around “Equitability and environmental sustainability in multimodal interaction technologies.” The focus is on exploring how multimodal systems and multimodal interactive applications can serve as tools to bridge the digital divide, particularly in underserved communities and countries, with a specific emphasis on those in Latin America and the Caribbean. The conference aims to delve into the design principles that can render multimodal systems more equitable and sustainable in applications such as health and education, thereby catalyzing positive transformations in development for historically marginalized groups, including racial/ethnic minorities and indigenous peoples. Moreover, there is a crucial exploration of the intersection between multimodal interaction technologies and environmental sustainability. This involves examining how these technologies can be crafted to comprehend, disseminate, and mitigate the adverse impacts of climate change, especially in the Latin America and Caribbean region. The conference endeavors to explore the potential of multimodal systems in fostering community resilience, raising awareness, and facilitating education related to climate change, thereby contributing to a holistic approach that encompasses both social and environmental dimensions.


Additional topics of interest include but are not limited to:

  • Affective computing and interaction
  • Cognitive modeling and multimodal interaction
  • Gesture, touch and haptics
  • Healthcare, assistive technologies
  • Human communication dynamics
  • Human-robot/agent multimodal interaction
  • Human-centered A.I. and ethics
  • Interaction with smart environment
  • Machine learning for multimodal interaction
  • Mobile multimodal systems
  • Multimodal behaviour generation
  • Multimodal datasets and validation
  • Multimodal dialogue modeling
  • Multimodal fusion and representation
  • Multimodal interactive applications
  • Novel multimodal datasets
  • Speech behaviours in social interaction
  • System components and multimodal platforms
  • Visual behaviours in social interaction
  • Virtual/augmented reality and multimodal interaction
Back  Top

3-3-46(2024-11-06) The 2nd International Conference on Foundation and Large Language Models (FLLM2024), Dubai, UAE

The 2nd International Conference on Foundation and Large Language Models (FLLM2024)

https://fllm2024.fllm-conference.org/index.php

26-29 November, 2024 | Dubai, UAE

Technically Co-Sponsored by IEEE UAE Section

FLLM 2024 CFP:

With the emergence of foundation models (FMs) and large language models (LLMs) that are trained on large amounts of data at scale and adaptable to a wide range of downstream applications, artificial intelligence is experiencing a paradigm revolution. BERT, T5, ChatGPT, GPT-4, Falcon 180B, Codex, DALL-E, Whisper, and CLIP now form the foundation for new applications ranging from computer vision to protein sequence analysis and from speech recognition to coding. Earlier models, by contrast, typically had to be trained from scratch for each new challenge. The capacity to experiment with, examine, and comprehend the capabilities and potential of next-generation FMs is critical to undertaking this research and guiding its path. Nevertheless, these models are currently largely inaccessible, as the resources required to train them are highly concentrated in industry, and even the assets (data, code) required to replicate their training are frequently not released due to industry demand. At the moment, mostly large tech companies such as OpenAI, Google, Facebook, and Baidu can afford to build FMs and LLMs. Despite the widely publicized use of FMs and LLMs, we still lack a comprehensive understanding of how they operate, why they underperform, and what they are even capable of, because of their emergent global qualities. To address these problems, we believe that much critical research on FMs and LLMs will necessitate extensive multidisciplinary collaboration, given their essentially social and technical structure.

The International Conference on Foundation and Large Language Models (FLLM) addresses the architectures, applications, challenges, approaches, and future directions. We invite the submission of original papers on all topics related to FLLMs, with special interest in but not limited to:

  •     Architectures and Systems
    • Transformers and Attention
    • Bidirectional Encoding
    • Autoregressive Models
    • Massive GPU Systems
    • Prompt Engineering
    • Multimodal LLMs
    • Fine-tuning
  •     Challenges
    • Hallucination
    • Cost of Creation and Training
    • Energy and Sustainability Issues
    • Integration
    • Safety and Trustworthiness
    • Interpretability
    • Fairness
    • Social Impact
  •     Future Directions
    • Generative AI
    • Explainability and EXplainable AI
    • Retrieval Augmented Generation (RAG)
    • Federated Learning for FLLM
    • Large Language Models Fine-Tuning on Graphs
    • Data Augmentation
  •     Natural Language Processing Applications
    • Generation
    • Summarization
    • Rewrite
    • Search
    • Question Answering
    • Language Comprehension and Complex Reasoning
    • Clustering and Classification
  •     Applications
    • Natural Language Processing
    • Communication Systems
    • Security and Privacy
    • Image Processing and Computer Vision
    • Life Sciences
    • Financial Systems

Submissions Guidelines and Proceedings

Manuscripts should be prepared in 10-point font using the IEEE 8.5' x 11' two-column format. All papers should be in PDF format and submitted electronically at the Paper Submission Link. A full paper can be up to 8 pages (including all figures, tables and references). Submitted papers must present original unpublished research that is not currently under review for any other conference or journal. Papers not following these guidelines may be rejected without review. Submissions received after the due date, exceeding the length limit, or not appropriately structured may also not be considered. Authors may contact the Program Chair for further information or clarification. All submissions are peer-reviewed by at least three reviewers. Accepted papers will appear in the FLLM proceedings, published by the IEEE Computer Society Conference Publishing Services and submitted to IEEE Xplore for inclusion. Regular paper submissions may be up to 8 pages and must follow the IEEE paper format. Please include up to 7 keywords, the complete postal and email address, and the fax and phone numbers of the corresponding author. Authors of accepted papers are expected to present their work at the conference. Submitted papers that are deemed of good quality but that could not be accepted as regular papers will be accepted as short papers.

Important Dates:

  • Paper submission deadline: June 30, 2024
  • Notification of acceptance: September 15, 2024
  • Camera-ready Submission: October 10, 2024

 

Contact:

Please send any inquiry on FLLM to: info@fllm-conference.org

 

 

 
Back  Top

3-3-47(2024-11-25) 26th International Conference on Speech and Computer (SPECOM-2024), Belgrade, Serbia

*******************************************************

SPECOM-2024 – FIRST CALL FOR PAPERS

*******************************************************

 

26th International Conference on Speech and Computer (SPECOM-2024)

November 25-28, 2024

Crowne Plaza hotel, Belgrade, Serbia

Web: https://specom2024.ftn.uns.ac.rs/

 

ORGANIZERS

The conference SPECOM-2024 is organized by the Faculty of Technical Sciences University of Novi Sad and the School of Electrical Engineering University of Belgrade in cooperation with the Telecommunications Society of Serbia

 

FOUNDERS

SPECOM series was founded by St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences (SPIIRAS) of the St. Petersburg Federal Research Center of the Russian Academy of Sciences (SPC RAS)

 

CONFERENCE TOPICS

SPECOM attracts researchers, linguists and engineers working in the following areas of speech science, speech technology, natural language processing, human-computer interaction:

  • Affective computing

  • Audio-visual speech processing

  • Corpus linguistics

  • Computational paralinguistics

  • Deep learning for audio processing

  • Feature extraction

  • Forensic speech investigations

  • Human-machine interaction

  • Language identification

  • Large language models

  • Multichannel signal processing

  • Multilingual speech technology

  • Multimedia processing

  • Multimodal analysis and synthesis

  • Natural language generation

  • Natural language understanding

  • Sign language processing

  • Speaker diarization

  • Speaker identification and verification

  • Speech and language resources

  • Speech analytics and audio mining

  • Speech and voice disorders

  • Speech-based applications

  • Speech driving systems in robotics

  • Speech enhancement

  • Speech perception

  • Speech recognition and understanding

  • Speech synthesis

  • Speech translation systems

  • Spoken dialogue systems

  • Spoken language processing

  • Text mining and sentiment analysis

  • Virtual and augmented reality

  • Voice assistants

 

SATELLITE EVENTS

26th International Conference SPECOM will be organized together with the 32nd Telecommunications Forum TELFOR-2024: https://www.telfor.rs/en/

 

OFFICIAL LANGUAGE

The official language of the event is English. However, papers on processing of languages other than English are strongly encouraged.

 

FORMAT OF THE CONFERENCE

The conference program will include presentation of invited talks, oral presentations, and poster/demonstration sessions.

 

SUBMISSION OF PAPERS

Authors are invited to submit full papers of 10-15 pages formatted in the Springer LNCS style. Each paper will be reviewed by at least three independent reviewers (single-blind), and accepted papers will be presented either orally or as posters. Papers submitted to SPECOM must not be under review by any other conference or publication during the SPECOM review cycle, and must not be previously published or accepted for publication elsewhere. The authors are invited to submit their papers using the on-line submission system: https://easychair.org/conferences/?conf=specom2024

 

DEADLINES

July 01, 2024 ....................... Submission of full papers

September 03, 2024 ........... Notification of acceptance/rejection

September 15, 2024 ........... Camera-ready papers

October 01, 2024 ................ Early registration

 

PROCEEDINGS

SPECOM Proceedings will be published by Springer as a book in the Lecture Notes in Artificial Intelligence (LNAI/LNCS) series listed in all major international citation databases.

 

GENERAL CHAIRS

Vlado DELIĆ – Faculty of Technical Sciences University of Novi Sad, Novi Sad, Serbia

Alexey KARPOV – SPIIRAS, SPC RAS, St. Petersburg, Russia

 

CONTACTS

All correspondence regarding the conference should be addressed to SPECOM-2024 Secretariat

E-mail: specom2024@uns.ac.rs

Web: https://specom2024.ftn.uns.ac.rs

Back  Top

3-3-48(2024-xx-xx) Fearless Steps APOLLO Workshop.

We are pleased to invite you to participate in the upcoming Fearless Steps APOLLO Workshop. Our workshop explores speech communication, technology, and the extensive audio archive of the historic NASA Apollo program.

 

The Fearless Steps APOLLO Community Resource, supported by NSF, is a unique and massive naturalistic communications resource. Derived from the Apollo missions, it offers a rare glimpse into team-based problem-solving in high-stakes environments, with a rich variety of speech and language data that is invaluable for researchers, scientists, historians, and technologists.

 

The Fearless Steps APOLLO corpus contains 30 time-synchronized channels, which capture all NASA Apollo team communications. The PAO (Public Affairs Officer) channel reflects all live public TV/radio broadcasts streamed by NASA during the missions; this channel is comparable to broadcast news corpora.

 

Our workshop will showcase featured speakers and panel discussions, and present the latest findings in speech and language processing. We will explore facets of the Fearless Steps APOLLO corpus, the largest publicly available naturalistic team-based historical audio and metadata resource.

 

 

Topics Covered:

 

We will be exploring several key areas, including:

 

1. Big Data Recovery and Deployment in the Fearless Steps APOLLO initiative.

2. Applications in Education, History, and Archival efforts.

3. Insights into Communication Science and Psychology, particularly in Group Dynamics and Team Cohesion.

4. Speech and Language Technology (SLT) development, including ASR, SAD, speaker recognition, and conversational topic detection. 

 

Workshop Structure:

 

1. Discuss advancements in digitizing Apollo audio and machine learning solutions for audio diarization.

2. Explore team communication dynamics through speech processing.

3. Explore the utility of Fearless Steps APOLLO resource for: SpchTech (Speech & Language Technology), CommSciPsychTeam (Communication Sciences & Team-based Psychology), & EducArchHist (Education, History, & Archival) communities.

4. The FEARLESS STEPS Challenge, a community engagement and data generation initiative.

The workshop will feature oral talks, including an overview of the Fearless Steps APOLLO resource and team presentations on systems evaluated on the Fearless Steps Challenge dataset.

 

 

Instructions for Authors:

 

We invite authors to submit a short 1-page research overview that involves the Fearless Steps APOLLO resource. Please submit your Abstracts through our dedicated portal.

The workshop format will include oral presentations for accepted abstracts, which will be announced after the submission deadline. Submissions in the form of 1-page abstracts (and an optional additional page for references, figures, or preliminary results) are encouraged. Detailed formatting instructions and sample PDFs are available on our website. The complete Fearless Steps Challenge (Phase-1 to Phase-4) corpora and the naturalistic (Apollo-11 & Apollo-13) corpora can be accessed by filling out a short survey form here: FS-APOLLO Corpora Download Access

 

 

The deadline for workshop abstract submission is March 1, 2024. Acceptance of abstracts will be announced on March 15, 2024. Both in-person and remote participation options will be available, with a focus on fostering a collaborative environment. Papers accepted to ICASSP 2024 are welcome as abstract submissions, as well as original research following our format guidelines.

 

We believe this workshop will be a pivotal step in advancing speech technology and research. We look forward to your participation in enriching the potential of the Apollo Resource and inspiring new approaches in collaborative problem-solving.

 

For more details, please visit our workshop website.

Back  Top


