ISCA - International Speech Communication Association



ISCApad #301

Thursday, July 06, 2023 by Chris Wellekens

3 Events
3-1 ISCA Events
3-1-1(2023-08-20) Interspeech 2023 Dublin, Registration open

Registration Now Open

We are delighted to announce that the INTERSPEECH 2023 conference registration is now open, with Early Bird reduced rates available until 9th June 2023. Along with the conference registration you can book your accommodation, social events, tutorials, and satellite events. We strongly recommend that delegates visit the conference website https://www.interspeech2023.org/ to review each of the items above before starting the registration process. 

Registration procedures and fees depend on whether you are a member of ISCA at the time of the conference. To participate in the conference, your ISCA membership must be valid for the entire conference (August 20-24, 2023). Those who have a valid membership through August 24th 2023 should select the member rate; all other delegates should select the non-member rate, which also includes one year’s ISCA membership. The ISCA Member Listing service allows you to search the ISCA membership database by name, membership number and status for verification purposes; more details are available on the ISCA website. 
 

Registration fees (Early Registration Fee: available until 9th June 2023; Standard Registration Fee: available from 10th June 2023):

ISCA Member Full (including Party at the Storehouse): €690 early / €790 standard
ISCA Member Student / Retiree Full (including Party at the Storehouse): €420 early / €520 standard
ISCA Non-Member Full (one year's ISCA membership & Party at the Storehouse included): €790 early / €890 standard
ISCA Non-Member Student / Retiree Full (one year's ISCA membership & Party at the Storehouse included): €480 early / €580 standard
ISCA Member Light*: €595 early / €695 standard
ISCA Member Student / Retiree Light*: €345 early / €445 standard
ISCA Non-Member Light (one year's ISCA membership included)*: €695 early / €795 standard
ISCA Non-Member Student / Retiree Light (one year's ISCA membership included)*: €405 early / €505 standard

*Light registrations do not include Party at the Storehouse tickets.


Party at the Storehouse

Be sure not to miss this evening of craic agus ceol! (fun and music!).

The INTERSPEECH 2023 conference party is always a highlight of the conference, and this year it will be held in the iconic Guinness Storehouse in Dublin on Wednesday, 23rd August 2023. Return transfers between the Convention Centre Dublin and the official conference hotels are included in your ticket, giving guests easy access to the dinner location. 

This is a unique social event where you can eat, relax, stroll through the Guinness exhibition floors, listen to music, dance, and enjoy the number one visitor attraction in Dublin. Guests will also have the opportunity to learn how to pour the perfect pint of Guinness!

Included are return bus transfers, buffet dinner, drinks, access to the Guinness Storehouse Experience and traditional Irish entertainment.
 



Exhibition and Sponsorship

Various sponsorship and exhibition packages, plus other support and partnership opportunities are available to book. 
Contact us to learn more or discuss a package to suit your requirements.
 
Key Sponsorship Packages Available
A range of sponsorship packages is available to suit a variety of budgets, business goals and levels of involvement. We are happy to work with you to customise a package that meets your specific needs. Contact us to learn more or discuss a package to suit your requirements.

Exhibition - Book Your Stand
The INTERSPEECH 2023 exhibition will be open to all participants from Monday 21st August to Thursday 24th August. Poster sessions, all main catering, the Welcome Reception and other networking points will be located in the exhibition hall, ensuring footfall through this area.

Closed Captioning - Sponsorship Opportunity
In order to support the INTERSPEECH 2023 theme of 'Inclusive Spoken Language Science and Technology – Breaking Down Barriers' and to make this year's conference inclusive and accessible on a global scale, we are providing our industry partners with an opportunity to become the conference's official Closed Captioning Sponsor.

Start-up Hub & Innovation Area
Start-ups are one of the driving forces behind innovation and adoption in the most rapidly emerging categories of speech technology. At INTERSPEECH 2023 we want to give start-ups the opportunity to showcase their innovative approaches in promoting the theme of inclusion.

Many thanks to our sponsors and supporters:
Sign up to our Newsletter
 
 
To ensure you are kept up to date with news about INTERSPEECH 2023, please sign up to our newsletter through the conference website: www.interspeech2023.org
 


 

We look forward to welcoming you to Dublin for the
24th INTERSPEECH conference from August 20th to 24th 2023!
 

 

3-1-2(2023-08-20) Show and Tell @Interspeech 2023


Show and Tell Submissions

INTERSPEECH is the world’s largest and most comprehensive conference on the science and technology of spoken language processing. An important addition to the regular and special sessions is the Show and Tell demonstrations, where participants are given the opportunity to present engaging and interactive demonstrations to conference attendees. Contributions must highlight scientific or technological innovations of a concept relevant to INTERSPEECH and may relate to a regular paper. Demonstrations should be based on innovations and fundamental research in the areas of speech communication, speech production, perception, acquisition, or speech and language technologies.

 


Important Dates

 

Submission deadline: 12th April 2023
Acceptance/rejection notification: 4th May 2023
Final paper and final video due: 1st June 2023

 

 

Special Sessions/Challenges

Be sure to check our website regularly for details on Special Sessions/Challenges. For further information on Special Sessions/Challenges, please visit the conference website.

Satellite Sessions

We are delighted to announce that a number of Satellite Sessions have been confirmed. For more information, please visit the conference website.

 
 

Exhibition and Sponsorship

Various sponsorship and exhibition packages, plus other support and partnership opportunities, are available to book. 

Book today to secure your stand. Contact us to learn more or discuss a package to suit your requirements.

 

Many thanks to our sponsors and supporters:

 



Sign up to our Newsletter

To ensure you are kept up to date with news about INTERSPEECH 2023, please sign up to our newsletter through the conference website: www.interspeech2023.org

 



We look forward to welcoming you to Dublin for the
24th INTERSPEECH conference from August 20th to 24th 2023!

 



Contact Us

 

 

Email: registration@interspeech2023.org
Phone: +353 1 400 3611
Web: www.interspeech2023.org

 

 
 

3-1-3(2024-07-02) 12th Speech Prosody Conference @Leiden, The Netherlands

Dear Speech Prosody SIG Members,

 

Professor Barbosa and I are very pleased to announce that the 12th Speech Prosody Conference will take place in Leiden, the Netherlands, July 2-5, 2024, and will be organized by Professors Yiya Chen, Amalia Arvaniti, and Aoju Chen.  (Of the 303 votes cast, 225 were for Leiden, 64 for Shanghai, and 14 indicated no preference.) 

 

Also, I'd like to remind everyone that nominations for SProSIG officers for 2022-2024 are still being accepted this week, using the form at http://sprosig.org/about.html, to be sent to Professor Keikichi Hirose. If you are considering nominating someone, including yourself, feel free to contact me or any current officer to discuss what's involved and what help is most needed.

 

Nigel Ward, SProSIG Chair

Professor of Computer Science, University of Texas at El Paso

CCSB 3.0408,  +1-915-747-6827

nigel@utep.edu    https://www.cs.utep.edu/nigel/   

 

 


3-1-4(2024-09-01) Interspeech 2024, Jerusalem, Israel.

 



3-1-5(2025-08-17) Interspeech 2025, Rotterdam, The Netherlands

INTERSPEECH 2025
Rotterdam, The Netherlands, 17-22 August 2025
Chairs: Odette Scharenborg, Khiet Truong and Catharine Oertel
26th INTERSPEECH event


3-1-6(2026) Interspeech 2026, Australia

The Australasian Speech Science and Technology Association is honoured to have been selected to host INTERSPEECH 2026. Our theme of Diversity & Equity – Speaking Together strongly reflects Sydney and our broader region. Sydney is Oceania's largest city and is also its most linguistically diverse: more than 300 different languages are spoken and 40% of Sydneysiders speak a language other than English at home. Consistent with the goals of ISCA 'to promote, in an international world-wide context, activities and exchanges in all fields related to speech communication science and technology', INTERSPEECH Sydney will highlight the diversity of research in our field with a firm focus on equity and inclusivity. Recognizing the importance of multi-dimensional approaches to speech, INTERSPEECH 2026 will foster greater interdisciplinarity to better inform current and future work on speech science and technology. We look forward to welcoming all to Sydney!



3-1-7ISCA INTERNATIONAL VIRTUAL SEMINARS

 

Now's the time of year that seminar programmes get fixed up. Please direct the attention of whoever organises your seminars to the ISCA INTERNATIONAL VIRTUAL SEMINARS scheme (introduction below). There is now a good choice of speakers: see

 

https://www.isca-speech.org/iscaweb/index.php/distinguished-lecturers/online-seminars

ISCA INTERNATIONAL VIRTUAL SEMINARS

A seminar programme is an important part of the life of a research lab, especially for its research students, but it's difficult for scientists to travel to give talks at the moment. However, presentations may be given online and, paradoxically, it is thus possible for labs to engage international speakers whom they wouldn't normally be able to afford.

ISCA has set up a pool of speakers prepared to give on-line talks. In this way we can enhance the experience of students working in our field, often in difficult conditions. To find details of the speakers,

  • visit isca-speech.org
  • Click Distinguished Lecturers in the left panel
  • Online Seminars then appears beneath Distinguished Lecturers: click that.

Speakers may pre-record their talks if they wish, but they don't have to. It is up to the host lab to contact speakers and make the arrangements. Talks can be state-of-the-art, or tutorials.

If you make use of this scheme and arrange a seminar, please send brief details (lab, speaker, date) to education@isca-speech.org

If you wish to join the scheme as a speaker, all we need is a title, a short abstract, a one-paragraph bio and contact details. Please send them to education@isca-speech.org


PS. The online seminar scheme  is now up and running, with 7 speakers so far:

 

Jean-Luc Schwartz, Roger Moore, Martin Cooke, Sakriani Sakti, Thomas Hueber, John Hansen and Karen Livescu.




3-1-8Speech Prosody courses

Dear Speech Prosody SIG Members,

We would like to draw your attention to three upcoming short courses from the Luso-Brazilian Association of Speech Sciences:

- Prosody & Rhythm: applications to teaching rhythm,
  Donna Erickson (Haskins), March 16, 19, 23 and 26

- Prosody, variation and contact,
  Barbara Gili Fivela (University of Salento, Italy), April 19, 21, 23, 26 and 28

- Rhythmic analysis of languages: main challenges,
  Marisa Cruz (University of Lisbon), June 2, 3, 4, 7, 8 and 10

For details:
  http://www.letras.ufmg.br/padrao_cms/index.php?web=lbass&lang=2&page=3670&menu=&tipo=1
 
 
 
Plinio Barbosa and Nigel Ward


3-2 ISCA Supported Events
3-2-1(2023-08-28) DiSS Workshop 2023 (Disfluency in Spontaneous Speech), Bielefeld, GE

We are happy to announce that the DiSS Workshop 2023 (Disfluency in Spontaneous Speech)
will take place in Bielefeld (Germany) August 28-30. It will also include a special day
on laughter and other non-verbal vocalizations, to connect disfluency research to
research on related phenomena. Submissions are encouraged from all fields that deal with
these phenomena, including: psychology, neuropsychology and neurocognition,
psycholinguistics, linguistics, conversation analysis, computational linguistics, speech
technology, gesture analysis, dialogue systems, speech production and perception.

Paper submission will be handled via the CMT platform. Three student grants for attending
the conference will be made available through ISCA. Please check the workshop website:
https://tinyurl.com/diss2023 for further information.

Important Dates:

Paper submission deadline: 31 March 2023
Notification of acceptance: 15 May 2023
Author registration deadline: 16 June 2023
Registration deadline: 28 July 2023
DiSS workshop: 28-30 August 2023

Keynote Speakers:

Ludivine Crible (Ghent University, Belgium)
Jürgen Trouvain (Saarland University, Germany)


Looking forward to seeing you in Bielefeld,
Simon Betz, Bogdan Ludusan and Petra Wagner


3-3 Other Events
3-3-1(2023-07-15) MLDM 2023 : 18th International Conference on Machine Learning and Data Mining, New York, NY, USA

MLDM 2023 : 18th International Conference on Machine Learning and Data Mining
http://www.mldm.de
 
When    Jul 16, 2023 - Jul 21, 2023
Where    New York, USA
Submission Deadline    Jan 15, 2023
Notification Due    Mar 18, 2023
Final Version Due    Apr 5, 2023
Categories:    machine learning   data mining   pattern recognition   classification
 
Call For Papers
MLDM 2023
18th International Conference on Machine Learning and Data Mining
July 15 - 19, 2023, New York, USA

The Aim of the Conference
The aim of the conference is to bring together researchers from all over the world who deal with machine learning and data mining in order to discuss the current status of the research and to direct further developments. Basic research papers as well as application papers are welcome.

Chair
Petra Perner Institute of Computer Vision and Applied Computer Sciences IBaI, Germany

Program Committee
Piotr Artiemjew University of Warmia and Mazury in Olsztyn, Poland
Sung-Hyuk Cha Pace University, USA
Ming-Ching Chang University of Albany, USA
Mark J. Embrechts Rensselaer Polytechnic Institute and CardioMag Imaging, Inc, USA
Robert Haralick City University of New York, USA
Adam Krzyzak Concordia University, Canada
Chengjun Liu New Jersey Institute of Technology, USA
Krzysztof Pancerz University of Rzeszow, Poland
Dan Simovici University of Massachusetts Boston, USA
Agnieszka Wosiak Lodz University of Technology, Poland
more to be announced...


Topics of the conference

Paper submissions should be related but not limited to any of the following topics:

Association Rules
Audio Mining
Automatic Semantic Annotation of Media Content
Bayesian Models and Methods
Capability Indices
Case-Based Reasoning and Associative Memory
Case-Based Reasoning and Learning
Classification & Prediction
Classification and Interpretation of Images, Text, Video
Classification and Model Estimation
Clustering
Cognition and Computer Vision
Conceptional Learning
Conceptional Learning and Clustering
Content-Based Image Retrieval
Control Charts
Decision Trees
Design of Experiment
Desirabilities
Deviation and Novelty Detection
Feature Grouping, Discretization, Selection and Transformation
Feature Learning
Frequent Pattern Mining
3-3-2(2023-07-16) IFOSS 2023 International Forensics Summer School, Punta Sampieri - Scicli (Ragusa), Sicily, Italy

https://www.youtube.com/watch?v=7a35Wjkygx0&ab_channel=IFOSS

 

The underlying theme of the current edition is

*Digitalisation and Forensic Data Science: From evidence acquisition to interpretation*.

 

LIST OF SPEAKERS (almost confirmed)

Fabio Bruno, Interpol, Singapore

Didier Meuwly - University of Twente, NL

Matthew Stamm, Drexel University, USA

Giovanni Tessitore - Polizia Scientifica, IT

Christian Reiss - Friedrich-Alexander-Universität, DE

 

..others coming soon.

 

DIRECTORS

Sebastiano Battiato - University of Catania, Italy

Donatella Curtotti -  University of Foggia, Italy

Giovanni Ziccardi, University of Milan, Italy

 

PhD FORUM

A special session is organized for participants who intend to take advantage of the audience for presenting their current research/tool in the area. Moreover, this year three prizes sponsored by Amazon AWS will be awarded (AWS credits to the top 3 students for a total value of $3500). Students will be selected by the scientific committee on the basis of their CV and the presentations to be given during the demo poster session.

 

APPLICATION

The school will be open to about 75 qualified, motivated and pre-selected candidates. Ph.D. students, post-docs, young researchers (both academic and industrial), senior researchers (both academic and industrial) or academic/industrial professionals are encouraged to apply at: www.ifoss.it

 

The expected school fee will be €550 for Master's and PhD students funded by academia, €600 for other academic positions and €700 for industry, with a reduced fee of €400 reserved for LEAs, private lawyers and practitioners. The fee will include all course materials, coffee breaks, bus service from Catania Airport to the school location and return, WiFi Internet connection, a guided tour, a social dinner and all the events scheduled in the programme.

 

A certain number of scholarships will be available soon depending on sponsorship income.

 

Applications to attend IFOSS 2023 should be received before 07/05/2023.

Applicants will receive notification of acceptance by mid-May.

Late registration can be done with an extra payment of € 100.

 

ACCOMMODATION

IFOSS participants will be hosted at Hotel Village Baia Samuele (school location) at very special rates. There are no other accommodation options. IFOSS 2023 participants must make reservations for accommodation, using the accommodation reservation form (available soon) to be sent directly to Baia Samuele reception. 
 

More details at https://www.ifoss.it/accommodation/

 

After a certain date, reservations at Hotel Village Baia Samuele can no longer be guaranteed.

More information will be announced as soon as possible on the web site.

Depending on the chosen room type (single, double or triple), the overall cost including breakfast, lunch and dinner should range from about €600 to €1000 for the period 16 July (check-in) to 22 July (check-out) 2023.

 

LOCATION OF IFOSS 2023

IFOSS 2023 will be hosted by Hotel Village Baia Samuele in Punta Sampieri - Scicli (Ragusa), Sicily from 16-22 July 2023.

Sicily is one of the most beautiful islands of the Mediterranean. The island is very rich in archeological sites from various ancient civilizations. The sea, weather, food and wine are excellent. In particular, Punta Sampieri - Scicli (RG) is located in the south-east of Sicily in a late Baroque area called Val di Noto. The Val di Noto area is included in the UNESCO World Heritage List and includes eight nearby towns: Caltagirone, Militello Val di Catania, Catania, Modica, Noto, Palazzolo, Ragusa and Scicli.

The school's location rises in the middle of an ample bay, bounded on the west by Sampieri and on the east by a cliff on which stands an ancient furnace, a rare example of industrial archaeology. The Hotel Village Baia Samuele stretches in a gentle slope down to the beach: 120 thousand square metres bordered by rows of century-old cypresses. It is an ultramodern village with original architecture, pleasant design and all the comforts you can imagine. The frame of plants and flowers, typical of this corner of Sicily, facing the island of Malta, completes this gilded dream of the Mediterranean.

 

MORE INFORMATION

www.ifoss.it

info@ifoss.it

 

FOLLOW US ON

Facebook: https://www.facebook.com/InternationalForensicsSummerSchool/
Twitter: https://twitter.com/IFOSS22
Instagram: https://www.instagram.com/ifoss_official/
LinkedIn: https://www.linkedin.com/company/ifoss/


3-3-3(2023-07-XX) Track 4: Robust and Multilingual Automatic Evaluation Metrics for Open-Domain Dialogue Systems - Eleventh Dialog System Technology Challenge (DSTC11.T4)

Call for Participation

TRACK GOALS AND DETAILS: Two main goals and tasks:
•    Task 1: Propose and develop effective Automatic Metrics for the evaluation of open-domain multilingual dialogues.
•    Task 2: Propose and develop Robust Metrics for dialogue systems trained with back-translated and paraphrased dialogues in English.

EXPECTED PROPERTIES OF THE PROPOSED METRICS:
•    High correlation with human-annotated assessments (see the illustrative sketch after this list).
•    Explainable metrics in terms of the quality of the model-generated responses.
•    Participants can propose their own metric or optionally improve the baseline evaluation metric, deep AM-FM (Zhang et al., 2020).
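As a rough illustration of the first property above, the sketch below shows how a candidate metric's scores could be compared against human ratings using rank and linear correlation. This is a minimal sketch assuming Python with scipy; the score arrays and the choice of Spearman/Pearson correlation are illustrative assumptions, not the official challenge evaluation code.

    # Illustrative only: compare a candidate metric's scores with human ratings.
    # The arrays below are made-up examples, not challenge data.
    from scipy.stats import spearmanr, pearsonr

    metric_scores = [0.62, 0.81, 0.34, 0.90, 0.55, 0.73]  # metric output per response
    human_scores = [3.0, 4.5, 2.0, 5.0, 3.5, 4.0]          # e.g. 1-5 human quality ratings

    rho, rho_p = spearmanr(metric_scores, human_scores)    # rank (Spearman) correlation
    r, r_p = pearsonr(metric_scores, human_scores)         # linear (Pearson) correlation
    print(f"Spearman rho = {rho:.3f} (p = {rho_p:.3f})")
    print(f"Pearson r = {r:.3f} (p = {r_p:.3f})")

A higher correlation with the human ratings indicates a better automatic metric under this property.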

DATASETS:
For training: up to 18 human-human curated multilingual datasets (over 3M turns), with turn/dialogue-level automatic annotations such as toxicity or sentiment, among others.
Dev/Test: up to 10 human-chatbot curated multilingual datasets (over 150k turns), with turn/dialogue-level human annotations including QE metrics or cosine similarity.
The data are translated and back-translated into several languages (English, Spanish and Chinese), and several annotated paraphrases are provided for each dataset.

BASELINE MODEL:
The default choice is Deep AM-FM (Zhang et al., 2020). This model has been adapted to evaluate multilingual datasets, as well as to work with paraphrased and back-translated sentences.

REGISTRATION AND FURTHER INFORMATION:
ChatEval: https://chateval.org/dstc11
GitHub: https://github.com/Mario-RC/dstc11_track4_robust_multilingual_metrics

PROPOSED SCHEDULE:
Training/validation data release: November-December 2022
Test data release: mid-March 2023
Entry submission deadline: mid-March 2023
Submission of final results: end of March 2023
Final result announcement: early April 2023
Paper submission: March-May 2023
Workshop: July-September 2023, in a venue to be announced with DSTC11

ORGANIZATIONS:
Universidad Politécnica de Madrid (Spain)
National University of Singapore (Singapore)
Tencent AI Lab (China)
New York University (USA)
Carnegie Mellon University (USA)

3-3-4(2023-08-07) 20th International Congress of the Phonetic Sciences (ICPhS), Prague, Czech Republic

We would like to welcome you to Prague for the 20th International Congress of the Phonetic Sciences (ICPhS), which takes place on August 7–11, 2023, in Prague, Czech Republic.

 

ICPhS takes place every four years, is held under the auspices of the International Phonetic Association and provides an interdisciplinary forum for the presentation of basic and applied research in the phonetic sciences. The main areas covered by the Congress are speech production, speech acoustics, speech perception, speech prosody, sound change, phonology, sociophonetics, language typology, first and second language acquisition, forensic phonetics, speaking styles, voice quality, clinical phonetics and speech technology.

 

We invite papers on original, unpublished research in the phonetic sciences. The theme of the Congress is “Intermingling Communities and Changing Cultures”. Papers related to this theme are especially encouraged, but we welcome papers related to any of the Congress’ scientific areas. The deadline for abstract submission is December 1, 2022, and for full-paper submission December 8, 2022.

 

We also invite proposals for special sessions covering emerging topics, challenges, interdisciplinary research, or subjects that could foster useful debate in the phonetic sciences. The submission deadline is May 20, 2022.

 

All information is available at https://www.icphs2023.org/, where it is also possible to register for email notifications concerning the congress.

 

Contact: icphs2023@guarant.cz

 


3-3-5(2023-08-07) IPA bursaries for ICPhS

The president of the IPA, Michael Ashby, would like to call attention to the IPA's generous scheme of student awards and travel bursaries for ICPhS. He hopes that many of us will encourage our students to apply.

https://www.internationalphoneticassociation.org/news/202210/ipa-student-awardstravel-bursaries-and-g%C3%B6sta-bruce-scholarships-icphs-2023


3-3-6(2023-08-18) SIGUL 2023 Workshop@ Interspeech 2023, Dublin, Ireland

 

2nd Call for Papers

SIGUL 2023 Workshop


Co-located with Interspeech 2023

Dublin, Ireland, 18-20 August 2023



The 2nd Annual Meeting of the ELRA/ISCA Special Interest Group on Under-Resourced Languages (SIGUL 2023) provides a forum for the presentation and discussion of cutting-edge research in text and speech processing for under-resourced languages by academic and industry researchers. SIGUL 2023 carries on the tradition of the SIGUL and the CCURL-SLTU (Collaboration and Computing for Under-Resourced Languages – Spoken Language Technologies for Under-resourced languages) Workshop Series, which has been organized since 2008 and, as LREC Workshops, since 2014. As usual, this workshop will span the research interest areas of less-resourced, under-resourced, endangered, minority, and minoritized languages.


Special Features


This year, the workshop will be marked with three special events:


(1) Special Session in Celtic Language Technology (August 18)

SIGUL 2023 will provide a special session or forum for researchers interested in developing language technologies for Celtic languages.


(2) Joint Session with SlaTE 2023 (August 19)

SIGUL 2023 will have a joint session with The 9th Workshop on Speech and Language Technology in Education (SlaTE 2023). The goal is to accelerate the development of spoken language technology for under-resourced languages through education.


(3) Social outing and dinner near Dublin (optional on August 20)


Invited Speakers


  • Subhashish Panigrahi, O Foundation and Law for All Initiative: Reclaiming Our Voices - Imagining Community-Led AI/ML Practices

  • Delyth Prys, Language Technologies Unit, Canolfan Bedwyr: TBA


Workshop Topics 


Following the long-standing series of previous meetings, the SIGUL venue will provide a forum for the presentation of cutting-edge research in natural language processing and spoken language processing for under-resourced languages to both academic and industry researchers and also offer a venue where researchers in different disciplines and from varied backgrounds can fruitfully explore new areas of intellectual and practical development while honoring their common interest of sustaining less-resourced languages.


Topics include but are not limited to:


  • Processing any under-resourced languages (covering less-resourced, under-resourced, endangered, minority, and minoritized languages)

  • Cognitive and linguistic studies of under-resourced languages

  • Fast resources acquisition: text and speech corpora, parallel texts, dictionaries, grammars, and language models

  • Zero-resource speech technologies and self-supervised learning

  • Cross-lingual and multilingual acoustic and lexical modeling

  • Speech recognition and synthesis for under-resourced languages and dialects

  • Machine translation and spoken dialogue systems

  • Applications of spoken language technologies for under-resourced languages


  • Special topic: 

    • Celtic language technology

    • Spoken language technologies for under-resourced languages via education


We also welcome various typologies of papers:


  • research papers;

  • position papers for reflective considerations of methodological, best practice, institutional issues (e.g., ethics, data ownership, speakers’ community involvement, de-colonizing approaches);

  • research posters for work-in-progress projects in the early stage of development or description of new resources;

  • demo papers, and early-career/student papers, to be submitted as extended abstracts and presented as posters.


Instructions for Submission


Prospective authors are invited to submit their contributions according to the following guidelines.

  • Research and position papers: a maximum of 5 pages with the 5th page reserved exclusively for references.

  • Demo papers, and early-career/student papers: a maximum of three pages with the 3rd page reserved for references. 


Both types of submissions must conform to the Interspeech format defined in the paper preparation guidelines as instructed in the author’s kit on the Interspeech webpage. Papers do not need to be anonymous. Authors must declare that their contributions are original and that they have not submitted their papers elsewhere for publication.


Important Dates


- Paper submission deadline: 28 May 2023

- Notification of acceptance: 2 July 2023 

- Camera-ready paper: 21 July 2023 

- Workshop date: 18-20 August 2023



Outline of the Program


SIGUL 2023 will continue the tradition of the previous SIGUL event that features a number of distinguished keynote speakers, technical oral and poster sessions, and panel discussions to discuss a better future for under-resourced languages and under-resourced communities. 


Full list of organizers

SIGUL Board

Sakriani Sakti (JAIST, Japan)

Claudia Soria (CNR-ILC, Italy)

Maite Melero (Barcelona Supercomputing Center, Spain)


SIGUL 2023 Organizers

Kolawole Adebayo (ADAPT, Ireland)

Ailbhe Ní Chasaide (Trinity College Dublin, Ireland)

Brian Davis (ADAPT, Ireland)

John Judge (ADAPT, Ireland)

Maite Melero (Barcelona Supercomputing Center, Spain)

Sakriani Sakti (JAIST, Japan)

Claudia Soria (CNR-ILC, Italy)


SIGUL 2023 Program Committee

Gilles Adda (LIMSI/IMMI-CNRS, France)

Manex Agirrezabal (University of Copenhagen – Center for Sprogteknologi | Center for Language Technology, Denmark)

Shyam S. Agrawal (KIIT, India)

Begona Altuna (Euskal Herriko Unibertsitatea | University of the Basque Country, Spain)

Steven Bird (Charles Darwin University, Australia)

Matt Coler (University of Groningen, Campus Fryslân, The Netherlands)

Pradip K. Das (IIT, India)

Iria De Dios Flores ( Centro Singular de Investigación en Tecnoloxías Intelixentes, Spain)

A. Seza Doğruöz (Universiteit Gent, België | Ghent University, Belgium)

Stefano Ghazzali (Prifysgol Bangor | Bangor University, Bangor, Gwynedd) 

Jeff Good (University at Buffalo, USA)

Kristiina Jokinen (AIRC [Artificial Intelligence Research Center], AIST Tokyo Waterfront, Japan)

Laurent Kevers (Università di Corsica Pasquale Paoli, France)

Teresa Lynn (Mohamed bin Zayed University of Artificial Intelligence, United Arab Emirates)

Joseph Mariani (LIMSI-CNRS, France)

Maite Melero (Barcelona Supercomputing Center, Espanya | Spain)

Win Pa Pa (UCS Yangon, Myanmar)

Delyth Prys (Prifysgol Bangor | Bangor University, Bangor, Gwynedd) 

Carlos Ramisch (Université Marseille, France)

Sakriani Sakti (JAIST, Japan)

Claudia Soria (CNR-ILC, Italia | Italy)

Trond Trosterud (Norges Arktiske Universitet | The Arctic University of Norway)

 

Acknowledgments

SIGUL is a joint Special Interest Group of the ELRA Language Resources Association (ELRA) and of the International Speech Communication Association (ISCA). The SIGUL 2023 workshop has been organized with the help of the local organizers of Interspeech 2023 and SlaTE 2023. This edition has been sponsored by Google and endorsed by Linguapax International. 

 

 

Contact


To contact the organizers, please mail sigul2023@ml.jaist.ac.jp (Subject: [SIGUL2023]).


3-3-7(2023-08-20) Special session at Interspeech 2023 on DIarization of SPeaker and LAnguage in Conversational Environments [DISPLACE] Challenge.

We would like to bring to your notice the launch of the special session at Interspeech 2023 on DIarization of SPeaker and LAnguage in Conversational Environments [DISPLACE] Challenge.  

 

The DISPLACE challenge entails a first-of-its-kind task to perform speaker and language diarization on the same data, as the data contain multi-speaker social conversations in multilingual code-mixed speech. In multilingual communities, social conversations frequently involve code-mixed and code-switched speech. In such cases, various speech processing systems need to perform speaker and language segmentation before any downstream task. Current speaker diarization systems are not equipped to handle multilingual conversations, while language recognition systems may not be able to handle the same talker speaking in multiple languages within the same recording. 


With this motivation, the DISPLACE challenge attempts to benchmark and improve Speaker Diarization (SD) in multilingual settings and Language Diarization (LD) in multi-speaker settings, using the same underlying dataset. For this challenge, a natural multilingual, multi-speaker conversational dataset will be distributed for development and evaluation purposes. There will be no training data given and the participants will be free to use any resource for training the models. The challenge reflects the Interspeech 2023 theme, 'Inclusive Spoken Language Science and Technology – Breaking Down Barriers', in its true sense.  

 

Registrations are open for this challenge which will contain two tracks - a) Speaker diarization track and b) Language diarization track. 

 

A baseline system and an open leaderboard is available to the participants. The DISPLACE challenge is split into two phases, where the first phase is linked to the Interspeech paper submission deadline, while the second phase aligns with the camera ready submission deadline. For more details, dates and to register, kindly visit the DISPLACE challenge website: https://displace2023.github.io/

 

We look forward to your team challenging to 'displace' the state of the art in speaker and language diarization. 

 

Thank you and Namaste,

The DISPLACE team 

 

 

 

 

 

 


3-3-8(2023-08-26) CfP 12th Speech Synthesis Workshop - Grenoble-France
CfP 12th Speech Synthesis Workshop - Grenoble-France - https://ssw2023.org - August 26-28, 2023:
 The Speech Synthesis Workshop (SSW) is the main meeting place for research and innovation in speech synthesis, i.e. predicting speech signals from text input. SSW welcomes contributions not only in the core TTS technology but also papers from contributing sciences: from phoneticians, phonologists, linguists, neuroscientists to experts of multimodal human-machine interaction.
 For more information, please consult: https://ssw2023.org/
 Deadlines:
  • 26 April, 2023 Initial paper submission (at least, title, authors and abstract)
  • 3 May, 2023 Final paper submission (only updates to the PDF are allowed)
 Note also that the data for the Blizzard Challenge 2023 on French have been released: https://www.synsig.org/index.php/Blizzard_Challenge_2023
 Deadlines:
  • 5 March 2023 Team registration closes
 

The decisions about SSW2023 papers have just been released. From 45 submitted papers, 37 were accepted and 8 rejected. With these high-quality papers, 3 invited talks, a round-table on 'ethics & generative AI-driven systems' and two memorable social events, the workshop promises to be exciting! We look forward to welcoming you in Grenoble after Interspeech!

  The registration is now open: https://ssw2023.org/index.php/registration. Early bird registration is open until July 12th.


  The LBR submission is also open until June 28th.

  Did your paper not make it into ICASSP, Interspeech or SSW? Do you have ongoing exciting, cutting-edge or experimental research? Are you a Master's or PhD student who wants to present early-stage results? The Late-Breaking Reports (LBR) of SSW provide authors with the opportunity to present early-stage results and new ideas on exciting, cutting-edge or experimental research. LBRs are an excellent opportunity for researchers new to the field (e.g. with the results of a Master's thesis) to participate in the conference. LBRs should be at most 2 pages (including figures and references).

  Please find the LaTeX style in LBR_SSW2023. Send your .pdf submission as an attached file to organisation@ssw.org with the subject 'LBR submission'!

  All LBR submissions will undergo screening by the organizing committee. Accepted LBRs will be presented in a dedicated poster session. They will be assembled and published in companion proceedings.


3-3-9(2023-08-29) Blizzard Challenge 2023
We are delighted to announce the call for participation in the Blizzard Challenge 2023. This is an open evaluation of corpus-based speech synthesis systems using common datasets and a large listening test.

This year, the challenge will provide a French dataset from two native speakers. The two tasks involve building voices from this data. Please read the full announcement and the rules on the challenge web page: https://www.synsig.org/index.php/Blizzard_Challenge_2023

Please register by following the instructions on the web page.
Important: please send all communications about Blizzard to the official address blizzard-challenge-organisers@googlegroups.com and not to our personal addresses.


Please feel free to distribute this announcement to other relevant mailing lists.

Olivier Perrotin & Simon King

3-3-10(2023-08-30) CfP Sixth IEEE International Conference on Multimedia Information Processing and Retrieval (MIPR 2023), Singapore

*********************************************
*** Submission Deadline: 19 April 2023 PST ***
*********************************************

CALL FOR PAPERS

Sixth IEEE International Conference on Multimedia Information Processing and Retrieval (MIPR 2023)
30 August - 1 September 2023, Singapore
http://ieee-mipr.org/


The Sixth IEEE International Conference on Multimedia Information Processing
and Retrieval (IEEE MIPR 2023) will take place both physically and virtually,
August 30 - September 1, 2023, in Singapore. The conference will provide a
forum for original research contributions and practical system design,
implementation, and applications of multimedia information processing and
retrieval.

Topics (Please see http://ieee-mipr.org/call_papers.html).
Topics of interest include, but are not limited to:

1. Multimedia Retrieval
2. Machine Learning/Deep Learning/Data Mining
3. Content Understanding and Analytics
4. Multimedia and Vision
5. Networks for Multimedia Systems
6. Systems and Infrastructures
7. Data Management
8. Novel Applications
9. Internet of Multimedia Things
and others.

Paper Submission:

The conference will accept regular papers (6 pages), short papers (4 pages),
and demo papers (4 pages). Authors are encouraged to compare their approaches,
qualitatively or quantitatively, with existing work and explain the strength
and weakness of the new approaches. We are planning to invite selected
submissions to journal special issues.
Instructions and a link to the submission website are available here:
https://cmt3.research.microsoft.com/MIPR2023

Important Dates (http://ieee-mipr.org/dates.html):
  - Regular Paper (6 pages) and Short Paper (4 pages) Submission Due: April 19, 2023
  - Notification of Decision: May 25, 2023
  - Camera-ready deadline: July 10, 2023
  - Conference Date: August 30-Sep 1, 2023
--
Ichiro IDE (ide@i.nagoya-u.ac.jp)
Nagoya University, Graduate School of Informatics / Mathematical & Data Science Center
Phone/Facsimile: +81-52-789-3313
Address: #IB457, 1 Furo-cho, Chikusa-ku, Nagoya 464-8601, Japan
WWW: http://www.cs.is.i.nagoya-u.ac.jp/users/ide/index.html



3-3-11(2023-09-04) CfP 26th Intern.Conf. on text, speech and dialogue (TSD 2023), Plzen (Pilsen), Czech Republic

***************************************************************************
                     TSD 2023 - LAST CALL FOR PAPERS
***************************************************************************

                 Twenty-sixth International Conference on
                   TEXT, SPEECH and DIALOGUE (TSD 2023)

                Pilsen, Czech Republic, 4-7 September 2023
                       http://www.tsdconference.org/


*** The paper submission deadline was postponed! ***

*** NEW *** The best papers' authors will be asked to provide extended
versions of their papers to be published in a topical issue of the Springer
Nature Journal of Computer Science (https://www.springer.com/journal/42979)


The conference is organized by the Faculty of Applied Sciences, University
of West Bohemia, Plzen (Pilsen) in co-operation with the Faculty of
Informatics, Masaryk University, Brno, and is supported by the
International Speech Communication Association.

Venue: Plzen (Pilsen), Czech Republic,
Primavera **** Hotel & Congress Centre


THE IMPORTANT DATES:

Deadline for submission of contributions:       Postponed to April 30, 2023
Notification of acceptance or rejection:        May 22, 2023
Deadline for submission of camera-ready papers: June 4, 2023
TSD 2023:                                       September 4-7, 2023


TSD SERIES

The TSD series has evolved as a prime forum for interaction between
researchers in both spoken and written language processing from all over
the world. The TSD conference proceedings form a book published by
Springer-Verlag in their Lecture Notes in Artificial Intelligence (LNAI)
series. The TSD proceedings are regularly indexed by Thomson Reuters
Conference Proceedings Citation Index. Moreover, LNAI series is listed in
all major citation databases such as DBLP, SCOPUS, EI, INSPEC or COMPENDEX.


TOPICS

Topics of the conference will include (but are not limited to):

    Corpora and Language Resources (monolingual, multilingual, text and
    spoken corpora, large web corpora, disambiguation, specialized
    lexicons, dictionaries)

    Speech Recognition (multilingual, continuous, emotional speech,
    handicapped speaker, out-of-vocabulary words, alternative way of
    feature extraction, new models for acoustic and language modelling)

    Tagging, Classification and Parsing of Text and Speech (morphological
    and syntactic analysis, synthesis and disambiguation, multilingual
    processing, sentiment analysis, credibility analysis, automatic text
    labeling, summarization, authorship attribution)

    Speech and Spoken Language Generation (multilingual, high fidelity
    speech synthesis, computer singing)

    Semantic Processing of Text and Speech (information extraction,
    information retrieval, data mining, semantic web, knowledge
    representation, inference, ontologies, sense disambiguation,
    plagiarism detection, fake news detection)

    Integrating Applications of Text and Speech Processing (machine
    translation, natural language understanding, question-answering
    strategies, assistive technologies)

    Automatic Dialogue Systems (self-learning, multilingual,
    question-answering systems, dialogue strategies, prosody in dialogues)

    Multimodal Techniques and Modelling (video processing, facial
    animation, visual speech synthesis, user modelling, emotions and
    personality modelling)

Papers dealing with text and speech processing in linguistic environments
other than English are strongly encouraged (as long as they are written in
English).


PROGRAM COMMITTEE

Elmar Noth, Friedrich-Alexander-Universitat Erlangen-Nurnberg, Germany (General Chairman)
Rodrigo Agerri, University of the Basque Country, Spain
Eneko Agirre, University of the Basque Country, Spain
Vladimir Benko, Slovak Academy of Sciences, Slovakia
Archna Bhatia, Carnegie Mellon University, United States
Jan Cernocky, Brno University of Technology, Czechia
Simon Dobrisek, University of Ljubljana, Slovenia
Kamil Ekstein, University of West Bohemia, Czechia
Karina Evgrafova, Saint-Petersburg State University, Russia
Yevhen Fedorov, Cherkasy State Technological University, Ukraine
Volker Fischer, EML Speech Technology GmbH, Germany
Darja Fiser, Institute of Contemporary History, Slovenia
Lucie Flek, Philipps-Universitat Marburg, Germany
Bjorn Gamback, Norwegian University of Science and Technology, Norway
Radovan Garabik, Slovak Academy of Sciences, Slovakia
Alexander Gelbukh, Instituto Politecnico Nacional, Mexico
Louise Guthrie, University of Texas at El Paso, United States
Jan Hajic, Charles University, Czechia
Eva Hajicova, Charles University, Czechia
Yannis Haralambous, IMT Atlantique, France
Hynek Hermansky, Johns Hopkins University, United States
Jaroslava Hlavacova, Charles University, Czechia
Ales Horak, Masaryk University, Czechia
Eduard Hovy, Carnegie Mellon University, United States
Denis Jouvet, Inria, France
Maria Khokhlova, Saint Petersburg State University, Russia
Aidar Khusainov, Tatarstan Academy of Sciences, Russia
Daniil Kocharov, Saint Petersburg State University, Russia
Miloslav Konopik, University of West Bohemia, Czechia
Ivan Kopecek, Masaryk University, Czechia
Valia Kordoni, Humboldt University of Berlin, Germany
Evgeny Kotelnikov, Vyatka State University, Russia
Pavel Kral, University of West Bohemia, Czechia
Siegfried Kunzmann, Amazon Alexa Machine Learning, United States
Nikola Ljubesic, Jozef Stefan Institute, Croatia
Natalija Loukachevitch, Lomonosov Moscow State University, Russia
Bernardo Magnini , Fondazione Bruno Kessler, Italy
Oleksandr Marchenko, Taras Shevchenko National University of Kyiv, Ukraine
Vaclav Matousek, University of West Bohemia, Czechia
Roman Moucek, University of West Bohemia, Czechia
Agnieszka  Mykowiecka, Polish Academy of Sciences, Poland
Hermann Ney, RWTH Aachen University, Germany
Joakim Nivre, Uppsala University, Sweden
Juan Rafael  Orozco-Arroyave, University of Antioquia, Colombia
Karel Pala, Masaryk University, Czechia
Maciej Piasecki, Wroclaw University of Science and Technology, Poland
Josef Psutka, University of West Bohemia, Czechia
James Pustejovsky, Brandeis University, United States
German Rigau, University of the Basque Country, Spain
Paolo Rosso, Universitat Politecnica de Valencia, Spain
Leon Rothkrantz, Delft University of Technology, Netherlands
Anna Rumshisky, University of Massachusetts Lowell, United States
Milan Rusko, Slovak Academy of Sciences, Slovakia
Pavel Rychly, Masaryk University, Czechia
Mykola Sazhok, International Research and Training Center for Information Technologies and Systems, Ukraine
Odette Scharenborg, Delft University of Technology, Netherlands
Pavel Skrelin, Saint Petersburg State University, Russia
Pavel Smrz, Brno University of Technology, Czechia
Petr Sojka, Masaryk University, Czechia
Georg Stemmer, Intel Corp., Germany
Marko Robnik Sikonja, University of Ljubljana, Slovenia
Marko Tadic, University of Zagreb, Croatia
Jan Trmal, Johns Hopkins University, Czechia
Tamas Varadi, Hungarian Academy of Sciences, Hungary
Zygmunt Vetulani, Adam Mickiewicz University, Poland
Aleksander Wawer, Polish Academy of Sciences, Poland
Pascal Wiggers, Amsterdam University of Applied Sciences, Netherlands
Marcin Wolinski, Polish Academy of Sciences, Poland
Alina Wroblewska, Polish Academy of Sciences, Poland
Victor Zakharov, Saint Petersburg State University, Russia
Jerneja Zganec Gros, Alpineon, Slovenia


FORMAT OF THE CONFERENCE

The conference programme will include invited keynote speeches given by
respected influential researchers/academics, presentations of accepted
papers in both oral and poster/demonstration form, and interesting social
events. The papers will be presented in plenary and topic-oriented
sessions.

Social events including an excursion to the world-famous Pilsner Urquell
Brewery and a trip in the vicinity of Plzen will allow additional informal
interactions of the conference participants.


KEYNOTE SPEAKERS (known so far)

* Philippe Blache -- Director of Research at the Laboratoire
  Parole et Langage (LPL), Institute of Language, Communication and the
  Brain CNRS & Aix-Marseille University, France

* Ivan Habernal -- Head of the Trustworthy Human Language
  Technologies (TrustHLT) Group, Department of Computer Science,
  Technische Universitat Darmstadt, Germany

* Daniela Braga (negotiations in progress) -- Founder and CEO
  at Defined.ai, Bellevue, Washington, United States


SUBMISSION OF PAPERS

Authors are invited to submit a full paper not exceeding 12 pages (in
total, i.e. with all figures, bibliography, etc. included) formatted in
the LNAI/LNCS style. Those accepted will be presented either orally or as
posters. The decision about the presentation format will be based on the
recommendation of the reviewers. Each paper is examined by at least
3 reviewers and the process is double blind.

The authors are asked to submit their papers using the on-line submission
interface accessible from the TSD 2023 web application at
https://www.kiv.zcu.cz/tsd2023/index.php?form=mypapers

The papers submitted to the TSD 2023 must not be under review at any other
conference or other type of publication during the TSD 2023 review cycle,
and must not be previously published or accepted for publication elsewhere.

Authors are also invited to present actual projects, developed software or
interesting material relevant to the topics of the conference. The
presenters of demonstrations should provide an abstract not exceeding one
page. The demonstration abstracts will not appear in the conference
proceedings.

*** NEW *** The best papers' authors will be asked to provide extended
versions of their papers to be published in a topical issue of the
Springer Nature Journal of Computer Science
(https://www.springer.com/journal/42979)


OFFICIAL LANGUAGE

The official language of the conference is English.


ACCOMMODATION

The organizing committee arranged discounted accommodation of appropriate
standards at the conference venue. Details about the conference
accommodation will be available on the TSD 2023 web page at
https://www.kiv.zcu.cz/tsd2023/index.php?page=accommodation

The prices of the accommodation (and limited-budget options) will be
available on the conference website, too.


ADDRESS

All correspondence regarding the conference should be addressed to

    TSD 2023 - KIV
    Faculty of Applied Sciences, University of West Bohemia
    Univerzitni 8, 306 14 Plzen, Czech Republic
    Phone: +420 730 851 103
    Fax: +420 377 632 402 (mark the material with letters 'TSD')
    E-mail: tsd2023@tsdconference.org

The e-mail and the conference phone are looked after by the TSD 2023
conference secretary, Ms Marluce Quaresma (who speaks English, Portuguese,
and Czech).

The official TSD 2023 homepage is: http://www.tsdconference.org/


LOCATION

The city of Plzen (or Pilsen in Germanic languages) is situated in the
heart of West Bohemia at the confluence of four rivers: Uhlava, Uslava,
Radbuza, and Mze. With its approx. 171,000 inhabitants it is the fourth
largest city in the Czech Republic and an important industrial, commercial,
and administrative centre. It is also the capital of the Pilsen Region. In
addition, it has been elected the European Capital of Culture for 2015 by
the Council of the European Union.

The city of Plzen has a convenient location in the centre of West Bohemia.
The place lay at the crossroads of important medieval trade routes and
nowadays it naturally forms an important highway and railroad junction;
thus, it is easily accessible using both individual and public means of
transport.

Plzen lies 85 km (53 mi) south-westwards from the Czech capital Prague,
222 km (138 mi) from the Bavarian capital Munich, 148 km (92 mi) from the
Saxon capital Dresden, and 174 km (108 mi) from the Upper Austrian capital
Linz. The closest international airport is the Vaclav Havel Airport Prague,
which is 75 km (47 mi) away and one can get from there to Plzen very easily
within about two hours by Prague public transport and a train/bus.

 

 


3-3-12(2023-09-07) Scientific day « Modèles de langue pour les domaines de spécialité » (Language models for specialized domains), Nantes, France

As part of the GdR CNRS Traitement automatique des langues (GdR TAL), the LS2N is organizing a scientific day on the theme of 'language models for specialized domains' on September 7, 2023, in Nantes. The day will be built around invited oral presentations, poster and demo presentations (see the call below), as well as a round table.


## Topics

Large language models (LLMs) are now the central component of any Natural Language Processing (NLP) solution. Nevertheless, exploiting them for specialized domains raises many challenges due to the thematic specificity, genre, and linguistic and stylistic characteristics of these domains.

The objective of this day is to bring together researchers and industry practitioners from the French-speaking NLP, information retrieval and speech communities to exchange and take stock of the latest advances and open problems around the use of LLMs for processing specialized domains.

We invite contributions on the following topics (non-exhaustive list):

* Interdisciplinarity of LLMs, advantages and limits of 'ever larger' models compared to specialized models;
* Adaptation of LLMs to a specialized domain (pre-training, fine-tuning, architecture adaptation);
* Instruction and prompt engineering (few-shot learning, zero-shot learning, contrastive learning);
* Injection of external knowledge and model explainability;
* Cross-lingual transfer and multimodal approaches;
* Taking into account the linguistic specificities of the domain (style, discourse, etc.);
* Taking into account the quantity and nature of the available resources (corpora, knowledge bases) for the domain or neighbouring domains;
* Recent advances in the medical, scientific, legal, financial and other domains;
* Sub-domains, bias, diversity and inclusion in LLMs;
* Legal and ethical aspects of these models.


## Invited speakers

The invited speakers will be announced later.


## Call for contributions (posters, demos)

As part of this day, we invite researchers working on these topics, in an academic or industrial setting, to present their work (demo or poster), even if already published, in order to exchange with colleagues in the field. To do so, simply submit an abstract of at most one page, and/or the poster if it already exists, and/or the article describing the work if already published, in French or in English.

* Submission of abstracts/posters/articles: on a rolling basis, and no later than July 7, 2023
* Notification to authors: at most one week after receipt of the proposal

Submission site: https://gdr-tal-nantes.sciencesconf.org/submission/submit


## Registration

Free but mandatory via https://gdr-tal-nantes.sciencesconf.org/registration, before July 7, 2023.


## Contact

For any questions, please contact Solen Quiniou and Nicolas Hernandez (firstname.lastname@univ-nantes.fr).


3-3-13(2023-09-07?) 16th Workshop on Building and Using Comparable Corpora (BUCC2023), Varna, Bulgaria


16th Workshop on Building and Using Comparable Corpora (BUCC)
with Shared Task on Multilingual Terminology Extraction
from Comparable Specialized Corpora

Co-located with RANLP 2023

September 7 or 8, 2023

Workshop website: https://comparable.limsi.fr/bucc2023/

Shared task website: https://comparable.limsi.fr/bucc2023/bucc2023-task.html

RANLP website: http://ranlp.org/ranlp2023/

Workshop proceedings to be published in ACL Anthology

Invited speaker: Sida I. Wang, Meta AI (FAIR)

**************************************************************

MOTIVATION

In the language engineering and the linguistics communities, research in
comparable corpora has been motivated by two main reasons. In language
engineering, on the one hand, it is chiefly motivated by the need to use
comparable corpora as training data for statistical NLP applications
such as statistical and neural machine translation or cross-lingual
retrieval. In linguistics, on the other hand, comparable corpora are of
interest because they enable cross-language discoveries and comparisons.
It is generally accepted in both communities that comparable corpora
consist of documents that are comparable in content and form in various
degrees and dimensions across several languages. Parallel corpora are at
one end of this spectrum, unrelated corpora at the other.

Comparable corpora have been used in a range of applications, including
Information Retrieval, Machine Translation, Cross-lingual text
classification, etc. The linguistic definitions and observations
related to comparable corpora can improve methods to mine such corpora
for applications of statistical NLP, for example to extract parallel
corpora from comparable corpora for neural MT. As such, it is of great
interest to bring together builders and users of such corpora.
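
As a rough, hedged illustration of one such mining step, the sketch below scores
candidate sentence pairs from two small comparable corpora with multilingual sentence
embeddings and keeps mutual nearest neighbours; it assumes the third-party
sentence-transformers package and the LaBSE checkpoint are available, and the example
sentences are placeholders.

```python
# Illustrative sketch: mine candidate parallel sentence pairs from two
# comparable corpora with multilingual sentence embeddings.
# Assumes: pip install sentence-transformers numpy
import numpy as np
from sentence_transformers import SentenceTransformer

english = ["The patient was discharged after two days.",
           "Prices rose sharply in the last quarter."]
german = ["Der Patient wurde nach zwei Tagen entlassen.",
          "Das Wetter war heute ausgesprochen mild."]

model = SentenceTransformer("sentence-transformers/LaBSE")
emb_en = model.encode(english)   # shape: (n_en, dim)
emb_de = model.encode(german)    # shape: (n_de, dim)

# L2-normalise so the dot product equals cosine similarity.
emb_en = emb_en / np.linalg.norm(emb_en, axis=1, keepdims=True)
emb_de = emb_de / np.linalg.norm(emb_de, axis=1, keepdims=True)
sims = emb_en @ emb_de.T

# Keep mutual nearest neighbours above a similarity threshold as candidates.
for i, row in enumerate(sims):
    j = int(np.argmax(row))
    if int(np.argmax(sims[:, j])) == i and row[j] > 0.7:
        print(f"{row[j]:.2f}\t{english[i]}\t{german[j]}")
```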


TOPICS

We solicit contributions on all topics related to comparable (and
parallel) corpora, including but not limited to the following:

Building Comparable Corpora:

* Automatic and semi-automatic methods
* Methods to mine parallel and non-parallel corpora from the web
* Tools and criteria to evaluate the comparability of corpora
* Parallel vs non-parallel corpora, monolingual corpora
* Rare and minority languages, across language families
* Multi-media/multi-modal comparable corpora

Applications of comparable corpora:

* Human translation
* Language learning
* Cross-language information retrieval & document categorization
* Bilingual and multilingual projections
* (Unsupervised) Machine translation
* Writing assistance
* Machine learning techniques using comparable corpora

Mining from Comparable Corpora:

* Cross-language distributional semantics, word embeddings and
pre-trained multilingual transformer models
* Extraction of parallel segments or paraphrases from comparable corpora
* Methods to derive parallel from non-parallel corpora (e.g. to provide
for low-resource languages in neural machine translation)
* Extraction of bilingual and multilingual translations of single words,
multi-word expressions, proper names, named entities, sentences,
paraphrases etc. from comparable corpora
* Induction of morphological, grammatical, and translation rules from
comparable corpora
* Induction of multilingual word classes from comparable corpora

Comparable Corpora in the Humanities:

* Comparing linguistic phenomena across languages in contrastive linguistics
* Analyzing properties of translated language in translation studies
* Studying language change over time in diachronic linguistics
* Assigning texts to authors via authors' corpora in forensic linguistics
* Comparing rhetorical features in discourse analysis
* Studying cultural differences in sociolinguistics
* Analyzing language universals in typological research


IMPORTANT DATES

July 18, 2023: Paper submission deadline
July 31, 2023: Notification of acceptance
August 25, 2023: Camera ready final papers
September 7 or 8, 2023: Workshop date

For updates see the workshop website at
https://comparable.limsi.fr/bucc2023/


PRACTICAL INFORMATION

Workshop registration is via the main conference registration site,
see http://ranlp.org/ranlp2023/index.php/fees-registration/

The workshop proceedings will be published in the ACL Anthology.


SUBMISSION GUIDELINES

Please follow the style sheet and templates (for LaTeX, Overleaf and
MS-Word) provided for the main conference at
http://ranlp.org/ranlp2023/index.php/submissions/
Papers should be submitted as a PDF file using the START conference
manager at https://softconf.com/ranlp23/BUCC/
Submissions must describe original and unpublished work and range
from 4 to 8 pages plus unlimited references.
Reviewing will be double blind, so the papers should not reveal the
authors' identity. Accepted papers will be published in the workshop
proceedings, which will be included in the ACL Anthology.

Double submission policy: Parallel submission to other meetings or
publications is possible but must be immediately (i.e. as soon as known
to the authors) notified to the workshop organizers by e-mail.

For further information and updates see the BUCC 2023 website:
https://comparable.limsi.fr/bucc2023/


BUCC 2023 SHARED TASK
Bilingual Term Alignment in Comparable Specialized Corpora

The BUCC 2023 shared task is on multilingual terminology alignment in
comparable corpora. Many research groups are working on this problem
using a wide variety of approaches. However, as there is no standard way
to measure the performance of the systems, the published results are not
comparable and the pros and cons of the various approaches are not
clear. The shared task aims at solving these problems by organizing a
fair comparison of systems. This is accomplished by providing corpora
and evaluation datasets for a number of language pairs and domains.

Moreover, the importance of dealing with multi-word expressions in
Natural Language Processing applications has been recognized for a long
time. In particular, multi-word expressions pose serious challenges for
machine translation systems because of their syntactic and semantic
properties. Furthermore, multi-word expressions tend to be more
frequent in domain-specific text, hence the need to handle them in tasks
with specialized-domain corpora.

Through the 2023 BUCC shared task, we seek to evaluate methods that
detect pairs of terms that are translations of each other in two
comparable corpora, with an emphasis on multi-word terms in specialized
domains.
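
As a very rough sketch of one family of approaches to this task, the snippet below
proposes term translation pairs by mutual nearest-neighbour matching over term
embeddings; the term lists are placeholders, and the random vectors merely stand in for
embeddings produced by whatever bilingual representation a participant prefers.

```python
# Toy sketch: propose bilingual term pairs by mutual nearest-neighbour
# matching over pre-computed term embeddings (numpy only).
import numpy as np

src_terms = ["heart failure", "blood pressure"]                 # source multi-word terms
tgt_terms = ["insuffisance cardiaque", "pression artérielle", "globule rouge"]

# Stand-in vectors; in practice these come from a bilingual embedding model.
rng = np.random.default_rng(0)
src_vecs = rng.normal(size=(len(src_terms), 4))
tgt_vecs = rng.normal(size=(len(tgt_terms), 4))

def normalise(m):
    return m / np.linalg.norm(m, axis=1, keepdims=True)

sims = normalise(src_vecs) @ normalise(tgt_vecs).T              # cosine similarities

for i, row in enumerate(sims):
    j = int(np.argmax(row))
    if int(np.argmax(sims[:, j])) == i:                          # mutual nearest neighbours
        print(f"{src_terms[i]} <-> {tgt_terms[j]}  ({row[j]:.2f})")
```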

For the schedule and further details see the shared task website at
https://comparable.limsi.fr/bucc2023/bucc2023-task.html


WORKSHOP ORGANIZERS

* Reinhard Rapp (University of Mainz and Magdeburg-Stendal University of
Applied Sciences, Germany)
* Pierre Zweigenbaum (Université Paris-Saclay, CNRS, LISN, Orsay, France)
* Serge Sharoff (University of Leeds, United Kingdom)

Contact workshop: reinhardrapp (at) gmx (dot) de
Contact shared task: pz (at) lisn (dot) fr


PROGRAMME COMMITTEE

* Ebrahim Ansari (Institute for Advanced Studies in Basic Sciences, Iran)
* Thierry Etchegoyhen (Vicomtech, Spain)
* Philippe Langlais (Université de Montréal, Canada)
* Yves Lepage (Waseda University, Japan)
* Shervin Malmasi (Amazon, USA)
* Emmanuel Morin (Université de Nantes, France)
* Dragos Stefan Munteanu (RWS, USA)
* Reinhard Rapp (University of Mainz and Magdeburg-Stendal University of
Applied Sciences, Germany)
* Nasredine Semmar (CEA LIST, Paris, France)
* Serge Sharoff (University of Leeds, UK)
* Richard Sproat (OGI School of Science & Technology, USA)
* Tim Van de Cruys (KU Leuven, Belgium)
* Pierre Zweigenbaum (Université Paris-Saclay, CNRS, LISN, Orsay, France)

Back  Top

3-3-14(2023-09-10) Call for Demos @ Affective Computing and Intelligent Interaction Conference (ACII 2023),MIT, MA, USA

 

We are inviting you to submit your papers to the demo track of the annual gathering of the Association for the Advancement Of Affective Computing’s (AAAC) 2023 Affective Computing and Intelligent Interaction Conference (ACII 2023). It is the perfect opportunity to showcase demonstrations of recent advancements in this space. Technologies, multimodal products, and prototypes are all welcome for presentation, to engage with an international community of experts from various corners of social science, computer science, and data science. 


Demo presenters will benefit from heightened exposure for their work and from feedback from scientists and experts in the field. We encourage creative, artistic and thought-provoking experiences, as well as research prototypes and commercial products. Demos are also an excellent opportunity to find beta testers and other collaborators. A two-page extended abstract is required and will be published in the conference proceedings for all accepted demos.


Website: https://acii-conf.net/2023/calls/demos/

Affective Computing aims at the study and development of systems and devices that use emotion, in particular in human computer and human robot interaction. It is an interdisciplinary field spanning computer science, psychology, and cognitive science.
Location: MIT Media Lab, Cambridge, MA, USA

Sep 10 - Sep 13, 2023


Important Dates

Submission Deadline:  30th June, 2023

Notification of Acceptance: 11th July, 2023

Camera Ready Submissions: 1st August, 2023

Conference Dates: 10-13th September 2023


Requirements

Submissions of demonstrations of research prototypes and innovative commercial products related to applications of affective computing and intelligent interaction are welcome. 


Submission Guidelines

The submission materials should be combined into a single document including the following:


  1. A two-page (+ one additional page for references) extended abstract to be published in the ACII 2023 proceedings in the conference paper format (LaTeX/Word templates). The abstract should include (1) a brief background introduction and description of the system, (2) a summary of the technical contributions of the system, and (3) results of a formal evaluation or an experiment to be conducted during the demo (if applicable).

  2. A URL link to a video recording demonstrating how the system works.


The demo paper submission process will be handled through the EasyChair system. Please note that demo presenters must register for the conference.


Evaluation Criteria

Each demo paper will be reviewed and selected based on its scientific contribution, originality and innovation, social impact, potential for widespread use, and relevance to topics of ACII. Demo papers should include detailed technical explanations of the methods used. Similar demos will be grouped together to maximize the visibility of the presenters and their technologies.


Contacts:

Please direct all questions to Prasanth Murali (murali.pr@northeastern.edu) or Daniel McDuff (dmcduff@google.com).

Back  Top

3-3-15(2023-09-10) Cfp Affective Computing and Intelligent Interaction (ACII) Conference 2023, Cambridge, MA, USA


 

The Association for the Advancement of Affective Computing (AAAC) invites you to join us at our 11th International Conference on Affective Computing and Intelligent Interaction (ACII), which will be held in Cambridge, Massachusetts, USA, on September 10th – 13th, 2023. 

The Conference series on Affective Computing and Intelligent Interaction is the premier international venue for interdisciplinary research on the design of systems that can recognize, interpret, and simulate human emotions and, more generally, affective phenomena. All accepted papers are expected to be included in IEEE Xplore (conditional on the approval by IEEE Computer Society) and indexed by EI. A selection of the best articles at ACII 2023 will be invited to submit extended versions to the IEEE Transactions on Affective Computing.

The theme of ACII 2023 is “Affective Computing: Context and Multimodality”. Fully understanding, predicting, and generating affective processes undoubtedly requires the careful integration of multiple contextual factors (e.g., gender, personality, relationships, goals, environment, situation, and culture), information modalities (e.g., audio, images, text, touch, and smells) and evaluation in ecological environments. Thus, ACII 2023 especially welcomes submitted research that assesses and advances Affective Computing’s ability to do this integration.

Topics of interest include, but are not limited to:

Recognition and Synthesis of Human Affect from ALL Modalities

  • Multimodal Modeling of Cognitive and Affective States

  • Contextualized Modeling of Cognitive and Affective States

  • Facial and Body Gesture Recognition, Modeling and Animation
  • Affective Speech Analysis, Recognition and Synthesis
  • Recognition and Synthesis of Auditory Affect Bursts (Laughter, Cries, etc.)
  • Motion Capture for Affect Recognition
  • Affect Recognition from Alternative Modalities (Physiology, Brain Waves, etc.)
  • Affective Text Processing and Sentiment Analysis
  • Multimodal Data Fusion for Affect Recognition
  • Synthesis of Multimodal Affective Behavior
  • Summarisation of Affective Behavior


Affective Science using Affective Computing Tools

  • Studies of affective behavior perception using computational tools

  • Studies of affective behavior production using computational tools

  • Studies of affect in medical/clinical settings using computational tools

  • Studies of affect in context using computational tools


Psychology & Cognition of Affect in Designing Computational Systems

  • Computational Models of Affective Processes

  • Issues in Psychology & Cognition of Affect in Affective Computing Systems

  • Cultural Differences in Affective Design and Interaction 

 

Affective Interfaces

  • Interfaces for Monitoring and Improving Mental and Physical Well-Being

  • Design of Affective Loop and Affective Dialogue Systems

  • Human-Centred Human-Behaviour-Adaptive Interfaces

  • Interfaces for Attentive & Intelligent Environments

  • Mobile, Tangible and Virtual/Augmented Multimodal Proactive Interfaces

  • Distributed/Collaborative Multimodal Proactive Interfaces

  • Tools and System Design Issues for Building Affective and Proactive Interfaces

  • Evaluation of Affective, Behavioural, and Proactive Interfaces

 Affective, Social and Inclusive Robotics and Virtual Agents

  • Artificial Agents for Supporting Mental and Physical Well-Being
  • Emotion in Robot and Virtual Agent Cognition and Action
  • Embodied Emotion
  • Biologically-Inspired Architectures for Affective and Social Robotics
  • Developmental and Evolutionary Models for Affective and Social Robotics
  • Models of Emotion for Embodied Conversational Agents
  • Personality in Embodied Conversational Agents
  • Memory, Reasoning, and Learning in Affective Conversational Agents


Affect and Group Emotions

  • Analyzing and modeling groups taking into account emergent states and/or emotions

  • Integration of artificial agents (robots, virtual characters) in the group life by leveraging its affective loop: interaction paradigms, strategies, modalities, adaptation

  • Collaborative affective interfaces (e.g., for inclusion, for education, for games and entertainment)

Open Resources for Affective Computing

  • Shared Datasets for Affective Computing

  • Benchmarks for Affective Computing

  • Open-source Software/Tools for Affective Computing

 

Fairness, Accountability, Privacy, Transparency and Ethics in Affective Computing   

  • Bias, imbalance and inequalities in data and modeling approaches in the context of Affective Computing

  • Bias mitigation in the context of Affective Computing

  • Explainability and Transparency in the context of Affective Computing

  • Privacy-preserving affect sensing and modeling

  • Ethical aspects in the context of Affective Computing


Applications

  • Health and well-being
  • Education
  • Entertainment
  • Consumer Products
  • User Experience

Important dates

Main track submissions: 14 April 2023

Decision notification to authors: 2 June 2023

Camera ready submission for main track: 16 June 2023

 

The remaining important dates can be found at the ACII website.

 

We hope to see you at ACII 2023!

ACII2023 Organizers

AFFECTIVE COMPUTING & INTELLIGENT INTERACTION

Back  Top

3-3-16(2023-09-11) 24th Annual Meeting of SIGDIAL/INLG, Prague, Czech Republic

The 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL) and the 16th International Natural Language Generation Conference (INLG) will be held jointly in Prague on September 11-15, 2023.

 

The SIGDIAL venue provides a regular forum for the presentation of cutting edge research in dialogue and discourse to both academic and industry researchers, continuing a series of 23 successful previous meetings. The conference is sponsored by the SIGDIAL organization - the Special Interest Group in discourse and dialogue for ACL and ISCA.

 

Topics of Interest

 

We welcome formal, corpus-based, implementation, experimental, or analytical work on discourse and dialogue including, but not restricted to, the following themes:

 

  *   Discourse Processing: Rhetorical and coherence relations, discourse parsing and discourse connectives. Reference resolution. Event representation and causality in narrative. Argument mining. Quality and style in text. Cross-lingual discourse analysis. Discourse issues in applications such as machine translation, text summarization, essay grading, question answering and information retrieval. Discourse issues in text generated by large language models.

  *   Dialogue Systems: Task oriented and open domain spoken, multi-modal, embedded, situated, and text-based dialogue systems, their components, evaluation and applications. Knowledge representation and extraction for dialogue. State representation, tracking and policy learning. Social and emotional intelligence. Dialogue issues in virtual reality and human-robot interaction. Entrainment, alignment and priming. Generation for dialogue. Style, voice, and personality. Safety and ethics issues in Dialogue.

  *   Corpora, Tools and Methodology: Corpus-based and experimental work on discourse and dialogue, including supporting topics such as annotation tools and schemes, crowdsourcing, evaluation methodology and corpora.

  *   Pragmatic and Semantic Modeling: Pragmatics and semantics of conversations (i.e., beyond a single sentence), e.g., rational speech act, conversation acts, intentions, conversational implicature, presuppositions.

  *   Applications of Dialogue and Discourse Processing Technology.

 

Submissions

 

The program committee welcomes the submission of long papers, short papers, and demo descriptions. Submitted long papers may be accepted  for oral or for poster presentation. Accepted short papers will be presented as posters.



  *   Long paper submissions must describe substantial, original, completed and unpublished work. Wherever appropriate, concrete evaluation and analysis should be included. Long papers must be no longer than 8 pages, including title, text, figures and tables. An unlimited number of pages is allowed for references. Two additional pages are allowed for appendices containing sample discourses/dialogues and algorithms, and an extra page is allowed in the final version to address reviewers’ comments.

  *   Short paper submissions must describe original and unpublished work. Please note that a short paper is not a shortened long paper. Instead short papers should have a point that can be made in a few pages, such as a small, focused contribution; a negative result; or an interesting application nugget. Short papers should be no longer than 4 pages including title, text, figures and tables. An unlimited number of pages is allowed for references. One additional page is allowed for sample discourses/dialogues and algorithms, and an extra page is allowed in the final version to address reviewers’ comments.

  *   Demo descriptions should be no longer than 4 pages including title, text, examples, figures, tables and references. A separate one-page document should be provided to the program co-chairs for demo descriptions, specifying furniture and equipment needed for the demo.

 

Authors are encouraged to also submit additional accompanying materials, such as corpora (or corpus examples), demo code, videos and sound files.

 

Multiple Submissions

 

SIGDIAL 2023 cannot accept work for publication or presentation that will be (or has been) published elsewhere, or that has been or will be submitted to other meetings or publications whose review periods overlap with that of SIGDIAL. Overlap with the SIGDIAL workshop submissions is permitted for non-archived workshop proceedings. Any questions regarding submissions can be sent to program-chairs [at] sigdial.org.

 

Blind Review

 

Building on previous years’ move to anonymous long and short paper submissions, SIGDIAL  2023 will follow the ACL policies for preserving the integrity of double blind review (see author guidelines). Unlike long and short papers, demo descriptions will not be anonymous. Demo descriptions should include the authors’ names and affiliations, and self-references are allowed.

 

Submission Format

 

All long, short, and demonstration submissions must follow the two-column ACL format, which is available as an Overleaf template and also downloadable directly (LaTeX and Word).

 

Submissions must conform to the official ACL style guidelines, which are contained in these templates. Submissions must be electronic, in PDF format.

 

Submission Deadline

 

SIGDIAL will accept regular submissions through the Softconf/START system, as well as commitment of already reviewed papers through the ACL Rolling Review (ARR) system.

 

Regular submission

 

Authors have to fill in the submission form in the Softconf/START system and upload an initial pdf of their papers before May 15, 2023 (23:59 GMT-11).  Details and the submission link will be posted on the conference website.

 

Submission via ACL Rolling Review (ARR)

 

Please refer to the ARR Call for Papers for detailed information about submission guidelines to ARR. The commitment deadline for authors to submit their reviewed papers, reviews, and meta-review to SIGDIAL 2023 is June 19, 2023. Note that the paper needs to be fully reviewed by ARR in order to make a commitment, thus the latest date for ARR submission will be April 15, 2023.

 

Mentoring

 

Acceptable submissions that require language (English) or organizational assistance will be flagged for mentoring, and accepted with a recommendation to revise with the help of a mentor. An experienced mentor who has previously published in the SIGDIAL venue will then help the authors of these flagged papers prepare their submissions for publication.

 

Best Paper Awards

 

In order to recognize significant advancements in dialogue/discourse science and technology, SIGDIAL 2023 will include best paper awards. All papers at the conference are eligible for the best paper awards. A selection committee consisting of prominent researchers in the fields of interest will select the recipients of the awards.




SIGDIAL 2023 Program Committee

Svetlana Stoyanchev and Shafiq Rayhan Joty

Conference Website: https://2023.sigdial.org/

Back  Top

3-3-17(2023-09-11) Call for Workshops - Affective Computing and Intelligent Interaction (ACII) 2023, MIT MediaLab, Cambridge, MA, USA

 

The organizing committee of Affective Computing and Intelligent Interaction (ACII) 2023 is now inviting proposals for workshops and challenges. The biennial conference is the flagship conference for research in Affective Computing, covering topics related to the study of intelligent systems that read, express, or otherwise use emotion.

Workshops at ACII allow a group of scientists an opportunity to get together to network and discuss a specific topic in detail. Examples of past workshops include: Affective Computing and Intelligent Interaction, Applied Multimodal Affect Recognition, Functions of Emotions for Socially Interactive Agents, Emotions in Games, Affective Brain-Computer Interfaces, Affective Touch, Group Emotions, and Affective Computing for Affective Disorders. We want to encourage workshop proposals that draw together interdisciplinary perspectives on topics in affective computing. We also welcome Challenge-type workshops, where workshop participants would work on a shared task. This year, given our location in Boston and proximity to leading medical institutions, we particularly invite workshops that touch on health and wellness, spanning theoretical topics on affect in mental health to fielded medical applications of affective computing.  

Workshops should focus on a central question or topic. Workshop organizers will be responsible for soliciting and reviewing papers, and putting together an exciting schedule, including time for networking and discussion. Workshop organizers are also expected to present a short summary of the workshop during the main conference.

Example workshops from ACII2022 are available at: https://acii-conf.net/2022/workshops/ 
The workshop proposals website: https://acii-conf.net/2023/calls/workshops/ 
ACII 2023 website: https://acii-conf.net/2023/  

What’s next?
Send your workshop proposal to both workshop chairs.  Please include the following (max three pages):  
  1. Title.
  2. Organizers and affiliations, and Workshop contact person
  3. Extended abstract making the scientific case for the workshop (why, why now, why at ACII, expected outcomes, impact)
  4. Advertisement (e.g. lists, conferences etc., and website hosting (where)).
  5. List of tentative and confirmed PC members (mention this status per PC member)
  6. Expected number of submissions, planned acceptance rate, and paper length, review process.
  7. Tentative/confirmed keynote speaker(s).
  8. Length of the workshop (day or half-day).
  9. List of related and previous workshops/conferences
  10. Your publication plan (e.g., Special Issue, whether contact was made with the publisher already)
Process
Proposals will be reviewed in a confidential manner and acceptance will be decided by the ACII 2023 Workshop Chairs and ACII 2023 Senior Program Committee. Decisions about acceptance are final.

Important dates
February 17, 2023: Workshop proposal submission deadline.
Refer to https://acii-conf.net/2023/important-dates/ for other dates.

Workshop Chairs
Timothy Bickmore, Northeastern University, t.bickmore@northeastern.edu
Nutchanon Yongsatianchot, Northeastern University, n.yongsatianchot@northeastern.edu
Back  Top

3-3-18(2023-09-11) Cf Workshops and Tutorials/24th Annual Meeting of SIGDIAL/INLG, Prague, Czech Republic

The 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDial 2023) and the 16th International Natural Language Generation Conference (INLG 2023) will be held jointly in Prague on September 11-15, 2023. We now welcome the submission of workshop and tutorial proposals, which will take place on September 11 and 12 before the main conference. 


We encourage submissions of proposals on any topic of interest to the discourse, dialogue, and natural language generation communities. This program is intended to offer new perspectives and bring together researchers working on related topics. We especially encourage the sessions that would bring together researchers from SIGDial and INLG communities.  


Topics of interest include all aspects related to Dialogue, Discourse and Generation including (but not limited to) annotation and resources, evaluation, large language models, adversarial and RL methods, explainable/ethical AI, summarization, interactive/multimodal/situated/incremental systems, data/knowledge/vision-to-text, and applications of dialogue and NLG.


The proposed workshops/tutorials may include a poster session, a panel session, an oral presentation session, a hackathon, a generation/dialogue challenge, or a combination of the above. Workshop organizers will be responsible for soliciting, reviewing, and selecting papers or abstracts. The workshop papers will be published in a separate proceedings. Workshops may, at the discretion of the SIGDial/INLG organizers, be held as parallel sessions. 


Submissions


Workshop and Tutorial proposals should be 2-4 pages containing: title; type (workshop or tutorial); a summary of the topic, motivating theoretical interest and/or application context; a list of organizers and sponsors; duration (half-day or full-day); and the requested session format(s): poster/panel/oral/hackathon. Please include the number of expected attendees. The workshop proposals will be reviewed jointly by the general chair and program co-chairs.


Links


Those wishing to propose a workshop or tutorial may want to look at some of the sessions organized at recent SIGDial meetings:


Natural Language in Human Robot Interaction (NLiHRI 2022) 

NLG4health 2022

SummDial 2021

SafeConvAI 2021 

RoboDIAL 2022

BigScience Workshop: LLMs 2021

Interactive Natural Language Technology for Explainable Artificial Intelligence 2019 

https://www.inlg2019.com/workshop

https://www.sigdial.org/files/workshops/conference18/sessions.htm

https://inlg2018.uvt.nl/workshops/



Important Dates


Mar 24, 2023: Workshop/Tutorial Proposal Submission Deadline

April 14, 2023: Workshops/Tutorials Notifications


The  proposals should be sent to conference@sigdial.org
Back  Top

3-3-19(2023-09-19) CfP ACM IVA 2023 @ Würzburg, Germany.

CALL FOR PAPERS  --  ACM IVA 2023

 

The annual ACM Conference on Intelligent Virtual Agents (IVA) is the premier international event for interdisciplinary research on the development, application, and evaluation of Intelligent Virtual Agents with a focus on the ability for social interaction, communication or cooperation. Such artificial agents can be embodied graphically (e.g. virtual characters, embodied conversational agents) or physically (e.g. social or collaborative robots). They are capable of real-time perception, cognition, emotion and action allowing them to participate in dynamic social environments. This includes human-like interaction qualities such as multimodal communication using facial expressions, speech, and gesture, conversational interaction, socially assistive and affective interaction, interactive task-oriented cooperation, or social behaviour simulation. IVAs are highly relevant and widely applied in many important domains including health, tutoring, training, games, or assisted living.

 

We invite submissions of research on a broad range of topics, including but not limited to: theoretical foundations of intelligent virtual agents, agent and interactive behaviour modelling, evaluation, agents in simulations, games, and other applications. Please see the detailed list of topics below.

 

VENUE

======

IVA 2023 will take place in Würzburg, Germany. Würzburg is a vibrant town located by the river Main in northern Bavaria, between Frankfurt and Nuremberg. The mix of stunning historical architecture and the young population is what makes the atmosphere so unique, including 35,000 students from three different universities. The mild and sunny climate is ideal to enjoy the many activities Würzburg has to offer: visiting a beer garden next to the river, attending a sporting or cultural event or taking a stroll through one of the parks.

 

IVA is targeted to be an in-person conference. In case of extraordinary circumstances, such as visa problems or health issues, video presentation will be possible. However, there is no digital or hybrid conference system planned, thus it is not possible to attend this year’s IVA conference remotely.

 

IMPORTANT DATES

===================

Abstract submission: April 14, 2023

Paper submission: April 18, 2023

Review notification / start of rebuttal: May 31, 2023

Notification of acceptance: June 23, 2023

Camera ready deadline: July 18, 2023

Conference: September 19-22, 2023

 

All deadlines are anywhere on earth (UTC−12).

 

SPECIAL TOPIC

===================

This year’s conference will highlight a special topic on “IVAs in future mixed realities”, e.g., in social VR and potential incarnations of a Metaverse. Immersive and potentially distributed artificial virtual worlds provide new forms of full-size embodied human-human interaction via avatars of arbitrary looks, enabling interesting intra- and interpersonal effects. They also enable hybrid avatar-agent interactions between humans and A.I.s, unlocking the full potential of non-verbal behavior in digital face-to-face encounters, significantly enhancing the design space for IVAs to assist, guide, help but also to persuade and affect interacting users. We specifically welcome all kinds of novel research on technological, psychological, and sociological determinants of such immersive digital avatar-agent encounters.

 

TYPES OF SUBMISSION

===================

- Full Papers (7 pages + 1 additional page for references):

Full papers should present significant, novel, and substantial work of high quality.

 

- Extended Abstracts (2 pages + 1 additional page for references)

Extended abstracts may contain early results and work in progress.

 

- Demos (2 pages + 1 additional page for references +1 one page with demo requirements)

Demos submissions focus on implemented systems and should contain a link to a video of the system with a maximum length of 5 minutes.

 

All submissions will be double-blind peer-reviewed by a group of external expert reviewers. All accepted submissions will be published in the ACM proceedings.

 

Accepted full papers will be presented either in oral sessions or as posters during the conference (depending on the nature of the contribution), extended abstracts will be presented as posters, and demos will be showcased in dedicated sessions during the conference. For each accepted contribution, at least one of the authors must register for the conference.

 

IVA 2023 will also feature workshops and a doctoral consortium. Please visit the website (https://iva.acm.org/2023) for more details and updates.

 

TRACKS

===================

For full paper submissions, IVA will have different paper tracks with different review criteria. Authors need to indicate which one of the following tracks they want to submit their paper to:

 

1. Empirical Studies

  • criteria: methodology, theoretical foundation, originality of results, etc.

2. Computational Models and Methods

  • criteria: technical soundness, novelty of the model or approach, proof of concept, etc.

3. Operational Systems and Applications

  • criteria: innovation of the application, societal relevance, evaluation of effects, etc.

 

 

SCOPE AND LIST OF TOPICS

========================

IVA invites submissions on a broad range of topics, including but not limited to:

 

AGENT DESIGN AND MODELING:

- Cognition (e.g. task, social, other)

- Emotion, personality and cultural differences

- Socially communicative behaviour (e.g., of emotions, personality, relationship)

- Conversational and dialog behavior

- Social perception and understanding of other’s states or traits

- Machine learning approaches to agent modeling

- Adaptive behavior and interaction dynamics

- Models informed by theoretical and empirical research from psychology

 

MULTIMODAL INTERACTION:

- Verbal and nonverbal behavior coordination (synthesis)

- Multimodal/social behavior processing

- Face-to-face communication skills

- Interaction qualities (engagement, rapport, etc.)

- Managing co-presence and interpersonal relation

- Multi-party interaction

- Data-driven modeling

 

SOCIALLY INTERACTIVE AGENT ARCHITECTURES:

- Design criteria and design methodologies

- Engineering of real-time human-agent interaction

- Standards / measures to support interoperability

- Portability and reuse

- Specialized tools, toolkits, and toolchains

 

EVALUATION METHODS AND EMPIRICAL STUDIES:

- Evaluation methodologies and user studies

- Metrics and measures

- Ethical considerations and societal impact

- Applicable lessons across fields (e.g. between robotics and virtual agents)

- Social agents as a means to study and model human behavior

 

APPLICATIONS:

- Applications in education, skills training, health, counseling, games, art, etc.

- Virtual agents in games and simulations

- Social agents as tools in psychology, neuroscience, social simulation, etc.

- Migration between platforms

 

INSTRUCTIONS FOR AUTHORS

=========================

Paper submissions should be anonymous and prepared in the “ACM Standard” format, more specifically the “SigConf” format. Please consult https://www.acm.org/publications/proceedings-template for the LaTeX template, the Word interim template, or a connection to the Overleaf platform.

All papers need to be submitted in PDF-format.

 

By submitting your article to an ACM Publication, you are hereby acknowledging that you and your co-authors are subject to all ACM Publications Policies, including Publications Policy on Research Involving Human Participants and Subjects. Alleged violations of this policy or any ACM Publications Policy will be investigated by ACM and may result in a full retraction of your paper, in addition to other potential penalties, as per ACM Publications Policy.

 

Please ensure that you and your co-authors obtain an ORCID ID, so you can complete the publishing process for your accepted paper.  ACM has been involved in ORCID from the start and we have recently made a commitment to collect ORCID IDs from all of our published authors.  The collection process has started and will roll out as a requirement throughout 2022.  We are committed to improve author discoverability, ensure proper attribution and contribute to ongoing community efforts around name normalization; your ORCID ID will help in these efforts.

 

The official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks prior to the first day of the conference. The official publication date affects the deadline for any patent filings related to published work.

 

This event is sponsored by SIGAI.

 

Please visit the conference website for detailed information on how to submit your paper.

 

CONFERENCE WEBSITE

===================

https://iva.acm.org/2023/

 

Back  Top

3-3-20(2023-09-20) CBMI 2023, Orléans, France

Call for SS Proposals at CBMI’2023
==================================

CBMI’2023 (http://cbmi2023.org/) is calling for high-quality Special Session proposals addressing innovative research in content-based multimedia indexing and its related broad fields. The main scope of the conference is the analysis and understanding of multimedia content, including

  • Multimedia information retrieval (image, audio, video, text)
  • Mobile media retrieval
  • Event-based media retrieval
  • Affective/emotional interaction or interfaces for multimedia retrieval
  • Multimedia data mining and analytics
  • Multimedia retrieval for multimodal analytics and visualization
  • Multimedia recommendation
  • Multimedia verification (e.g., multimodal fact-checking, deep fake analysis)
  • Large-scale multimedia database management
  • Summarization, browsing, and organization of multimedia content
  • Evaluation and benchmarking of multimedia retrieval systems
  • Explanations of decisions of AI-in Multimedia
  • Application domains: health, sustainable cities, ecology, culture…

and all this in the era of Artificial Intelligence for analysis and indexing of multimedia and multimodal information.

A special oral session will contain oral presentations of long research papers; short papers will be presented as posters during the poster sessions, with a special mention of the SS.

 

- Long research papers should present complete work with evaluations on topics related to the conference.

- Short research papers should present preliminary results or more focused contributions.

An SS proposal has to contain:

- Name, title, affiliation and a short bio of the SS chairs;

- The rationale;

- A list of at least 5 potential contributions with a provisional title, authors and affiliations.

 

 

The deadline for SS proposals is approaching: 23 January 2023.

 

Please submit your proposals to the SS chairs

jenny.benois-pineau@u-bordeaux.fr

mourad.oussalah@oulu.fi

adel.hafiane@insa-cvl.fr

Back  Top

3-3-21(2023-09-22) Last Call for Papers: Linguistic Insights from and for Multimodal Language Processing @KONVENS 2023, Ingolstadt, Germany

We invite you to submit to the first edition of our workshop on 'Linguistic Insights from
and for Multimodal Language Processing' that will be co-located with KONVENS 2023 in
Ingolstadt.

The deadline is extended by a week until July 14th, 2023. We are looking forward to your
contribution.

Our website is available at https://sites.google.com/view/limo2023/home

Please find the last CfP below.

Best wishes,
LIMO 2023 organizers

------------------------------------------------------
Last Call for Papers: Linguistic Insights from and for Multimodal Language Processing
@KONVENS 2023

Processing multimodal information (like visual representations of the environment,
auditory cues, images, gestures, gaze etc.) and integrating them is a constant and
effortless process in human language processing. Recent progress in the area of language
& vision, large-scale visually grounded language models, and multimodal learning have led
to breakthroughs in challenging multimodal NLP applications like image–text retrieval,
image captioning or visual question answering. Yet, modeling the semantics and pragmatics
of situated language understanding and generation and, generally, language processing
beyond the linguistic context, i.e. in combination with multiple other modalities, is
still one of the biggest challenges in NLP and Computational Linguistics.
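
As a minimal sketch of what zero-shot image–text matching looks like in practice,
assuming the Hugging Face transformers library, the openai/clip-vit-base-patch32
checkpoint, and a local image file (all placeholders for whatever setup a contributor
actually uses):

```python
# Minimal sketch: rank candidate captions for an image with a pretrained
# CLIP model. Assumes: pip install torch transformers pillow
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example.jpg")  # placeholder path
captions = ["a dog playing in a park",
            "a plate of pasta",
            "two people shaking hands"]

inputs = processor(text=captions, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image: similarity of the image to each candidate caption.
probs = outputs.logits_per_image.softmax(dim=-1).squeeze(0)
for caption, p in zip(captions, probs.tolist()):
    print(f"{p:.2f}\t{caption}")
```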

While there have been recent venues and workshops targeting multimodal representation
learning and large-scale Language and Vision models, there is a lack of discussion in the
community that focuses on linguistic multimodal phenomena, domain- and task-specific
analyses of multimodality and, generally, contributions of computational linguistics to
multimodal learning and vice versa.  With this workshop, we aim to bring together
researchers who work on various linguistic aspects of multimodal language processing to
discuss and share the recent advances in this interdisciplinary field.

Topics of interest:
    ▪    New multimodal datasets and training schemes in text and dialogue
    ▪    Multimodal tasks and frameworks in CL
    ▪    Annotation of multimodal datasets
    ▪    Modeling and analysis of linguistic phenomena in multimodal datasets
    ▪    Analysis and discussion of shortcomings of existing multimodal language models
    ▪    Approaches to multimodality in different domains, e.g., documents, social media,
visual dialogue, situated dialogue, etc.
    ▪    Work on cross-modal and inter-modal representations, relationships, and
dependencies
    ▪    Opinion pieces and (theoretical) reflections on multimodality in
NLP/Computational Linguistics


Keynote Speakers
---------------
Sandro Pezzelle, University of Amsterdam
Letitia Parcalabescu, Heidelberg University


Important Dates
---------------
July 14, 2023     – Paper Submission Deadline (Extended)
August 21, 2023     – Notification of Acceptance
September 04, 2023     – Camera-ready Deadline
September 22, 2023     – Workshop Day (one of these days)

* All deadlines are 11:59 PM UTC-12:00 ('anywhere on Earth')

Paper Submission:
-----------------
All paper submissions must use the official KONVENS 2023 style templates.
    ▪    Long papers must not exceed eight (8) pages of content.
    ▪    Short papers and demonstration papers must not exceed four (4) pages of content.
    ▪    Non-archival abstracts must not exceed one (1) page.

Organizers:
---------------
Piush Aggarwal, FernUniversität in Hagen, piush.aggarwal@fernuni-hagen.de
Özge Alaçam, Universität Bielefeld, oezge.alacam@uni-bielefeld.de
Carina Silberer, Universität Stuttgart, carina.silberer@ims.uni-stuttgart.de
Sina Zarrieß, Universität Bielefeld, sina.zarriess@uni-bielefeld.de
Torsten Zesch, FernUniversität in Hagen, torsten.zesch@fernuni-hagen.de


Contact:
---------
Özge Alaçam (oezge.alacam@uni-bielefeld.de), Piush Aggarwal
(piush.aggarwal@fernuni-hagen.de)
Workshop webpage: https://sites.google.com/view/limo2023/home
KONVENS 2023 webpage: https://www.thi.de/konvens-2023/

Back  Top

3-3-22(2023-10-09) Cf Tutorials ICMI 2023, Paris, France
=====================================
* Deadline extended to 22 May 2023 *
=====================================
ICMI 2023 2nd Call for tutorial proposals
https://icmi.acm.org/2023/call-for-tutorials/
25th ACM International Conference on Multimodal Interaction
9-13 October 2023, Paris, France
=====================================
 
ACM ICMI 2023 seeks half-day (3-4 hours) tutorial proposals addressing current and emerging topics within the scope of 'Science of Multimodal Interactions'.  Tutorials are intended to provide a high-quality learning experience to participants with a varied range of backgrounds. It is expected that tutorials are self-contained.
 
Prospective organizers should submit a 4-page (maximum) proposal containing the following information:
 
1. Title
2. Abstract appropriate for possible Web promotion of the Tutorial
3. A short list of the distinctive topics to be addressed
4. Learning objectives (specific and measurable objectives)
5. The targeted audience (student / early stage / advanced researchers, pré-requisite knowledge, field of study)
6. Detailed description of the Tutorial and its relevance to multimodal interaction
7. Outline of the tutorial content with a tentative schedule and its duration
8. Description of the presentation format (number of presenters, interactive sessions, practicals)
9. Accompanying material (repository, references) and equipment, emphasizing any required material from the organization committee (subject to approval)
10. Short biography of the organizers (preferably from multiple institutions) together with their contact information and a list of 1-2 key publications related to the tutorial topic
11. Previous editions: If the tutorial was given before, describe when and where it was given, and if it will be modified for ACM ICMI 2023.
 
Proposals will be evaluated using the following criteria:
 
- Importance of the topic and the relevance to ACM ICMI 2023 and its main theme: 'Science of Multimodal Interactions'
- Presenters' experience
- Adequateness of the presentation format to the topic
- Targeted audience interest and impact
- Accessibility and quality of accompanying materials (open access)
 
Proposals that focus exclusively on the presenters' own work or commercial presentations are not acceptable.
 
Unless explicitly mentioned and agreed by the Tutorial chairs, the tutorial organizers will take care of any specific requirements which are related to the tutorial such as specific handouts, mass storages, rights of distribution (material, handouts, etc.), copyrights, etc.
 
Contact Details
===============
Proposals should be emailed to the ICMI 2023 Tutorial Chairs, Prof. Hatice Gunes and Dr. Guillaume Chanel:  icmi2023-tutorial-chairs@acm.org
 
Prospective organizers are also encouraged to contact the co-chairs if they have any questions.
 
Important Dates
===============
Tutorial Proposal Deadline   May 22, 2023 (extended)
Tutorial Acceptance Notification   June 5, 2023
Camera-ready version of the tutorial abstract July 3, 2023
Tutorial Dates Either 9 or 13 October 2023
Back  Top

3-3-23(2023-10-09) 25th ACM International Conference on Multimodal Interaction (ICMI 2023), Paris, France

25th ACM International Conference on Multimodal Interaction (ICMI 2023)

9-13 October 2023, Paris, France

 

The 25th International Conference on Multimodal Interaction (ICMI 2023) will be held in Paris, France. ICMI is the premier international forum that brings together multimodal artificial intelligence (AI) and social interaction research. Multimodal AI encompasses technical challenges in machine learning and computational modeling such as representations, fusion, data and systems. The study of social interactions englobes both human-human interactions and human-computer interactions. A unique aspect of ICMI is its multidisciplinary nature which values both scientific discoveries and technical modeling achievements, with an eye towards impactful applications for the good of people and society.

 

ICMI 2023 will feature a single-track main conference which includes: keynote speakers, technical full and short papers (including oral and poster presentations), demonstrations, exhibits, doctoral consortium, and late-breaking papers. The conference will also feature tutorials, workshops and grand challenges. The proceedings of all ICMI 2023 papers, including Long and Short Papers, will be published by ACM as part of their series of International Conference Proceedings and Digital Library, and the adjunct proceedings will feature the workshop papers.

 

Novelty will be assessed along two dimensions: scientific novelty and technical novelty. Accepted papers at ICMI 2023 will need to be novel along one of the two dimensions:

  • Scientific Novelty: Papers should bring new scientific knowledge about human social interactions, including human-computer interactions. For example, discovering new behavioral markers that are predictive of mental health or how new behavioral patterns relate to children’s interactions during learning. It is the responsibility of the authors to perform a proper literature review and clearly discuss the novelty in the scientific discoveries made in their paper.
  • Technical Novelty: Papers should propose novelty in their computational approach for recognizing, generating or modeling multimodal data. Examples include: novelty in the learning and prediction algorithms, in the neural architecture, or in the data representation. Novelty can also be associated with new usages of an existing approach.

 

Please see the Submission Guidelines for Authors https://icmi.acm.org/ for detailed submission instructions. Commitment to ethical conduct is required and submissions must adhere to ethical standards in particular when human-derived data are employed. Authors are encouraged to read the ACM Code of Ethics and Professional Conduct (https://ethics.acm.org/).

 

ICMI 2023 conference theme: The theme for this year’s conference is “Science of Multimodal Interactions”. As the community grows, it is important to understand the main scientific pillars involved in deep understanding of multimodal social interactions. As a first step, we want to acknowledge key discoveries and contributions that the ICMI community enabled over the past 20+ years. As a second step, we reflect on the core principles, foundational methodologies and scientific knowledge involved in studying and modeling multimodal interactions. This will help establish a distinctive research identity for the ICMI community while at the same time embracing its multidisciplinary collaborative nature. This research identity and long-term agenda will enable the community to develop future technologies and applications while maintaining commitment to world-class scientific research.

Additional topics of interest include but are not limited to:

  • Affective computing and interaction
  • Cognitive modeling and multimodal interaction
  • Gesture, touch and haptics
  • Healthcare, assistive technologies
  • Human communication dynamics
  • Human-robot/agent multimodal interaction
  • Human-centered A.I. and ethics
  • Interaction with smart environment
  • Machine learning for multimodal interaction
  • Mobile multimodal systems
  • Multimodal behaviour generation
  • Multimodal datasets and validation
  • Multimodal dialogue modeling
  • Multimodal fusion and representation
  • Multimodal interactive applications
  • Novel multimodal datasets
  • Speech behaviours in social interaction
  • System components and multimodal platforms
  • Visual behaviours in social interaction
  • Virtual/augmented reality and multimodal interaction

 

Important Dates

Paper Submission: May 1, 2023 

Rebuttal period: June 26-29, 2023

Paper notification: July 21, 2023

Camera-ready paper: August 14, 2023

Presenting at main conference: October 9-13, 2023

 

Back  Top

3-3-24(2023-10-09) ACM ICMI 2023 2ND CALL FOR BLUE SKY PAPERS, Paris France

===========================================
ACM ICMI 2023 2ND CALL FOR BLUE SKY PAPERS   
===========================================
9-13 October 2023, Paris - France
https://icmi.acm.org/2023/
===========================================

ICMI 2023 is pleased to partner with the Computing Community Consortium (CCC) to continue the Blue Sky Paper track, initialized in 2021 and continued in 2022, that emphasizes innovative, visionary, and high-impact contributions. This track solicits papers relevant to ICMI content that go beyond the usual research paper to present new visions that stimulate the ICMI community to pursue innovative directions. They may challenge existing assumptions and methodologies or propose new applications or theories. The papers are encouraged to present high-risk controversial ideas. Submitted papers are expected to represent deep reflection, argue rigorously, and present ideas from a high-level synthetic viewpoint (e.g., multidisciplinary, based on multiple methodologies).

The review of the submissions will be handled by the Blue Sky Paper Chairs: Carlos Busso (University of Texas At Dallas), Philippe Palanque (University Toulouse III, France), and Björn Schuller (University of Augsburg, Germany). Three winners will be selected for presentation in the Blue Sky Paper track and publication in the conference proceedings. The CCC will sponsor awards to honor the first ($1,000), second ($750), and third ($500) place winners in the form of travel grants. In addition, they will further distribute and publicize the three Blue Sky award papers.

Important Dates
---------------
Paper Submission                June 17th, 2023
Paper notification                July 14th, 2023
Camera-ready paper                August 14th, 2023
Presenting at main conference        October 9-13, 2023

Back  Top

3-3-25(2023-10-09) ACM ICMI 2023 CALL FOR DEMONSTRATIONS AND EXHIBITS, Paris, France
ACM ICMI 2023 CALL FOR DEMONSTRATIONS AND EXHIBITS
9-13 October 2023, Paris - France
https://icmi.acm.org/2023/doctoral-consortium/
========================================================

We invite you to submit your proposals for demonstrations and exhibits to be held during the 25th ACM International Conference on Multimodal Interaction (ICMI 2023), located in Paris, France, October 9-13th 2023. This year’s conference theme is “Science of Multimodal Interactions”.

* Demonstrations and Exhibits
The ICMI 2023 Demonstrations & Exhibits session is intended to provide a forum to showcase innovative implementations, systems and technologies demonstrating new ideas about interactive multimodal interfaces. It can also serve as a platform to introduce commercial products.
Proposals may be of two types: demonstrations or exhibits. The main difference is that demonstrations include a 2-3 page paper in one column, which will be included in the ICMI main proceedings, while the exhibits only need to include a brief outline (no more than two pages in one column; not included in ICMI proceedings). We encourage both the submission of early research prototypes and interesting mature systems. In addition, authors of accepted regular research papers may be invited to participate in the demonstration sessions as well.

* Demonstration Submission
Please submit a 2-3 page description of the demonstration in a single column format through the main ICMI conference management system (new.precisionconference.com/sigchi). Demonstration description(s) must be in PDF format, according to the ACM conference format, of no more than 3 pages in a single column format including references. For instructions and links to the templates, please see the Guidelines for Authors (https://icmi.acm.org/2023/guidelines-for-authors/).
Demonstration proposals should include a description with photographs and/or screen captures of the demonstration. Demonstration submissions should be accompanied by a video of the proposed demo (no larger than 200MB), which can include a set of slides (no more than 10 slides) in PowerPoint format.
The demo and exhibit paper submissions are not anonymous. However, all ACM rules and guidelines related to paper submission should be followed (e.g. plagiarism, including self-plagiarism).
The demonstration submissions will be peer reviewed, according to the following criteria: suitability as a demo, scientific or engineering feasibility of the proposed demo system, application, or interactivity, alignment with the conference focus, potential to engage the audience, and overall quality and presentation of the written proposal. Authors are encouraged to address such criteria in their proposals, along with preparing the papers mindful of the quality and rigorous scientific expectations of an ACM publication.
The demo program will include the accepted proposals and may additionally include invited demos from among regular papers accepted for presentation at the conference. Please note that the accepted demos will be included in the ICMI main proceedings.

* Exhibit Submission
Exhibit proposals should be submitted following the same guidelines, formatting, and due dates as for demonstration proposals. Exhibit proposals must be shorter in length (up to two pages), and are more suitable for showcasing mature systems. Like demos, submissions for exhibits should be accompanied by a video (no larger than 200MB), which can include a set of slides (no more than 10 slides) in PowerPoint format. Exhibits will not have a paper published in the ICMI 2023 proceedings.

* Facilities
Once accepted, demonstrators and video presenters will be provided with a table, poster board, power outlet and wireless (shared) Internet. Demo and video presenters are expected to bring with them everything else needed for their demo and video presentations, such as hardware, laptops, sensors, PCs, etc. However, if you have special requests such as a larger space, special lighting conditions and so on, we will do our best to arrange them.
Important note for the authors: The official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks prior to the first day of the conference. The official publication date affects the deadline for any patent filings related to published work.

* Attendance
At least one author of all accepted Demonstrations and Exhibits submissions must register for and attend the conference, including the conference demonstrations and exhibits session(s).

* Important Dates
- Submission of demo and exhibit proposals        July 14, 2023
- Demo and exhibit notification of acceptance        July 28, 2023
- Submission of demo final papers                August 13, 2023

* Questions?
For further questions, contact the Demonstrations and Exhibits co-chairs: Kun Qian and Dirk Heylen (icmi2023-demo-chairs@acm.org).
 

3-3-26(2023-10-09) ACM ICMI 2023 CALL FOR LATE-BREAKING RESULTS, Paris, France

ACM ICMI 2023 CALL FOR LATE-BREAKING RESULTS
9-13 October 2023, Paris - France
https://icmi.acm.org/2023/doctoral-consortium/
========================================================

Building on the success of the LBR track at past editions of ICMI, the ACM International Conference on Multimodal Interaction (ICMI) 2023 continues to solicit submissions for the special venue titled Late-Breaking Results (LBR). The goal of this venue is to provide a way for researchers to share emerging results at the conference. Accepted submissions will be presented in a poster session at the conference, and the extended abstract will be published in the new Adjunct Proceedings (Companion Volume) of the main ICMI Proceedings. Like similar venues at other conferences, the LBR venue is intended to allow sharing of ideas, getting formative feedback on early-stage work, and furthering collaborations among colleagues.

Online Submission
For online paper submissions, please click on the following link:
https://new.precisionconference.com/user/login?next=https%3A//new.precisionconference.com/submissions/icmi23a

Highlights
* Submission deadline: July 21st, 2023, 23:59 PDT (GMT-7) (extended from July 16th, 2023)

* Notifications: August 13th, 2023

* Camera-ready deadline: September 3rd, 2023
* Conference Dates: October  9-13, 2023
* Submission format: Anonymized, short paper (four-page paper in a double column format, not including references), following the submission guidelines, available here: https://icmi.acm.org/2023/guidelines-for-authors/
* Selection process: Peer-Reviewed
* Presentation format: Participation in the conference poster session
* Proceedings: Included in Adjunct Proceedings and ACM Digital Library
* LBR Co-chairs: Jean-Marc Odobez and Chi-Chun Lee

What are Late-Breaking Results?
Late-Breaking Results (LBR) submissions represent work such as preliminary results, thought-provoking or timely topics, novel experiences or interactions that may not have been fully validated yet, cutting-edge or emerging work that is still in exploratory stages, smaller-scale studies, or, in general, work that has not yet reached the level of maturity expected of full-length main-track papers. However, LBR papers are still expected to bring a contribution to the ICMI community, commensurate with the preliminary, short, and quasi-informal nature of this track.

Why submit to the Late-Breaking Results track at ICMI?
Accepted LBR papers will be presented as posters during the conference. This provides an opportunity for researchers to receive feedback on early-stage work, explore potential collaborations, and otherwise engage in exciting thought-provoking discussions about their work in an informal setting that is significantly less constrained than a paper presentation. The LBR (posters) track also offers those new to the ICMI community a chance to share their preliminary research as they become familiar with this field.

Late-Breaking Results papers appear in the Adjunct Proceedings (Companion Volume) of the ICMI Proceedings. Copyright is retained by the authors, and the material from these papers can be used as the basis for future publications as long as there are 'significant' revisions from the original, as per the ACM and ACM SIGCHI policies.

Submission Guidelines
Extended Abstract: An anonymized short paper of up to four pages in the double-column ACM conference format, using LaTeX or Word (excluding references). Papers should follow the same guidelines as papers published in the proceedings of the ACM ICMI conference: https://icmi.acm.org/2023/guidelines-for-authors/. The paper should be submitted in PDF format through the ICMI submission system in the “Late-Breaking Results” track. Due to the tight publication timeline, authors are advised to submit a nearly finalized paper that is as close to camera-ready as possible, as there will be a very short timeframe for preparing the final camera-ready version and no deadline extensions can be granted.

Anonymization: Authors are instructed not to include author information in their submission. To help reviewers judge how the LBR relates to prior work, authors should not remove references to their own prior work. Instead, we recommend that authors obscure such references by referring to their own prior work in the third person in the submission. If desired, these references can be changed to first person after acceptance.

Review Process
LBRs are expected to present work that is still in progress, rather than complete work that has been under-described in order to fit the LBR format. The LBR track will undergo an external peer review process. Submissions will be evaluated on a number of factors, including (1) the relevance of the work to ICMI, (2) the quality of the submission, and (3) the degree to which it fits the LBR track (e.g., in-progress results). More specifically, the quality of the submission will be evaluated based on the potential contributions of the research to the field of multimodal interfaces and its impact on the field and beyond. Authors should clearly justify how the proposed ideas can bring measurable breakthroughs compared to the state of the art in the field.

Attendance
The same registration and attendance rules apply to authors of LBR papers as to authors of regular papers. Further information will be made available later on the main page of the conference website.

Questions?
For more information and updates on the ICMI 2023 Late-Breaking Results (LBR), visit the LBR page of the main conference website: https://icmi.acm.org/2023/late-breaking-results/.

For further questions, contact the LBR co-chairs (Jean-Marc Odobez and Chi-Chun Lee) at icmi2023-late-breaking-results-chairs@acm.org


3-3-27(2023-10-09) CfParticipation GENEA Challenge 2023 on speech-driven gesture generation, Paris, France

Call for participation: GENEA Challenge 2023 on speech-driven gesture generation
Starting date: May 1

Location: Official Grand Challenge of ICMI 2023, Paris, France

Website: https://genea-workshop.github.io/2023/challenge/
*********************************************************************

Overview
*********************

The state of the art in co-speech gesture generation is difficult to assess, since every research group tends to use their own data, embodiment, and evaluation methodology. To better understand and compare methods for gesture generation and evaluation, we are continuing the GENEA (Generation and Evaluation of Non-verbal Behaviour for Embodied Agents) Challenge, wherein different gesture-generation approaches are evaluated side by side in a large user study. This 2023 challenge is a Grand Challenge for ICMI 2023 and is a follow-up to the first and second editions of the GENEA Challenge, arranged in 2020 and 2022.

 

This year the challenge will focus on gesture synthesis in a dyadic setting, i.e., gestures that depend not only on speech, but also on the behaviour of an interlocutor in a conversation. We invite researchers in academia and industry working on any form of corpus-based non-verbal behaviour generation and gesticulation to submit entries to the challenge, whether their method is rule-based or driven by machine learning. Participants are provided with a large, common dataset of speech (audio + aligned text transcriptions) and 3D motion to develop their systems, and then use these systems to generate motion for given test inputs. The generated motion clips are rendered onto a common virtual agent and evaluated for aspects such as motion quality and appropriateness in a large-scale crowdsourced user study.

 

Data

*********************

The 2023 challenge is based on the Talking With Hands 16.2M dataset (https://github.com/facebookresearch/TalkingWithHands32M). The official challenge dataset also includes additional annotations, and is only available to registered participants.

 

Timeline

*********************

April 1  – Participant registration opens

May 1 – Challenge training dataset released to participants

June 7 – Test input released to participants

June 14 – Deadline for participants to submit generated motion

July 3 – Release of crowdsourced evaluation results to participants

July 14 – Paper submission deadline

August 4 – Author notification

August 11 – Camera-ready papers due

October 9 or 13 – Challenge presentations at ICMI

 

If you would like to receive a notification when challenge registration opens, please follow this link: https://forms.gle/MFEXv84xGL3NrY3d9/.

 

Challenge paper

*********************

Challenge participants are required to submit a paper that describes their system and findings, and will present their work at the Grand Challenge session at ICMI. All accepted papers will be part of the ACM ICMI 2023 main proceedings. Papers that are not accepted will have a chance to be considered for the GENEA Workshop 2023, whose papers are published in the ACM ICMI 2023 companion proceedings.


3-3-28(2023-10-09) ICMI'23 CALL FOR MULTIMODAL GRAND CHALLENGES, Paris, France
ICMI'23 CALL FOR MULTIMODAL GRAND CHALLENGES
============================================
9-13 October 2023, Paris - France
============================================
 
Teams are encouraged to submit proposals for one or more ICMI Multimodal Grand Challenges. The International Conference on Multimodal Interaction (ICMI) is the world's leading venue for multidisciplinary research on multimodal human-human and human-computer interaction, interfaces, and system development. Identifying the best algorithms and their failure modes is necessary for developing systems that can reliably interpret human-human communication or respond to human input. The availability of datasets and common goals has led to significant development in domains such as computer vision, speech recognition, computational (para-) linguistics, and physiological signal processing, for example. We invite the ICMI community to propose, define, and address the scientific Grand Challenges in our field during the next five years. The goal of the ICMI Multimodal Grand Challenges is to elicit fresh ideas from the ICMI community and to generate momentum for future collaborative efforts. Challenge tasks involving analysis, synthesis, and interaction are all feasible.
 
We invite organizers from various fields related to multimodal interaction to propose and run Grand Challenge events at ICMI 2023. We are looking for exciting and stimulating challenges including but not limited to the following categories:
 
* Dataset-driven challenge. 
This challenge will provide a dataset that is exemplary of the complexities of current and future multimodal problems, and one or more multimodal tasks whose performance can be objectively measured and compared in rigorous conditions. Participants in the Challenge will evaluate their methods against the challenge data in order to identify areas of strengths and weaknesses.
 
* System-driven challenge.
This challenge will provide an interactive problem system (e.g. dialogue-based or non-verbal-based) and the associated resources, allowing people to participate through the integration of specific modules or alternative full systems. Proposers should also establish systematic evaluation procedures.
 
Prospective organizers should submit a five-page maximum proposal containing the following information:
1.    Title
2.    Abstract appropriate for possible Web promotion of the Challenge
3.    Distinctive topics to be addressed and specific goals
4.    Detailed description of the Challenge and its relevance to multimodal interaction
5.    Length (full day or half day)
6.    Plan for soliciting participation and list of potential participants
7.    Description of how submissions to the challenge will be evaluated, and a list of proposed reviewers
8.    Proposed schedule for releasing datasets (if applicable) and/or systems (if applicable) and receiving submissions.
9.    Short biography of the organizers (preferably from multiple institutions)
10. Funding source (if any) that supports or could support the challenge organization
11. Draft call for papers: affiliations and email address of the organizers; summary of the Grand Challenge; list of potential Technical Program Committee members and their affiliations, important dates
 
Proposals will be evaluated based on originality, ambition, feasibility, and implementation plan. A Challenge with dataset(s) or system(s) for which pilot results demonstrate representativeness and suitability for the proposed task will be given preference for acceptance; an additional 1-page description must be attached in that case. Continuations of or variants on previous ICMI grand challenges are welcome, though we ask submissions of this kind to state the number of participants that attended in the previous year and describe what changes (if any) will be made from the previous year.
 
The ICMI conference organizers will offer support with basic logistics, which includes rooms and equipment to run the challenge workshop, coffee breaks synchronized with the main track, etc.
 
Important Dates and Contact Details
===================================
 
Proposals due: February 3, 2023
Proposal notification: February 10, 2023
Paper camera-ready: August 13, 2023
Grand challenge date: October 9 or 13, 2023
 
Proposals should be emailed to the ICMI 2023 Multimodal Grand Challenge Chairs, Sean Andrist and Fabien Ringeval:  icmi2023-grand-challenge-chairs@acm.org
 
Prospective organizers are also encouraged to contact the co-chairs if they have any questions.

3-3-29(2023-10-09) International Workshop on Multimodal Conversational Agents for Individuals with Neurodevelopmental Disorders, Paris, France

Website: https://sites.google.com/mit.edu/mcapnd2023

Submission deadline: July 23rd, 2023

Submission platform: https://openreview.net/group?id=ACM.org/ICMI/2023/MCAPND (accepted papers will be indexed by ACM Digital Library)


Dear colleagues,

The International Workshop on Multimodal Conversational Agents for Individuals with Neurodevelopmental Disorders is now accepting submissions. The workshop will be held in conjunction with the ACM International Conference on Multimodal Interaction (ICMI) on 9th-13th October 2023 in Paris. We aim to bring together researchers and practitioners in the field of multimodal conversational agents for individuals with NDD to foster a multidisciplinary discourse on the latest research, technologies, and applications in the field. More info can be found on the workshop website: https://sites.google.com/mit.edu/mcapnd2023

We hope this workshop will be a venue for researchers and practitioners to exchange ideas, share findings, and identify future research directions.
We look forward to your participation!

Best regards,
The organization committee
(Fabio Catania - MIT
Tanya Talkar - MIT and Harvard University
Franca Garzotto - Politecnico di Milano
Satrajit Ghosh - MIT and Harvard University
Thomas Quatieri - MIT and Harvard University
Benjamin Cowan - University College Dublin)


*****************************

Topics
Topics of interest include, but are not limited to:
-Design and development of multimodal conversational agents for individuals with NDD
-Evaluating the effectiveness of these systems in real-world settings
-Understanding the specific needs of individuals with NDD and tailoring the systems accordingly
-Use of virtual reality and artificial intelligence in the development of these systems
-Voice analysis and perception
-Social and ethical implications of using these systems

Submission
The deadline for submitting papers is July 23rd, 2023.
For online paper submissions, please click on the following link:
https://openreview.net/group?id=ACM.org/ICMI/2023/MCAPND
All papers submitted to the workshop should follow the same guidelines as papers published in the proceedings of the ACM ICMI conference:
https://icmi.acm.org/2023/guidelines-for-authors/
We accept two different paper formats: long paper (up to 14 pages) and short paper (up to 7 pages). Papers should be anonymous, and authors should not include any identifiable information in the paper, including acknowledgments and references.

Review
All submissions will be double-blindly peer-reviewed by at least two independent reviewers.

Publication
Workshop papers will be indexed by ACM Digital Library in an adjunct proceeding, and a short workshop summary by the organizers will be published in the main conference proceedings.


3-3-30(2023-10-09) ACM ICMI 2023 CALL FOR DOCTORAL CONSORTIUM CONTRIBUTIONS, Paris, France
========================================================
ACM ICMI 2023 CALL FOR DOCTORAL CONSORTIUM CONTRIBUTIONS
========================================================
9-13 October 2023, Paris - France
https://icmi.acm.org/2023/doctoral-consortium/
========================================================
 
The goal of the ICMI Doctoral Consortium (DC) is to provide PhD students with an opportunity to present their work to a group of mentors and peers from a diverse set of academic and industrial institutions, to receive feedback on their doctoral research plan and progress, and to build a cohort of young researchers interested in designing and developing multimodal interfaces and interaction. We invite students from all PhD granting institutions who are in the process of forming or carrying out a plan for their PhD research in the area of designing and developing multimodal interfaces.
 
Who should apply?
-----------------
While we encourage applications from students at any stage of doctoral training, the doctoral consortium will most benefit students who are in the process of forming or developing their doctoral research. These students will have passed their qualifiers or completed the majority of their coursework, will be planning or developing their dissertation research, and will not yet be close to completing it. Students from any PhD-granting institution whose research falls within designing and developing multimodal interfaces and interaction are encouraged to apply.
 
Why should you attend?
----------------------
The DC provides an opportunity to build a social network that includes the cohort of DC students, senior students, recent graduates, and senior mentors. Not only is this an opportunity to get feedback on research directions, it is also an opportunity to learn more about the process and to understand what comes next. We aim to connect you with a mentor who will give specific feedback on your research. We specifically aim to create an informal setting where students feel supported in their professional development.
 
Important Dates
---------------
Submission deadline    June 18, 2023
Notifications        July 24, 2023
Camera-ready        August 6, 2023
 
Submission Guidelines
---------------------
Graduate students pursuing a PhD degree in a field related to ICMI are eligible to apply for the Doctoral Consortium (DC) and should submit the following materials:
 
1. Extended Abstract: A description of the PhD research plan and progress. Extended abstracts can be a maximum of four pages, although references can extend to a fifth page if needed. They should follow the same outline, details, and format of the ICMI Short Papers (https://icmi.acm.org/2023/guidelines-for-authors/). However, unlike short papers, DC submissions will not be anonymous. Be sure to include:
* The key research questions and motivation of the student’s research
* Background and related work that informs the student’s research
* A statement of hypotheses or a description of the scope of the technical problem
* The research plan, outlining stages of system development or series of studies
* The research approach and methodology
* Research results to date (if any) and a description of remaining work
* A statement of research contributions to date (if any) and expected contributions of the PhD work
 
2. Advisor Letter: A one-page letter of nomination from the student’s PhD advisor, which should focus on the student’s PhD plan and how the Doctoral Consortium event might contribute to the student’s PhD training and research.
 
3. Curriculum Vitae: A two-page CV describing the student’s background and work.
 
All materials should be prepared in PDF format and submitted through the ICMI submission system.
 
Process
-------
* Submission format: Four-page extended abstract using the ACM format (https://icmi.acm.org/2023/guidelines-for-authors/)
* Submission system: Precision Conference System 
(https://new.precisionconference.com/submissions/icmi23a)
* Selection process: Peer-Reviewed
* Presentation format: Talk on consortium day and participation in the conference poster session
* Proceedings: Extended abstracts published in conference proceedings and ACM Digital Library

3-3-31(2023-10-29) ACM Multimedia 2023 Computational Paralinguistics Challenge (ComParE) - Emotion Share and Requests
Call for Participation:

ACM Multimedia 2023 Computational Paralinguistics Challenge (ComParE) - Emotion Share and Requests http://www.compare.openaudio.eu/2023-2/
 
The ACM Multimedia 2023 Computational Paralinguistics ChallengE (ComParE) is an open Grand Challenge dealing with states and traits of speakers as manifested in their speech signal’s properties and beyond. In this 14th edition, we introduce two new Sub-Challenges:

	• Emotion Share Sub-Challenge,
	• Requests Sub-Challenge

Sub-Challenges allow contributors to use their own features and their own machine learning algorithms. Participants have five trials on the test set per Sub-Challenge. Participation must be accompanied by a paper presenting the results, which undergoes ACM peer review.

Contributions using the provided or equivalent data are sought for (but not limited to):

	• Participation in a Sub-Challenge
	• Contributions around the Challenge topics

Results of the Challenge and Prizes will be presented at ACM Multimedia 2023 in Ottawa between 29 October and 3 November 2023.

Organizers

  General Chairs:
    - Björn Schuller (University of Augsburg, Germany / Imperial College London, UK / audEERING)
    - Anton Batliner (University of Augsburg, Germany) 
    - Shahin Amiriparian (University of Augsburg, Germany)
    - Alexander Barnhill (FAU Erlangen-Nuremberg, Germany)
    - Alan S. Cowen (Hume.AI, USA)
    - Claude Montacié (Sorbonne University, France)

  Data Chairs:
    - Alice Baird (Hume.AI, USA)
    - Nikola Lackovic (Malakoff Humanis, France)
[IMPORTANT] Please note, in case of participation: for the Requests Sub-Challenge, you have to sign the EULA in the same way as in previous years: it must be signed by a *permanent* member of staff, not, for instance, by a student!

For the Emotion Share Sub-Challenge, EULAs and data will be handled by Hume.AI. For more information visit

http://www.compare.openaudio.eu/2023-2/


3-3-32(2023-10-29) Cf participation : the 2nd Conversational Head Generation Challenge @ ACM Multimedia 2023

Call for Participation: the 2nd Conversational Head Generation Challenge @ ACM Multimedia 2023

We are pleased to invite multimedia researchers to participate in the 2nd 'Conversational Head Generation Challenge,' co-located with ACM Multimedia 2023.

About the Challenge:
Conversational head generation covers the generation of both the talking and the listening roles in an interactive face-to-face conversation. Generating vivid talking-head video and appropriate responsive listening behavior is essential for digital humans during face-to-face human-computer interaction. More details can be found at: https://vico.solutions/challenge/2023

This distinctive challenge is based on the newly extended ViCo dataset (https://vico.solutions/vico), composed of conversation videos between real humans. Our aim is to turn face-to-face interactive head video generation into a visual competition through this challenge. This year, two tracks will be hosted:
- Talking head video generation (audio-driven speaker video generation) conditioned on the identity and audio signals of the speaker.
- Responsive Listening Head Video Generation (video-driven listener video generation) conditioned on the identity of the listener and with real-time responses to the speaker's behaviors.

As a starting point for participants, we also provide an open-source baseline method (https://github.com/dc3ea9f/vico_challenge_baseline) that includes audio/video-driven head generation, rendering, and scripts for 13 evaluation metrics.

Important Dates:
- Dataset available for download (training set): March 27th.
- Challenge launch date: April 3rd.
- Paper submissions deadline: July 14th.
- Top submissions will have the opportunity to present their work at the workshop during ACM Multimedia 2023. We encourage all participating teams to submit a paper (up to 4 pages + up to 2 extra pages for references only) briefly describing their solution.

Find out more about the challenge:
- Challenge mainpage (including challenge registration, online evaluation results): https://vico.solutions/challenge/2023
- Challenge page at ACM MM 2023: https://www.acmmm2023.org/grand-challenges-2/

We believe this challenge would greatly benefit from your knowledge and expertise. Please don't hesitate to reach out if you have any questions or require further information.

Contact: Yalong Bai, Mohan Zhou, Wei Zhang
vico-challenge@outlook.com

The organizing team
March 2023


3-3-33(2023-10-29?) 6th International Workshop on Multimedia Content Analysis in Sports (MMSports'23) @ ACM Multimedia, Ottawa, Canada

Call for Papers

-------------------

6th International Workshop on Multimedia Content Analysis in Sports (MMSports'23) @ ACM Multimedia, Oct 29 – Nov 3, 2023, Ottawa, Canada

 

We'd like to invite you to submit your paper proposals for the 6th International Workshop on Multimedia Content Analysis in Sports, to be held in Ottawa, Canada together with ACM Multimedia 2023. The ambition of this workshop is to bring together researchers and practitioners from many different disciplines to share ideas and methods on current multimedia/multimodal content analysis research in sports. We welcome multimodal research contributions as well as best-practice contributions focusing on the following and similar topics (the list is not exhaustive):

- annotation and indexing in sports

- tracking people/athletes and objects in sports

- activity recognition, classification, and evaluation in sports

- 3D scene and motion reconstruction in sports

- event detection and indexing in sports

- performance assessment in sports

- injury analysis and prevention in sports

- data driven analysis in sports

- graphical augmentation and visualization in sports

- automated training assistance in sports

- camera pose and motion tracking in sports

- brave new ideas / extraordinary multimodal solutions in sports

- personal virtual (home) trainers/coaches in sports

- datasets in sports

- graphical effects in sports

- alternative sensing in sports (beyond the visible spectrum)

- multimodal perception in sports

- exploiting physical knowledge in learning systems for sports

- sports knowledge discovery

- narrative generation and narrative analysis in sports

- mobile sports application

- multimedia in sports beyond video, including 3D data and sensor data

 

Submissions can be of varying length, from 4 to 8 pages, plus additional pages for references. There is no distinction between long and short papers, but the authors may themselves decide on the appropriate length of their paper. All papers will undergo the same review process and review period.

 

Please refer to the workshop website for further information: 

http://mmsports.multimedia-computing.de/mmsports2023/index.html

 

IMPORTANT DATES

Submission Due:                           14 July 2023 

Acceptance Notification:             30 July 2023

Camera Ready Submission:         12 August 2023 

Workshop Date:                            TBA; either Oct 29, 30 or Nov 2, 2023

 

 

Challenges

--------------

This year again, MMSports runs a competition where participants can compete on state-of-the-art problems applied to real-world, sport-specific data. The competition consists of individual challenges, each of which is sponsored by SportRadar with a US$1,000.00 prize. Each challenge comes with a toolkit describing the task, the dataset and the metrics on which participants will be evaluated. This year, the second edition features 3 challenges: 2 on basketball and 1 on cricket! More information on the challenges can be found at http://mmsports.multimedia-computing.de/mmsports2023/challenge.html.

 

ACM MMSports’23 Chairs: Thomas Moeslund, Rainer Lienhart and Hideo Saito

 


3-3-34(2023-11-18) CfP The 2nd International Conference on Tone and Intonation (TAI 2023), Singapore
========================================
(2023-11-18) TAI 2023, Singapore, Call for Papers
========================================

The 2nd International Conference on Tone and Intonation (TAI 2023), Singapore, November 18-21, 2023

Theme:  East Meets West: Languages and Approaches

Website:  http://www.tai2023.org


We are delighted to announce the upcoming 2nd International Conference on Tone and Intonation (TAI 2023), to be held in the vibrant city of Singapore during 18-21 November 2023. Jointly sponsored by the International Speech Communication Association (ISCA) and the International Phonetic Association (IPA), this event is organized by the Chinese and Oriental Languages Information Processing Society (COLIPS) and the Pattern Recognition and Machine Intelligence Association (PREMIA), with valuable support from the National University of Singapore (NUS) and Nanyang Technological University (NTU).

The theme of this year’s conference is “East Meets West: Languages and Approaches”. Building on this theme, we aim to foster a dialogue between Eastern and Western perspectives in the study of tone and intonation. We cordially invite you to participate and contribute to this unique academic discourse. We especially encourage submissions that explore and compare Eastern and Western languages and methodologies, thus contributing to a more comprehensive and global understanding of tone and intonation.

Submissions related to phonetic and phonological analyses of tone and intonation are eagerly anticipated at TAI 2023. We welcome contributions on various topics, including, but not limited to, the production and perception of tone and intonation, the semantics and pragmatics of tone and intonation, the acquisition and teaching of tone and intonation in L1 and L2, and cross-linguistic comparisons of tone and intonation. In line with our theme, we particularly encourage submissions that explore the intersection of Eastern and Western approaches to these topics. In the spirit of interdisciplinarity, we also invite researchers from adjacent fields to submit papers on tone and intonation, further broadening the scope and enriching the discussions at the conference.

Prospective authors are invited to submit a 2-page abstract (1 page of text and 1 page of tables/figures/references) through our paper submission system. After the conference, authors can submit an optional 5-page full paper for inclusion in the ISCA Proceedings.

Conference Timeline:

· 01 May 2023         Online abstract submission open
· 30 Jun 2023           Abstract submission deadline
· 15 Aug 2023          Notification of abstract acceptance
· 20 Sep 2023          Early bird registration deadline
· 18-21 Nov 2023    Conference in Singapore
· 31 Jan 2024           Submission of the revised abstract and an optional full paper
 
 

The deadline for abstract submissions has been extended until 10 July 2023. Authors have the opportunity to update their submissions until 17 July 2023. Please find the details at www.tai2023.org.

 

We are very honored to have invited the following keynote speakers:

   - Jennifer Cole, Northwestern University, USA

   - James Kirby, Ludwig-Maximilians-Universität München, Germany

   - Ying-Ying Tan, Nanyang Technological University, Singapore

 
 
 

3-3-35(2023-11-29) 10e Rencontres des Jeunes Chercheurs en Parole, Grenoble, France

RJCP: First Call for Posters

https://rjcp-2023.sciencesconf.org/

10e Rencontres des Jeunes Chercheurs en Parole (RJCP-2023)

29 November - 1 December 2023, Grenoble, France

======================================================================

 

The RJCP are meetings sponsored by the Association Francophone de la Communication Parlée (https://www.afcp-parole.org/) that give young researchers the opportunity to meet, present their work, broaden their knowledge and exchange views on the various fields of speech research. We will be happy to welcome anyone interested in discussions around speech research, whether a young researcher or not (within the limit of available places). The programme will include poster sessions, talks, training sessions and visits to experimental platforms dedicated to speech. A round table devoted to career paths and professional prospects after the PhD will also be offered. More information will be posted on our website over the summer!

IMPORTANT DATES

  • 15/06/2023 - Call for posters opens

  • 08/09/2023 - Abstract submission deadline

  • 22/09/2023 - Notification of acceptance to authors

  • 25/09/2023 - First registration phase opens

  • 30/10/2023 - Registration closes (this may happen earlier if we quickly reach the maximum number of people we can host)

  • 29/11/2023 - 01/12/2023 - RJCP

TOPICS

We invite contributions on the following topics (non-exhaustive list):

* Speech acoustics
* Speech and language acquisition
* Speech analysis, coding and compression
* Applications with a spoken component (dialogue, indexing, etc.)
* Second language learning
* Multimodal communication
* Dialectology
* Evaluation, corpora and resources
* Endangered languages
* Language models
* Audio-visual speech
* Speech pathologies
* Phonetics and phonology
* Clinical phonetics
* Speech production / perception
* Prosody
* Psycholinguistics
* Speech recognition and understanding
* Language recognition
* Speaker recognition
* Social signals, sociophonetics
* Speech synthesis…

CALL FOR CONTRIBUTIONS

Master's students, PhD students, post-docs*, industry researchers* and young researchers seeking employment* are invited to submit an abstract of at most 300 words presenting their upcoming, ongoing or completed work for the poster session: https://rjcp-2023.sciencesconf.org/submission/submit

The number of reference pages is not restricted. Templates will shortly be available on our website, in the 'Appel à Communications' (call for contributions) section.

The RJCP organizing committee may cover the printing of posters for people not affiliated with a research laboratory. This decision will be made after discussion. To request this, please contact us at jcparole@gmail.com.


*Up to 3 years after the PhD


We hope to see many of you there,

The organizing committee


3-3-36(2023-11-29) SPECOM 2023, Hubli-Dharwad, India



Announcing the SPECOM 2023 Call for Papers! 

 

The Call for Papers for SPECOM 2023 is now open! The 25th International Conference on Speech and Computer (SPECOM) will be held from 29 November to 1 December 2023 in Hubli-Dharwad, India.

This flagship conference will offer a comprehensive technical program presenting all the latest developments in research and technology for speech processing and its applications. Featuring world-class oral and poster sessions, plenaries, perspective exhibitions, demonstrations, tutorials, and satellite workshops, it is expected to attract leading researchers and global industry figures, providing a great networking opportunity. Moreover, exceptional papers and contributors will be selected and recognized by SPECOM.

Website Link: https://iitdh.ac.in/specom-2023/

Call for papers PDF is available here.

Special attractions for commemorating Silver Jubilee of SPECOM

  • Students Special Session

  • Special Session on Speech Processing for Under-Resource Languages

  • Special Session on Industrial Speech and Language Technology

  • Satellite Workshop on “Speaker and Language Identification, Verification and Diarization” @ Goa

Technical Scope:


We invite submissions of original unpublished technical papers on topics including but not limited to:


  • Affective computing

  • Audio-visual speech processing

  • Corpus linguistics

  • Computational paralinguistics

  • Deep learning for audio processing

  • Forensic speech investigations

  • Human-machine interaction

  • Language identification

  • Multichannel signal processing

  • Multimedia processing

  • Multimodal analysis and synthesis

  • Sign language processing

  • Speaker recognition

  • Speech and language resources

  • Speech analytics and audio mining

  • Speech and voice disorders

  • Speech-based applications

  • Speech driving systems in robotics

  • Speech enhancement

  • Speech perception

  • Speech recognition and understanding

  • Speech synthesis

  • Speech translation systems

  • Spoken dialogue systems

  • Spoken language processing

  • Text mining and sentiment analysis

  • Virtual and augmented reality

  • Voice assistants



Organizers:


  • General chairs:

    • Prof. Yegnanarayana Bayya (IIIT Hyderabad)

    • Prof. Shyam S Agrawal (KIIT Gurugram)

  • Technical Program Committee Chairs:

    • Prof. Rajesh M. Hegde (IIT Dharwad)

    • Prof. Alexey Karpov (SPC RAS St. Petersburg)

    • Prof. K. Samudravijaya (KL University)

    • Dr. Deepak K. T. (IIIT Dharwad)

  • Organizing Committee:

    • Prof. S R M Prasanna (IIT Dharwad)

    • Prof. Suryakanth V Gangashetty (KL University)

Important dates:

  • Paper Submission Starts: 15 May 2023

  • Paper Submission Deadline: 31 July 2023

  • Paper Acceptance Notification: 8 September 2023 

  • Camera Ready Paper Deadline: 24 September 2023

  • Early Bird Registration Deadline: 24 September 2023 

  • Author Registration Deadline: 20 March 2023 

  • Conference date: 29 November - 1 December 2023

  • Satellite workshop: 2 December 2023


3-3-37(2023-12-11) Journée commune AFIA-THL / AFCP --- 'Extraction de connaissances interprétables pour l'étude de la communication parlée', LIA, Avignon, France
Joint AFIA-TLH / AFCP workshop day --- 'Extraction de connaissances interprétables pour l'étude de la communication parlée' (extraction of interpretable knowledge for the study of spoken communication) -
Monday, 11 December 2023, at the Laboratoire d'Informatique d'Avignon.
 
Call for oral presentations -
 
The Association Française pour l'Intelligence Artificielle (AFIA), through its Technologies du Langage Humain (TLH) college, is organizing, together with the Association Francophone de la Communication Parlée (AFCP), a first joint workshop day on the theme 'Extraction de connaissances interprétables pour l'étude de la communication parlée' on Monday, 11 December 2023, in Avignon.
 
The goal of this day is to bring together researchers whose object of study is spoken communication, whether from the perspective of the Humanities and Social Sciences (SHS) or from that of Natural Language Processing and Artificial Intelligence. The day will address the question of extracting interpretable knowledge from the speech signal through automatic approaches, in particular those based on deep learning, for the study of spoken communication in the broad sense. These studies may concern topics such as speech analysis in phonetics or linguistics; speaker characterization for speaker recognition, speaker segmentation and clustering, and voice comparison (forensics); analysis of pathological voice and speech; analysis of paralinguistic information (other than speaker identity), such as expressive speech, emotions, regional accents, etc.; and the study of cognitive behaviour around speech acquisition. On the Natural Language Processing and Artificial Intelligence side, topics around self-supervised speech representation models, model explainability, evaluation of the interpretability and relevance of explanations, and interactive loops with the user may also be addressed.
 
This day will thus be an opportunity to present existing automatic approaches for extracting interpretable knowledge that address the needs of SHS researchers, and also for the latter to express new needs.
It is aimed at young researchers as well as more senior researchers in the field. It is open to presentations of work at different stages of progress, and even to presentations of research projects about to be launched.
 
In addition to a talk by an invited speaker and a moderated discussion at the end of the session, the day will feature oral presentations of variable length (10 to 20 minutes) depending on the submissions received.
 
Submissions
Proposals for oral presentations are expected in the form of an abstract of about one page, in text format, including a title, a list of authors, a list of keywords and a summary of the content of the proposed presentation.
 
They should be sent in PDF format by email to Marie Tahon (marie.tahon@univ-lemans.fr) and Corinne Fredouille (corinne.fredouille@univ-lemans.fr).
 
Important dates
 
  • SAVE THE DATE and call for submissions: 25/05/2023
  • Submission deadline: 15/09/2023
  • Notification to authors: 29/09/2023
  • Workshop day: 11/12/2023
 
 
Co-organization and scientific committee
The day is co-organized by Marie Tahon and Corinne Fredouille for the TLH college of AFIA and by Maëva Garnier and Olivier Perrotin for AFCP, and is supported by a scientific committee made up of members of both organizations.
 
 
Program and registration
The program and registration form will be available soon.
Registration for the day will be free but mandatory.

3-3-38(2023-12-16) The 2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU 2023), Taipeh, Taiwan

The 2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU 2023) will be held on December 16-20, 2023, in Taipei, Taiwan. The workshop is held every two years and has a tradition of bringing together researchers from academia and industry in an intimate and collegial setting to discuss problems of common interest in automatic speech recognition and understanding. The conference will be an in-person event (with a virtual component for those who cannot attend physically). The event will be held in the Beitou area, the hot-springs district of Taipei. We encourage everyone to join us for this wonderful event; we look forward to seeing you all in Taiwan. The paper submission deadline is July 3rd, 2023.
http://www.asru2023.org/


3-3-39(2024-04-14) Call for Satellite Workshops, ICASSP 2024, SEOUL, South Korea
Submit a proposal for ICASSP 2024 by 7 July 2023.
 

ICASSP 2024: Call for Satellite Workshops

On behalf of the Organizing Committee, it is our pleasure to invite you to the 2024 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2024), Seoul, Korea, 14-19 April 2024, with the theme 'Signal Processing: The Foundation for True Intelligence'.

 

The organizing committee of ICASSP 2024 is currently inviting proposals for Satellite Workshops.

 

Call for Satellite Workshops

Satellite Workshops aim to enrich the conference program, attract a wider audience, and enhance inclusivity for students and professionals. The ICASSP Satellite Workshops will be half- or full-day events and will take place the day before or after the main conference technical program, at the conference venue. Their main emphasis will lie on clearly focused and emerging topics that are not specifically covered in the main conference and/or that enable thematic synergies between the IEEE Signal Processing Society and other related societies.

 

Important Dates:

7 July 2023: Workshop Proposal Deadline

28 July 2023: Workshop Proposal Acceptance Notification

Late November 2023: Workshop Paper Submission Deadline

Late January 2024: Workshop Paper Acceptance Notification

Early February 2024: Workshop Camera Ready Paper Deadline

 

Coming soon! More information to be announced for ICASSP 2024 including Call for Papers, Grand Challenges, and more.


3-3-40(2024-05-13) 13th International Seminar on Speech Production, Autrans, France
13th International Seminar on Speech Production, 13-17 May 2024, in Autrans, France
 
It is time for the next International Seminar on Speech Production.
 
After the launch in 1988 in Grenoble, followed by editions in Leeds (1990), Old Saybrook (1993), Autrans (1996), Kloster Seeon (2000), Sydney (2003), Ubatuba (2006), Strasbourg (2008), Montreal (2011), Cologne (2014), Tianjin (2017) and online in 2020, the 13th ISSP will come back (close) to Grenoble.
 
After a very successful virtual ISSP in 2020 (Haskins Labs), we are ready again for an in-person meeting in a very beautiful location in the mountains of Autrans (of course we will provide an option to attend virtually).
Mark your calendars for 13-17 May 2024 for the 13th International Seminar on Speech Production, co-organized by several laboratories in France.
 
More information including the website and important dates will be provided soon.
 
We are looking forward to meeting you in Autrans in 2024!
 
The organizing committee, Cécile Fougeron & Pascal Perrier together with Jalal Al-Tamimi, Pierre Baraduc, Véronique Boulanger, Mélanie Canault, Maëva Garnier, Anne Hermes, Fabrice Hirsch, Leonardo Lancia, Yves Laprie, Yohann Meynadier, Slim Ouni, Rudolph Sock, Béatrice Vaxelaire
 
Follow us on twitter @issp2024!
 



!CONFERENCE ANNOUNCEMENT!
The next International Seminar on Speech Production (ISSP2024) will be held in less than a year, from May 13 to May 17, 2024, in Autrans, France, supported by several laboratories in France working on speech production research. Topics of interest for this conference cover different aspects of speech production, including articulation, acoustics, neural substrates, motor control, disorders, and their links to perception, communication, development and language.

 

!KEYNOTE SPEAKERS!

We are delighted to announce that the conference will be organized around 6 keynotes illustrating the diversity of research topics in - and out of - the field of Speech Production: María Florencia Assaneo (UNAM, Mexico), Adrien Meguerditchian (Aix-Marseille U., France), Doris Mücke (U. of Cologne, Germany), Caroline Niziolek (U. Wisconsin-Madison, USA), Sophie Scott (UCL, UK), Jason Shaw (Yale U., USA).

 

!IMPORTANT DATES!

December 15, 2023: 2-page abstract submission deadline

February 1, 2024: Notification of acceptance

April 15, 2024: Optional full 4-page paper submission deadline

May 13-17, 2024: ISSP2024 in Autrans, France

 

For updated information on our invited speakers, on the scientific and organizing committees, and for more practical information, please visit our conference website regularly: https://issp24.sciencesconf.org/ and follow us on Twitter @issp2024!

 

!SURVEY!

HYBRID OR NOT?

In the spirit of most editions of the conference, we have chosen to have a unity of place for the scientific exchanges and accommodation, and this will be in a conference center in the mountains near Grenoble.

Although we would prefer all participants to be on-site during the conference, we are considering the possibility of organizing it in hybrid mode. Could you therefore answer the survey as soon as possible, indicating whether you plan to attend on-site or remotely? This will give us a better idea of how to proceed: https://framaforms.org/issp24-survey-1685914178

 


3-3-41(2024-05-20)CfP LREC-COLING 2024 - The 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation, Torino, Italy

LREC-COLING 2024
The 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation
Lingotto Conference Centre - Torino (Italy)

20-25 May, 2024

https://lrec-coling-2024.lrec-conf.org

 

Twitter: @LrecColing2024

First Call for papers 

Two international key players in the area of computational linguistics, the ELRA Language Resources Association (ELRA) and the International Committee on Computational Linguistics (ICCL), are joining forces to organize the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) to be held in Torino, Italy on 20-25 May, 2024.

IMPORTANT DATES

(All deadlines are 11:59PM UTC-12:00 (“anywhere on Earth”)

  • 22 September 2023: Paper anonymity period starts
  • 13 October 2023: Final submissions due (long, short and position papers)
  • 13 October 2023: Workshop/Tutorial proposal submissions due
  • 22–29 January 2024: Author rebuttal period
  • 5 February 2024: Final reviewing
  • 19 February 2024: Notification of acceptance
  • 25 March 2024: Camera-ready due
  • 20-25 May 2024: LREC-COLING2024 conference

 SUBMISSION TOPICS

LREC-COLING 2024 invites the submission of long and short papers featuring substantial, original, and unpublished research in all aspects of natural language and computation, language resources (LRs) and evaluation, including spoken and sign language and multimodal interaction. Submissions are invited in five broad categories: (i) theories, algorithms, and models, (ii) NLP applications, (iii) language resources, (iv) NLP evaluation and (v) topics of general interest. Submissions that span multiple categories are particularly welcome.

(i) Theories, algorithms, and models

  • Discourse and Pragmatics
  • Explainability and Interpretability of Large Language Models
  • Language Modeling
  • CL/NLP and Linguistic Theories
  • CL/NLP for Cognitive Modeling and Psycholinguistics
  • Machine Learning for CL/NLP
  • Morphology and Word Segmentation
  • Semantics
  • Tagging, Chunking, Syntax and Parsing
  • Textual Inference

(ii) NLP applications

  • Applications (including BioNLP and eHealth, NLP for legal purposes, NLP for Social Media and Journalism, etc.)
  • Dialogue and Interactive Systems
  • Document Classification, Topic Modeling, Information Retrieval and Cross-Lingual Retrieval
  • Information Extraction, Text Mining, and Knowledge Graph Derivation from Texts
  • Machine Translation for Spoken/Written/Sign Languages, and Translation Aids
  • Sentiment Analysis, Opinion and Argument Mining
  • Speech Recognition/Synthesis and Spoken Language Understanding
  • Natural Language Generation, Summarization and Simplification
  • Question Answering
  • Offensive Speech Detection and Analysis
  • Vision, Robotics, Multimodal and Grounded Language Acquisition

(iii) Language resource design, creation, and use: text, speech, sign, gesture, image, in single or multimodal/multimedia data

  • Guidelines, standards, best practices and models for LRs, interoperability
  • Methodologies and tools for LRs construction, annotation, and acquisition
  • Ontologies, terminology and knowledge representation
  • LRs and Semantic Web (including Linked Data, Knowledge Graphs, etc.)
  • LRs and Crowdsourcing
  • Metadata for LRs and semantic/content mark-up
  • LRs in systems and applications such as information extraction, information retrieval, audio-visual and multimedia search, speech dictation, meeting transcription, Computer-Aided Language Learning, training and education, mobile communication, machine translation, speech translation, summarisation, semantic search, text mining, inferencing, reasoning, sentiment analysis/opinion mining, (speech-based) dialogue systems, natural language and multimodal/multisensory interactions, chatbots, voice-activated services, etc.
  • Use of (multilingual) LRs in various fields of application like e-government, e-participation, e-culture, e-health, mobile applications, digital humanities, social sciences, etc.
  • LRs in the age of deep neural networks
  • Open, linked and shared data and tools, open and collaborative architectures
  • Bias in language resources
  • User needs, LT for accessibility

(iv) NLP evaluation methodologies

  • NLP evaluation methodologies, protocols and measures
  • Benchmarking of systems and products
  • Evaluation metrics in Machine Learning
  • Usability evaluation of HLT-based user interfaces and dialogue systems
  • User satisfaction evaluation

(v) Topics of general interest

  • Multilingual issues, language coverage and diversity, less-resourced languages
  • Replicability and reproducibility issues
  • Organisational, economical, ethical and legal issues
  • Priorities, perspectives, strategies in national and international policies
  • International and national activities, projects and initiatives

 

LREC-COLING 2024 invites high-quality submissions written in English. Submissions of three forms of papers will be considered:

A. Regular long papers - up to eight (8) pages maximum*, presenting substantial, original, completed, and unpublished work.

B. Short papers - up to four (4) pages*, describing a small focused contribution, negative results, system demonstrations, etc.

C. Position papers - up to eight (8) pages*, discussing key hot topics, challenges and open issues, as well as cross-fertilization between computational linguistics and other disciplines.

* Excluding any number of additional pages for references, ethical consideration, conflict-of-interest, as well as data and code availability statements.

Appendices or supplementary material will be allowed ONLY in the final, camera-ready version, but not during submission, as papers should be reviewed without the need to refer to any supplementary materials.

Linguistic examples, if any, should be presented in the original language but also glossed into English to allow accessibility for a broader audience. 

Note that the paper type is decided independently of the eventual, final form of presentation (i.e., oral versus poster).

AUTHOR RESPONSIBILITIES

Papers must present original, previously unpublished work. Papers must be anonymized to support double-blind reviewing; submissions thus must not include authors’ names and affiliations. Submissions should also avoid links to non-anonymized repositories: code should either be submitted as supplementary material in the final version of the paper or provided as a link to an anonymized repository (e.g., Anonymous GitHub or Anonym Share). Papers that do not conform to these requirements will be rejected without review.

If the paper is available as a preprint, this must be indicated on the submission form but not in the paper itself. In addition, LREC-COLING 2024 will follow the same policy as ACL conferences, establishing an anonymity period during which non-anonymous posting of preprints is not allowed.

More specifically, direct submissions to LREC-COLING 2024 may not be made available online (e.g. via a preprint server) in a non-anonymized form after September 22, 11:59PM UTC-12:00 (for arXiv, note that this refers to submission time).

Also included in that policy are instructions to reviewers to not rate papers down for not citing recent preprints. Authors are asked to cite published versions of papers instead of preprint versions when possible.

Papers that have been or will be under consideration for other venues at the same time must be declared at submission time. If a paper is accepted for publication at LREC-COLING 2024, it must be immediately withdrawn from other venues. If a paper under review at LREC-COLING 2024 is accepted elsewhere and authors intend to proceed there, the LREC-COLING 2024 committee must be notified immediately.

ETHICS STATEMENT

We encourage all authors submitting to LREC-COLING 2024 to include an explicit ethics statement on the broader impact of their work, or other ethical considerations after the conclusion but before the references. The ethics statement will not count toward the page limit (8 pages for long, 4 pages for short papers).

PRESENTATION REQUIREMENT

All papers accepted to the main conference track must be presented at the conference to appear in the proceedings, and at least one author must register for LREC-COLING 2024.

All papers accepted to the main conference will be required to submit a presentation video. The conference will be hybrid, with an emphasis on encouraging interaction between the online and in-person modalities, and thus presentations can be either on-site or virtual.

 

 

Back  Top

3-3-42(2024-07-22) 13th International Conference on Voice Physiology and Biomechanics, Erlangen, Germany

13th International Conference

on Voice Physiology and Biomechanics

Erlangen, Germany, 22nd-26th of July 2024

 

 

We cordially invite you to participate in the 13th International Conference on Voice Physiology and Biomechanics, July 22nd – 26th, 2024!

After successfully hosting the conference in 2012, we are pleased to welcome you back to Erlangen, Germany! There will be two days of workshops prior to the three conference days, and several social events in the beautiful Nuremberg Metropolitan Region.

The workshops (July 22nd-23rd) and the conference (July 24th-26th) will focus on voice physiology and biomechanics, including computational, numerical and experimental modeling, machine learning, tissue engineering, laryngeal pathologies and many more. Abstract submission and registration will be open from November 1st, 2023.

We are looking forward to your contributions and to seeing you in Erlangen, July 2024!

Back  Top

3-3-43(2024-09-09) Cf Labs Proposals @CLEF 2024, Grenoble, France

Call for Labs Proposals @CLEF 2024

Now in its 25th edition, the Conference and Labs of the Evaluation Forum (CLEF) is a continuation of the very successful series of evaluation campaigns of the Cross Language Evaluation Forum (CLEF), which ran between 2000 and 2009 and established a framework for the systematic evaluation of information access systems, primarily through experimentation on shared tasks. As a leading annual international conference, CLEF uniquely combines evaluation laboratories and workshops with research presentations, panels, posters and demo sessions. In 2024, CLEF takes place on 9-12 September at the University of Grenoble Alpes, France.

Researchers and practitioners from all areas of information access and related communities are invited to submit proposals for running evaluation labs as part of CLEF 2024. Proposals will be reviewed by a lab selection committee, composed of researchers with extensive experience in evaluating information retrieval and extraction systems. Organisers of selected proposals will be invited to include their lab in the CLEF 2024 labs programme, possibly subject to suggested modifications to their proposal to better suit the CLEF lab workflow or timeline.

Background

The CLEF Initiative (http://www.clef-initiative.eu/) is a self-organised body whose main mission is to promote research, innovation, and development of information access systems with an emphasis on multilingual information in different modalities - including text and multimedia - with various levels of structure. CLEF promotes research and development by providing an infrastructure for:

  1. independent evaluation of information access systems;

  2. investigation of the use of unstructured, semi-structured, highly-structured, and semantically enriched data in information access; 

  3. creation of reusable test collections for benchmarking; 

  4. exploration of new evaluation methodologies and innovative ways of using experimental data; 

  5. discussion of results, comparison of approaches, exchange of ideas, and transfer of knowledge.

Scope of CLEF Labs

We invite submission of proposals for two types of labs:

  1. “Campaign-style” Evaluation Labs for specific information access problems (run during the twelve-month period preceding the conference), similar in nature to the traditional CLEF campaign “tracks”. Topics covered by campaign-style labs can be inspired by any information access-related domain or task.

  2. Labs that follow a more classical “workshop” pattern, exploring evaluation methodology, metrics, processes, etc. in information access and closely related fields, such as natural language processing, machine translation, and human-computer interaction.

We highly recommend that organisers new to the CLEF format of shared-task evaluation campaigns first consider organising a lab workshop to discuss the format of their proposed task, the problem space, and the practicalities of the shared task. The CLEF 2024 programme will reserve about half of the conference schedule for lab sessions. During the conference, the lab organisers will present their overall results in overview presentations during the plenary scientific paper sessions, to give non-participants insights into where the research frontiers are moving. Lab organisers are also expected to organise separate sessions for their lab, with ample time for general discussion and engagement with all participants - not just those presenting campaign results and papers. Organisers should plan time in their sessions for activities such as panels, demos, poster sessions, etc., as appropriate. CLEF is always interested in receiving and facilitating innovative lab proposals.

Potential task proposers unsure of the suitability of their task proposal or its format for inclusion at CLEF are encouraged to contact the CLEF 2024 Lab Organizing Committee Chairs to discuss its suitability or design at an early stage.

Proposal Submission

Lab proposals must provide sufficient information to judge the relevance, timeliness, scientific quality, benefits for the research community, and the competence of the proposers to coordinate the lab. Each lab proposal should identify one or more organisers as responsible for ensuring the timely execution of the lab. Proposals should be 3 to 4 pages long and should provide the following information:

  1. Title of the proposed lab.
     

  2. A brief description of the lab topic and goals, its relevance to CLEF and the significance for the field.
     

  3. A brief and clear statement on usage scenarios and domain to which the activity is intended to contribute, including the evaluation setup and metrics.
     

  4. Details on the lab organiser(s), including identifying the task chair(s) responsible for ensuring the running of the task. This should include details of any previous involvement in organising or participating in evaluation tasks at CLEF or similar campaigns.
     

  5. The planned format of the lab, i.e., campaign-style (“track”) or workshop.
     

  6. Is the lab a continuation of an activity from previous year(s) or a new activity?  

  a. For activities continued from previous year(s): Statistics from previous years (number of participants/runs for each task), a clear statement on why another edition is needed, an explicit listing of the changes proposed, and a discussion of lessons to be learned or insights to be made.

  b. For new activities: A statement on why a new evaluation campaign is needed and how the community would benefit from the activity.
     

  7. Details of the expected target audience, i.e., who you expect to participate in the task(s) and how you propose to reach them.
     

  8. Brief details of the tasks to be carried out in the lab. The proposal should clearly motivate the need for each of the proposed tasks and provide evidence of its capability to attract sufficient participation. The dataset to be adopted by the lab needs to be described and motivated with respect to the goals of the lab; indications of how the dataset will be shared are also useful. It is fine for a lab to have a single task, but labs often contain multiple closely related tasks; a strong motivation is needed for more than three tasks, to avoid unnecessary fragmentation.
     

  9. Expected length of the lab session at the conference: half-day, one day, or two days. This should include high-level details of the planned structure of the session, e.g., participant presentations, invited speaker(s), panels, etc., to justify the requested session length.
     

  10. Arrangements for the organisation of the lab campaign: who will be responsible for activities within the task; how data will be acquired or created; what tools or methods will be used, e.g., how necessary queries will be created or relevance assessment carried out; and any other information relevant to the conduct of your lab.
     

  11. If the lab proposes to set up a steering committee to oversee and advise its activities, include names, addresses, and homepage links of the people you propose to involve.

Lab proposals must be submitted at the following address:

https://easychair.org/conferences/?conf=clef2024

choosing the “CLEF 2024 Lab Proposals” track.

Reviewing Process

Each submitted proposal will be reviewed by the CLEF 2024 Lab Organizing Committee. The acceptance decision will be sent by email to the responsible organiser by 28 July 2023. The final length of the lab session at the conference will be determined based on the overall organisation of the conference and the number of participant submissions received by a lab.

 

Advertising Labs at CLEF 2023 and ECIR 2024

Organisers of accepted labs are expected to advertise their labs at both CLEF 2023 (18-21 September 2023, Thessaloniki, Greece) and ECIR 2024 (24-28 March 2024, Glasgow, Scotland). So, at least one lab representative should attend these events.

Advertising at CLEF 2023 will consist of displaying a poster describing the new lab, running a break-out session to discuss the lab with prospective participants, and advertising/announcing it during the closing session.

Advertising at ECIR 2024 will consist of submitting a lab description to be included in ECIR 2024 proceedings (11 October 2023) and advertising the lab in a booster session during ECIR 2024.

Mentorship Program for Lab Proposals from newcomers

CLEF 2019 introduced a mentorship program to support the preparation of lab proposals for newcomers to CLEF. The program will be continued at CLEF 2024 and we encourage newcomers to refer to Friedberg et al. (2015) for initial guidance on preparing their proposal:

Friedberg I, Wass MN, Mooney SD, Radivojac P. Ten simple rules for a community computational challenge. PLoS Comput Biol. 2015 Apr 23;11(4):e1004150.

The CLEF newcomers mentoring program offers help, guidance, and feedback on the writing of your draft lab proposal by assigning you a mentor, who will help you prepare and mature the lab proposal for submission. If your lab proposal falls within the scope of an already existing CLEF lab, the mentor will help you get in touch with those lab organisers and join forces.

Lab proposals for mentorship must be submitted at the following address:

https://easychair.org/conferences/?conf=clef2024

choosing the “CLEF 2024 Lab Mentorship” track.

Important Dates

  • 29 May 2023: Requests for mentorship submission (only newcomers)

  • 29 May 2023 - 16 June 2023: Mentorship period

  • 7 July 2023: Lab proposals submission (newcomers and veterans)

  • 28 July 2023: Notification of lab acceptance

  • 18-21 Sep 2023: Advertising Accepted Labs at CLEF 2023, Thessaloniki, Greece

  • 11 October 2023: Submission of short lab description for ECIR 2024

  • 13 November 2023: Lab registration opens

  • 24-28 March 2024: Advertising labs at ECIR 2024, Glasgow, UK

CLEF 2024 Lab Chairs

  • Petra Galuscakova, University of Stavanger, Norway

  • Alba García Seco de Herrera, University of Essex, UK

CLEF 2024 Lab Mentorship Chairs

  • Liana Ermakova, Université de Bretagne Occidentale, France

  • Florina Piroi, TU Wien, Austria

Back  Top


