ISCA - International Speech
Communication Association



ISCApad #298

Friday, April 07, 2023 by Chris Wellekens

3-3 Other Events
3-3-1(2023-04-24) Special track SATASK: Task-based evaluation of speech/audio interfaces @ ACHI23 in Venice, Italy

Call for Contributions

Special track
SATASK: Task-based evaluation of speech/audio interfaces

Chair
Prof. Gerald Penn, University of Toronto, Canada
gpenn@cs.toronto.edu

along with

ACHI 2023, The Sixteenth International Conference on Advances in
Computer-Human Interactions
https://www.iaria.org/conferences2023/ACHI23.html

April 24 - 28, 2023
Venice, Italy

Long gone is the hegemony of the Word Error Rate (WER).  After years
of painstaking research by the Human-Computer Interaction (HCI)
community documenting the enormous variance between WER and the actual
performance of speech recognition systems within the context of
applications other than transcription, speech and acoustic signal
processing engineers are ready to listen.
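For readers less familiar with the metric under discussion, here is a minimal sketch of WER (my own illustration, not part of the call): the word-level Levenshtein (edit) distance between a reference and a hypothesis transcript, normalized by the reference length.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)  # sub / del / ins
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("play my voice mail", "play my email"))  # 2 edits / 4 words = 0.5
```

The example hints at the track's motivation: a WER of 0.5 says nothing about whether the user's voice-mail task actually succeeded.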

There have been a few proposals for benchmarks already, such as SLUE
and ASR-GLUE, both coined from the wildly popular General Language
Understanding Evaluation (GLUE) for text processing, but to date the
speech analogues of GLUE have been much humbler in both their breadth
and uptake.  At the same time, the same causes for concern exist as
for GLUE: is sentiment analysis actually a task?  What constitutes a
task in related HCI work would be something people want to perform and
could express (dis)satisfaction with, e.g., voice-mail interaction or
meeting summarization.  But this kind of vertical definition could
easily result in a proliferation of datasets and disagreements over
approaches.

The purpose of this track is to encourage proposals and proofs of
concept by both engineering and HCI researchers to converge on an
appropriate benchmark.

Topics include, but are not limited to:

-       Individual task proposals
-       General criteria/desiderata for individual task proposals
-       Frameworks and codebases to support evaluation benchmarks
-       Case studies with existing benchmarks or ad hoc groups of tasks
-       Proposals of new intrinsic measures
-       Systemic considerations governing balance across tasks/measures
-       Automated proxies for human-subject evaluation

Contribution Types
-       Regular papers [in the proceedings, digital library]
-       Short papers (work in progress) [in the proceedings, digital library]
-       Posters: two pages [in the proceedings, digital library]
-       Posters: slide only [slide-deck posted on www.iaria.org]
-       Presentations: slide only [slide-deck posted on www.iaria.org]
-       Demos: two pages [posted on www.iaria.org]

Important Datelines
Inform the Chair: As soon as you decide to contribute
Submission: March 9 (earlier, better)
Notification: March 27
Registration: April 6
Camera-ready: April 6
Note: The submission deadline is somewhat flexible

Paper Format
- 6 pages on US-letter, plus up to 4 extra pages at additional cost
For further details, see: http://www.iaria.org/format.html

- Before submission, please check and comply with the editorial rules:
http://www.iaria.org/editorialrules.html

Publications
- Extended versions of selected papers will be published in IARIA
Journals: http://www.iariajournals.org
- Print proceedings will be available via Curran Associates, Inc.:
http://www.proceedings.com/9769.html
- Articles will be archived in the free access ThinkMind Digital Library:
http://www.thinkmind.org

Paper Submission
https://www.iariasubmit.org/conferences/submit/newcontribution.php?event=ACHI+2023+Special
Please select Track Preference as SATASK

Registration
- Each accepted paper needs at least one full registration, before the
camera-ready manuscript can be included in the proceedings.
- Registration fees are available at http://www.iaria.org/registration.html

Contact
Chair: Gerald Penn, gpenn@cs.toronto.edu
Logistics: steve@iaria.org




Note: Onsite and Online Options

In order to accommodate a large number of situations, we are offering
the option for either physical presence or virtual participation. We
would be delighted if all authors manage to attend in person but are
aware that special circumstances are best handled by having flexible
options.



3-3-2(2023-05-02) Fourth workshop on Resources for African Indigenous Languages (RAIL)
First call for papers

Fourth workshop on Resources for African Indigenous Languages (RAIL)
https://bit.ly/rail2023


The 4th RAIL (Resources for African Indigenous Languages) workshop will be co-located with EACL 2023 in Dubrovnik, Croatia. The RAIL workshop is an interdisciplinary
platform for researchers working on resources (data collections, tools, etc.) specifically targeted towards African indigenous languages. In particular, it aims to create the conditions for the emergence of a
scientific community of practice that focuses on data, as well as computational linguistic tools specifically designed for or applied to indigenous languages found in Africa.

Previous workshops showed that the problems (and solutions) presented are not only applicable to African languages. Many issues, such as handling different scripts and
properties like tone, are also relevant to other low-resource languages, which share similar challenges. This allows researchers working on languages with such properties (including non-African languages) to learn from each
other, especially on issues pertaining to language resource development.

The RAIL workshop has several aims. First, it brings together researchers working on African indigenous languages, forming a community of practice for people working on indigenous languages.
Second, the workshop aims to reveal currently unknown or unpublished existing resources (corpora, NLP tools, and applications), resulting in a better overview of the current state-of-the-art, and also allows for
discussions on novel, desired resources for future research in this area. Third, it enhances sharing of knowledge on the development of low-resource languages. Finally, it enables discussions on how to
improve the quality as well as availability of the resources.

The workshop has “Impact of impairments on language resources” as its theme, but submissions on any topic related to properties of African indigenous languages (including non-African languages) may be accepted.
Suggested topics include (but are not limited to) the following:
  • Digital representations of linguistic structures
  • Descriptions of corpora or other data sets of African indigenous languages
  • Building resources for (under-resourced) African indigenous languages
  • Developing and using African indigenous languages in the digital age
  • Effectiveness of digital technologies for the development of African indigenous languages
  • Revealing unknown or unpublished existing resources for African indigenous languages
  • Developing desired resources for African indigenous languages
  • Improving quality, availability and accessibility of African indigenous language resources

Submission requirements:
We invite papers on original, unpublished work related to the topics of the workshop. Submissions presenting completed work may consist of up to eight (8) pages of content plus additional pages of references. The
final camera-ready version of accepted long papers is allowed one additional page of content (so up to 9 pages) so that reviewers' feedback can be incorporated.
Submissions need to use the EACL stylesheets. These can be found at https://2023.eacl.org/calls/styles.
Submission is electronic in PDF through the START system (link will be provided once available).
Reviewing is double-blind, so make sure to anonymize your submission (e.g., do not provide author names, affiliations, project names, etc.).
Limit the number of self-citations (anonymized citations should not be used). Accepted papers will be published in the ACL workshop proceedings.
 
Important dates:
  • Submission deadline 13 February 2023
  • Date of notification 13 March 2023
  • Camera ready deadline 27 March 2023
  • RAIL workshop 2 or 6 May 2023

Organising Committee
  • Rooweither Mabuya, South African Centre for Digital Language Resources (SADiLaR), South Africa
  • Don Mthobela, Cam Foundation
  • Mmasibidi Setaka, South African Centre for Digital Language Resources (SADiLaR), South Africa
  • Menno van Zaanen, South African Centre for Digital Language Resources (SADiLaR), South Africa

3-3-3(2023-05-26) Celebration Professor Didier Demolin, LPP


Dear colleague,


On behalf of the Laboratoire de Phonétique et Phonologie (LPP) we are honored to
invite you to attend the event celebrating the scientific career of Professor Didier
Demolin entitled 'Phonétique expérimentale : un voyage interdisciplinaire à travers le temps et
l’espace.'
The event is a full-day program curated by the LPP, with an audience of
scholars who have partnered with Didier during his career. The event is scheduled for
May 26, 2023, from 9:45 am to 7:00 pm in Paris, France.


The program includes lectures from six distinguished scholars:
Jacqueline Vaissière (Université Sorbonne Nouvelle, France)
Jean-Marie Hombert (CNRS/Université Lumière Lyon 2, France)
John Kingston (University of Massachusetts, USA)
Jody Kreiman (University of California, Los Angeles, USA)
Hans Van de Velde (Fryske Akademy, The Netherlands)
Sergio Hassid (Hôpital Universitaire de Bruxelles, Belgium)


The full scientific and social program as well as information on the venue are here:
https://lpp.in2p3.fr/colloque-phonetique-experimentale-voyage-interdisciplinaire/
Please let us know by April 15, 2023, whether you would be interested in taking part in
the event in person or via videoconferencing. In the case of a positive response please
add your name, affiliation and choice of attendance here:
https://forms.gle/5jxz8GpabEFpdJCp7


Thank you very much! Please let us know if you have any questions.
We very much look forward to seeing you at the event.

Best wishes,


The organizing committee
Rosario Signorello, Lise Crevier-Buchman, Claire Pillot-Loiseau, Nicolas Audibert,
Leonardo Lancia, Alexis Dehais-Underdown, Clara Ponchard, Cecile Fougeron, and
Cedric Gendrot


3-3-4(2023-05-26) HISPhonCog 2023: Hanyang International Symposium on Phonetics and Cognitive Sciences of Language 2023, Hanyang University, Seoul, South Korea

Dear colleagues and prospective participants of HISPhonCog 2023

(my apologies for cross-listings).

We are very pleased to inform you that we will resume our annual HISPhonCog conference in 2023 after a long pause due to COVID-19.
We sincerely hope that we will be able to meet many of you in person in Seoul in May 2023.
Best wishes,
Taehong Cho
Chair

 

HISPhonCog 2023: Hanyang International Symposium on Phonetics and Cognitive Sciences of Language 2023

Hanyang University, Seoul, South Korea, May 26-27, 2023

https://site.hanyang.ac.kr/web/hisphoncog/about-hisphoncog/2023

HIPCS (Hanyang Institute for Phonetics and Cognitive Sciences of Language) at Hanyang University, together with the Department of English Language and Literature, holds its 3rd annual international symposium on current issues in phonetics and cognitive sciences of language (HISPhonCog) 2023 on 26-27 May 2023.

Theme for HISPhonCog 2023

Linguistic and cognitive functions of fine phonetic detail underlying sound systems and/or sound change

We have witnessed over the past decades that the severance between phonetics and phonology has been steadily eroding along with the awareness of the importance of scalar and gradient aspects of speech in understanding the linguistic sound system and sound change. In particular, non-contrastive phonetic events (either at the subphonemic level or at the suprasegmental level of micro-prosody), which had traditionally been understood to be beyond the speaker’s control (as low-level automatic physiological phenomena), have been reinterpreted as part of the grammar. They have turned out to be either systematically linked with phonological contrasts in the segmental or intonational phonology and higher-order linguistic structures (e.g., prosodic structure, morphosyntactic structure, information structure) or governed by language-specific phonetic rules that make the seemingly cross-linguistically similar phonetic processes distinctive, both of which may in turn serve as driving forces for sound change. Furthermore, we have enjoyed seeing that the investigation of linguistic roles of fine phonetic detail provides insights into phonetic underpinnings of other speech variation phenomena such as sociolinguistically-driven speech variation and effects of native-language experience on production and perception of unfamiliar languages or L2. Most remarkably, such phonetic underpinnings are not purely segmental in nature, but they are suprasegmental or systematically related to prosodic structure and the intonational grammar of the language.

We invite submissions which provide some empirical (experimental) evidence for exploring any issues related to the theme of the symposium. We also wish to have a special session on Articulatory Phonology and speech dynamics bearing on the issue of how gradient and categorical aspects of human speech may be combined to serve as a cognitive linguistic unit. We will also consider submissions that deal with other general issues in speech production and perception in L1 and L2. We particularly welcome submissions from the neuro-cognitive perspectives or from the phonetics-prosody interplay.  

Invited speakers

  • Adam Albright (MIT)
  • Lisa Davidson (New York University)
  • John Kingston (University of Massachusetts, Amherst)
  • Marianne Pouplier (University of Munich)
  • Donca Steriade (MIT)
  • Andrew Wedel (University of Arizona)
  • Douglas Whalen (CUNY and Haskins Laboratories)
  • Alan Yu (University of Chicago)

A possible Special Issue to be published in a journal

  • Oral presentations (including invited talks) and a limited number of selected posters (related to the themes of the conference under the rubric of linguistic/cognitive functions of fine phonetic detail) will be invited to submit a full manuscript to be considered further for a possible inclusion in a Special Issue in a peer-reviewed international journal.
  • The actual journal has not been selected, but we are considering one of the following journals, subject to final approval from the targeted journal: Journal of Phonetics, Phonetica, Journal of the International Phonetic Association, Laboratory Phonology, Language and Speech, Linguistic Vanguard, The Linguistic Review, etc.
  • Once we know the journal, we will announce it on the HISPhonCog 2023 webpage.
  • (tentative) Guest editors: T. Cho, S. Kim & H. Mitterer
  • If you wish to have your paper considered for the special issue, regardless of whether your paper is selected for oral presentation or not, please indicate your intention when you submit an abstract through Easy Chair. 
  • We will also consider papers on the theme, even if they are not to be presented at the conference. In such a case, please send a two-page abstract (including figures and references within the two page limit) to Taehong Cho at tcho@hanyang.ac.kr by February 10. 
  • Deadline of submission of invited/selected papers (for a special issue): July 30, 2023. (This deadline will be strictly enforced.)
  • Note that each selected paper will undergo standard editorial/review processes which may eventually lead to its exclusion (rejection). 

Support for international participants (possible free accommodation)

  • As before, we will do our best to provide free local hotel accommodation (one room for up to 3 nights per presentation) for international presenters affiliated with a foreign institute/university, travelling from abroad.
  • Please note, however, that the local hotel we previously had a contract with was closed due to COVID-19, so we may not be able to arrange free accommodation. If we do not find a solution, we will instead provide a small partial accommodation subsidy, with priority given to student presenters. The details will be sent to qualified individuals along with an acceptance letter, depending on the final budget approval.

Free registration fees

  • We are very pleased to inform you that we will be able to make registration free as before.
  • Free registration will include a free banquet, free breakfast snacks, free refreshments, and a free conference handbook.
  • Attendees will have to pay (optionally) for lunches (10 USD or equivalent in KRW for each lunch) at the time of arrival. (Details will be provided along with registration.)
  • Pre-registration should be made no later than April 10, 2023 to guarantee consideration for possible (partial) accommodation support (for international presenters) and free registration (for all foreign and domestic participants and audience).
  • A pre-registration form that arrives a few days after April 10 may still be considered for free registration, depending on the budget and availability. Please contact us at hanyang.hipcs@gmail.com if you miss the deadline by a few days but would still like to register in advance.
  • On-site registration will be possible for a small fee, but with no guarantee of lunches or banquet admission.
  • For further information about how to register, please check the website later.

Abstract submission instruction

Call for Satellite Workshop 

  • We set aside May 25 (Thursday), 2023 (the day before the main conference) for one or two possible satellite workshops. 
  • We will provide rooms and light refreshments free of charge with support of our onsite personnel. 
  • If you are interested, please contact Taehong Cho directly at tcho@hanyang.ac.kr. Proposals are welcome until the two slots are filled.

Timeline

  • Deadline of submission of a two-page long abstract: February 10, 2023
  • Notification of Acceptance: No later than March 10, 2023
  • Free Registration with free accommodation: No later than April 10, 2023
  • Satellite Workshop (if organized): May 25, 2023
  • Symposium dates: May 26-27, 2023 
  • (Submission of invited papers to a special issue: July 30, 2023)

Local Organizing Institute and Committee

Organizing Bodies of HISPhonCog:

  • HIPCS (the Hanyang Institute for Phonetics and Cognitive Sciences of Language)
  • CRC for Articulatory DB and Cognitive Sciences
  • Department of English Language and Literature, Hanyang University

Organizing Committee:

  • Taehong Cho (Chair, HIPCS, Hanyang University)
  • Sahyang Kim (Hongik University & HIPCS)
  • Say Young Kim (HIPCS, Hanyang University, Seoul)
  • Suyeon Im (HIPCS, Hanyang University, Seoul)

  Contact


3-3-5(2023-06-04) Cf Show-and-Tell Demos @ICASSP 2023, Rhodes Island, Greece

Call for Show-and-Tell Demos! 

Submit a Proposal by 7 March 2023.

We are soliciting proposals for Show-and-Tell Demos that will be held in person during the 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). ICASSP 2023 will take place from 4-10 June 2023 in Rhodes Island, Greece. 

 

The Show-and-Tell Demos event includes demonstrations of innovations done by research and engineering groups in industry, academia, and government entities. The demonstrations can be related to any of the technical areas as defined in the Call for Papers of ICASSP 2023. The theme of the conference this year is 'Signal Processing in the AI Era'.

     

Show-and-Tell Demo Proposals

 

The proposal must clearly explain how the proposed demonstration is novel and innovative, and why it should appeal to the ICASSP audience. Show-and-Tell demonstrations should have an interactive component, which goes beyond demonstrating simple simulated graphs or presenting slides or a video on a computer. Proposals demonstrating open-source research software/hardware are also within the scope of the session. 

 

The deadline for submission of a Show-and-Tell Demo proposal is 7 March 2023, and the accepted proposals will be announced by 31 March 2023. Proposals should be submitted online via this submission link.

     

ICASSP 2023 is a flagship event of the IEEE Signal Processing Society, a global network of signal processing and data science professionals. A membership to IEEE SPS connects you with more than 18,000 researchers, academics, industry practitioners, and students advancing and disseminating the latest breakthroughs and technology. By joining, you’ll receive significant savings on registration to future events, including ICASSP 2023, as well as access to highly-ranked journals, continuing education materials, and a robust technical community. Learn more about how you can save and grow with us! 


3-3-6(2023-06-04) CfP ICASSP 2023, Rhodes Island, Greece

Announcing the ICASSP 2023 Call for Papers! 

The Call for Papers for ICASSP 2023 is now open! The 48th IEEE International Conference on Acoustics, Speech, & Signal Processing (ICASSP) will be held from 4-9 June 2023 in Rhodes Island, Greece. 
 
The flagship conference of the IEEE Signal Processing Society will offer a comprehensive technical program presenting all the latest developments in research and technology for signal processing and its applications. Featuring world-class oral and poster sessions, keynotes, plenaries and perspective talks, exhibitions, demonstrations, tutorials, short courses, and satellite workshops, it is expected to attract leading researchers and global industry figures, providing a great networking opportunity. Moreover, exceptional papers and contributors will be selected and recognized by ICASSP.

Technical Scope

 

We invite submissions of original unpublished technical papers on topics including but not limited to:

  • Applied Signal Processing Systems
  • Audio & Acoustic Signal Processing
  • Biomedical Imaging & Signal Processing
  • Compressive Sensing, Sparse Modeling
  • Computational Imaging
  • Computer Vision 
  • Deep Learning/Machine Learning for Signal Processing 
  • Image, Video & Multidimensional Signal Processing 
  • Industrial Signal Processing 
  • Information Forensics & Security 
  • Internet of Things
  • Multimedia Signal Processing
  • Quantum Signal Processing
  • Remote Sensing & Signal Processing
  • Sensor Array & Multichannel Signal Processing
  • Signal Processing for Big Data
  • Signal Processing for Communication
  • Signal Processing for Cyber Security
  • Signal Processing for Education
  • Signal Processing for Robotics
  • Signal Processing Over Graphs
  • Signal Processing Theory & Methods 
  • Speech & Language Processing

SP Society Journal Paper Presentations

Authors of papers published or accepted in IEEE SPS journals may present their work at ICASSP 2023 at appropriate tracks. These papers will neither be reviewed nor included in the proceedings. In addition, the IEEE Open Journal of Signal Processing (OJSP) will provide a special track for longer submissions with the same processing timeline as ICASSP. Accepted papers will be published in OJSP and presented in the conference but will not be included in the conference proceedings.

 

Open Preview

Conference proceedings will be available in IEEE Xplore, free of charge, to all customers, 30 days prior to the conference start date, through the conference end date.

 

Important Dates

  • Paper Submission Deadline: 19 October 2022
  • Paper Acceptance Notification: 8 February 2023 
  • SPS Journal Papers/Letters Deadline: 8 February 2023
  • Camera Ready Paper Deadline: 6 March 2023 
  • Author Registration Deadline: 20 March 2023 
  • In-person re-opening deadline: 10 April 2023
  • Open Preview Starts: 5 May 2023

3-3-7(2023-06-04) CfP Student competition at ICASSP 2023, Rhodes, Greece
 
Call for Proposals
Student Competitions at IEEE ICASSP 2023
The IEEE Signal Processing Society is calling for those interested in organizing one of the student competitions that will be held at ICASSP 2023! The SP Cup gives SPS students the opportunity to solve real-life problems using signal processing or video and image processing methods. Rounds of open competition are held before three final teams are selected to present their work and compete for the US$5,000 grand prize at ICASSP 2023! See the full SP Cup Call for Proposals.
 
The 5-MICC is the Society’s new video contest in which teams of students create five-minute videos that highlight and generate excitement about signal processing concepts. The final three teams’ videos will be selected and featured on the ICASSP website, where the ICASSP and signal processing community can vote for their favorites! Those teams will be invited to attend ICASSP for the final phase of the competition and US$5,000 grand prize. See the full 5-MICC Call for Proposals.
 
If you are interested in submitting a proposal for the SP Cup or the 5-Minute Video Clip Contest being held at ICASSP 2023, please submit your proposal for endorsement to the SPS Technical Committee (TC) that best fits your proposal by 6 January 2023 for the SP Cup and 3 January 2023 for the 5-MICC. Your proposal must be endorsed by one of the TCs. You can find the Society’s TCs located on the Technical Committees page on the SPS website. The endorsed proposals will then be submitted by the TC Chairs to SP-SC-STUDENTSERVICES@LISTSERV.IEEE.ORG by 13 January 2023 for the SP Cup and 7 January 2023 for the 5-MICC.

If you have questions, you can reach out directly to Angshul Majumdar, SPS Student Services Director, and Jaqueline Rash, SPS Membership Program and Events Administrator, or the SSC alias.

3-3-8(2023-06-04) CfSatellite Workshops ICASSP 2023, Rhodes Island, Greece

 

 Call for Satellite Workshops at ICASSP 2023

The organizing committee of ICASSP 2023 invites proposals for Satellite Workshops, aiming to inaugurate this tradition with the goals of enriching the conference program, attracting a wider audience, and enhancing inclusivity for students and professionals.
 
The ICASSP Satellite Workshops will be half or full-day events and will take place the day before or after the main conference technical program at the conference venue. The workshops may include a mix of regular papers, invited presentations, keynotes, and panels, encouraging the participation of attendees in active discussions.
 
Submit your proposals by 9 November 2022. 

Workshop Logistics

 

Organizers of ICASSP 2023 Satellite Workshops will be responsible for the workshop scientific planning and promotion, including the setup of their external website (this will be linked from the main ICASSP 2023 site but not hosted there), running of the paper reviewing process, undertaking all communication with the submitted paper authors, creating and announcing the event schedule, abiding by the Important Dates listed below, and seamlessly communicating with the Workshop Chairs.

 

Please note that specifically for workshops that will appear at IEEE Xplore, the paper submission and reviewing process will be conducted through the ICASSP 2023 paper management system (Microsoft CMT).

 

The ICASSP 2023 organizers will handle workshop registration, allocation of facilities, and distribution of the workshop papers in electronic format. Workshop attendance will be free-of-charge for the main conference registrants, while a reduced registration fee will be charged to workshop-only attendees.

Important Dates

  • Workshop Proposal Submission Deadline: 9 November 2022

  • Workshop Proposal Acceptance Notification: 23 November 2022

  • Workshop Website Online: 7 December 2022

  • Workshop Paper Submission Deadline: 15 February 2023

  • Workshop Paper Acceptance Notification: 14 April 2023

  • Workshop Camera Ready Paper Deadline: 28 April 2023


3-3-9(2023-06-05) Atelier sur l'analyse et la recherche de textes scientifiques (ARTS), Paris
 
==================
Call for Submissions
==================
 
Submission deadline: 31 March 2023 (one week from now)
 
Workshop on the Analysis and Retrieval of Scientific Texts (Atelier sur l'Analyse et la Recherche de Textes Scientifiques, ARTS)
 
Held as part of the joint CORIA-TALN 2023 conference in Paris, France.
 
 
The ARTS workshop is intended as a venue where researchers in Information Retrieval (IR) and Natural Language Processing (NLP) who work on scientific texts
can meet and exchange ideas. We invite contributions on topics including, but not limited to:
 
- Retrieval and recommendation of scientific articles
- Information extraction from scientific texts, tables, figures, and bibliographies
- Analysis of scientific documents
- Named-entity recognition in scientific texts
- Automatic summarization of scientific texts
- Citation analysis and recommendation
- Plagiarism detection
- Detection and verification of scientific claims
- Argumentative analysis of scientific texts
- Visualization of scientific knowledge
- Translation of scientific texts
- Datasets composed of scientific texts
- Bibliometrics, scientometrics
 
=================
Important Dates
=================
 
Submission deadline: 31 March 2023
Notification to authors: 21 April 2023
Final versions: 5 May 2023
Workshop: 5 June 2023
 
=======================
Paper Submission
=======================
 
Papers should be written in French by French speakers, or in English by those
who are not fluent in French. They must follow the CORIA-TALN 2023 'mini'
format (4 pages + references).
 
 
Paper submission website: https://arts2023.sciencesconf.org/
 
=======
Contact
=======
 
 
=====================
Organizing Committee
=====================
 
Florian Boudin, Béatrice Daille, Richard Dufour, Léane Jourdan, Maël Houbre,
Oumaima El Khettari (TALN-LS2N, Nantes Université)

3-3-10(2023-06-05) Workshops @ CORIA-TALN, Paris

As part of the joint CORIA-TALN 2023 conferences organized in Paris, we invite workshop proposals. Workshops should address a specific topic in natural language processing or information retrieval, gathering a few talks that are more focused than those of the plenary sessions.

Each workshop has its own chair and its own program committee. The workshop chair is responsible for publicizing the workshop, issuing its call for submissions, and coordinating its program committee.

The CORIA-TALN 2023 organizers will handle logistics (e.g., room allocation, coffee breaks, and distribution of the papers).

Workshops will take place in parallel over a full day or a half day (2 to 4 sessions of 1h30) on Monday, 5 June 2023.

Important dates
- Workshop proposal submission deadline: 6 February 2023
- Program committee response: 13 February 2023

How to propose a workshop
Workshop proposals (1 to 2 A4 pages in PDF format) should include:
- the name and acronym of the workshop
- a brief description of the workshop topic
- the organizing committee
- the provisional or expected scientific committee
- the website address
- the desired duration of the workshop (full day or half day) and the expected audience

Workshop proposals should be sent electronically to adrian.chifu@univ-amu.fr and cyril.grouin@limsi.fr with the email subject: [Atelier CORIA-TALN 2023].

Selection process
Workshop proposals will be reviewed by members of the CORIA and TALN program committees, by ARIA, and by the CPERM of ATALA. The following criteria will be considered for acceptance:
- relevance to the topics of either conference
- originality of the proposal

Format
Talks will be given in French (or in English for non-French speakers). Submitted papers must follow the CORIA-TALN 2023 format (number of pages at the discretion of the workshop's program committee). Final versions must be submitted according to the main conference schedule.

Contact: adrian.chifu@univ-amu.fr and cyril.grouin@limsi.fr


3-3-11(2023-06-05) Défi Fouille de Texte DEFT 2023, @TALN 2023, Paris, France
*Défi Fouille de Texte DEFT 2023*
https://deft2023.univ-avignon.fr/
 
Created in 2005 along the lines of the TREC and MUC campaigns, the DÉfi Fouille de Textes (DEFT, Text Mining Challenge) is a French-language evaluation campaign that each year invites several research teams to compare their methods on a regularly renewed topic.
 
This new edition of the challenge will focus on approaches for automatically answering multiple-choice questions taken from past pharmacy exams. The corpus used, FrenchMedMCQA, consists of closed questions in French drawn from pharmacy exam archives. Each question includes: an identifier, the question, five options, and the set of correct answer(s).
 
The tasks proposed for this challenge are:
 
Main task: automatically identify the set of correct answers among the five options proposed for a given question.
- Input: a set of closed questions (several formats available: HuggingFace, JSON, TSV)
- Output: the list of correct answers
- Evaluation: Exact Match Ratio (proportion of questions whose answer set is exactly right) and Hamming Score (proportion of correct answers among the union of predicted and reference answers)
 
Secondary task: identify the number of answers (between 1 and 5) presumed correct for a given question.
- Input: a set of closed questions (several formats available: HuggingFace, JSON, TSV)
- Output: the number of answers, between 1 and 5
- Evaluation: Precision and F1 score
 
In this edition of DEFT, the goal is to work on an original question-answering task in which the number of answers per question is unknown, and whose difficulty will let participating teams explore and propose approaches that may depart from those currently used for more conventional tasks.
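To make the main-task metrics concrete, here is a minimal sketch in Python of one common reading of the two scores, assuming gold and predicted answers are represented as sets of option labels per question; the function names and the Jaccard-style reading of the Hamming Score are illustrative assumptions, not the official DEFT scorer:

```python
def exact_match_ratio(golds, preds):
    """Fraction of questions whose predicted set equals the gold set exactly."""
    return sum(g == p for g, p in zip(golds, preds)) / len(golds)

def hamming_score(golds, preds):
    """Mean per-question overlap |gold & pred| / |gold | pred|
    (one common multi-label reading of the Hamming Score)."""
    return sum(len(g & p) / len(g | p) for g, p in zip(golds, preds)) / len(golds)

# Three toy questions; answer sets are drawn from the options a-e.
golds = [{"a"}, {"b", "c"}, {"d"}]
preds = [{"a"}, {"b"}, {"e"}]
print(exact_match_ratio(golds, preds))  # 1 of 3 questions is an exact match
print(hamming_score(golds, preds))      # (1 + 1/2 + 0) / 3 = 0.5
```

The official scorer may differ in details (e.g. handling of empty predictions); the sketch only illustrates the definitions given above.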
 
*Schedule*
-   Registration: open now until the start of the test phase; send an email listing all team members to deft-2023@listes.univ-avignon.fr. A data-use agreement must be provided; it is available online: https://uncloud.univ-nantes.fr/index.php/s/e7cNAsmECWCmjH9
-   Training corpus release: 27 February 2023
-   Test phase: 24 to 30 April 2023 for both tasks
-   System description papers: 8 May 2023 (first version), 12 May 2023 (final version)
-   Workshop: 5 June 2023 during the TALN 2023 conference in Paris
 
Access to the data will only be granted once the data-use agreement has been signed by all team members. By accessing the data, participants make a moral commitment to see the campaign through (submitting results and presenting them at the workshop).
 
*Contact*: deft-2023@listes.univ-avignon.fr
 
*Website*: https://deft2023.univ-avignon.fr/
 
*Scientific committee*
- Nathalie Camelin (LIUM, Le Mans Université)
- Liana Ermakova (HCTI, Université de Bretagne Occidentale)
- Benoit Favre (LIS, Aix-Marseille Université)
- Corinne Fredouille (LIA, Avignon Université)
- Pierre-Antoine Gourraud (CHU de Nantes)
- Natalia Grabar (STL, CNRS, Université de Lille)
- Cyril Grouin (LISN, CNRS, Université Paris-Saclay)
- Pierre Jourlin (LIA, Avignon Université)
- Fleur Mougin (ISPED, Université de Bordeaux)
- Aurélie Névéol (LISN, CNRS, Université Paris Saclay)
- Didier Schwab (LIG, Grenoble Alpes Université)
- Pierre Zweigenbaum (LISN, CNRS, Université Paris-Saclay)
 
*Organizing committee*
- Adrien Bazoge (LS2N, Nantes Université)
- Béatrice Daille (LS2N, Nantes Université)
- Richard Dufour (LS2N, Nantes Université)
- Yanis Labrak (LIA, Avignon Université et Zenidoc)
- Emmanuel Morin (LS2N, Nantes Université)
- Mickael Rouvier (LIA, Avignon Université)

3-3-12(2023-06-12) 13th International Conference on Multimedia Retrieval, Thessaloniki, Greece

ICMR2023 – ACM International Conference on Multimedia Retrieval

https://icmr2023.org/


3-3-13(2023-06-12) ACM ICMR 2023 Doctoral Symposium, Call for papers, Thessaloniki, Greece

==== ACM ICMR 2023 Doctoral Symposium, Call for Papers ====

The ACM ICMR 2023 (https://icmr2023.org) Doctoral Symposium aims to bring together Ph.D. students working on topics aligned with those of this year's conference:

  • Multimedia content-based search and retrieval,
  • Multimedia-content-based (or hybrid) recommender systems,
  • Large-scale and Web-scale multimedia retrieval,
  • Multimedia content extraction, analysis, and indexing,
  • Multimedia analytics and knowledge discovery,
  • Multimedia machine learning, deep learning, and neural networks,
  • Relevance feedback, active learning, and transfer learning,
  • Fine-grained retrieval for multimedia,
  • Event-based indexing and multimedia understanding,
  • Semantic descriptors and novel high- or mid-level features,
  • Crowdsourcing, community contributions, and social multimedia,
  • Multimedia retrieval leveraging quality, production cues, style, framing, and affect,
  • Synthetic media generation and detection,
  • Narrative generation and narrative analysis,
  • User intent and human perception in multimedia retrieval,
  • Query processing and relevance feedback,
  • Multimedia browsing, summarization, and visualization,
  • Multimedia beyond video, including 3D data and sensor data,
  • Mobile multimedia browsing and search,
  • Multimedia analysis/search acceleration, e.g., GPU, FPGA,
  • Benchmarks and evaluation methodologies for multimedia analysis/search,
  • Privacy-aware multimedia retrieval methods and systems,
  • Fairness and explainability in multimedia analysis/search,
  • Legal, ethical and societal impact of multimedia retrieval research,
  • Applications of multimedia retrieval, e.g., news/journalism, media, medicine, sports, commerce, lifelogs, travel, security, and environment.

We encourage contributions from students working in the full space of these topics, which is defined by dimensions including:

  • Content: image, video, music, spoken audio, sensor data;
  • Tasks: retrieval, recommendation, summarization, multimedia mining;
  • Multiple modalities: multimodal fusion, cross-media retrieval;
  • Algorithms: memory-based, rule-based, model-based, deep learning;
  • Retrieval pipeline: hashing, indexing, representation, similarity metrics, query interpretation, results presentation;
  • Interaction: relevance feedback, conversational and emotional interfaces;
  • Relevance Criteria: topic, style, quality, intent;
  • Challenges: large-scale data, fine-grained retrieval, evaluation, interfaces, crowdsourcing, privacy, new applications.

The doctoral symposium will take place during the main conference in a dedicated oral session. The goal is to provide a forum for Ph.D. students to present ongoing research in a collaborative environment and to share ideas with other renowned and experienced researchers. Participants will discuss their research ideas and results, and they will receive constructive feedback from an audience consisting of peers as well as more senior people. It will be an excellent opportunity for developing person-to-person networks to the benefit of the Ph.D. students in their future careers and also of the community.

Ph.D. students whose doctoral symposium papers are accepted, and who attend the conference solely to present that paper, will be entitled to the student registration fee for the entire conference.

Eligibility

Prospective student attendees should already have a clear direction for research, and possibly have published some results. Preference will be given to students who have advanced to Ph.D. candidacy.

Maximum Length of a Paper

Each doctoral symposium paper should not be longer than 4 pages.

Important Dates

  • Paper Submission Due: February 17, 2023
  • Notification of Acceptance: March 31, 2023
  • Camera-Ready Papers Due: April 20, 2023

Single-Blind Review

ACM ICMR will use a single-blind review process for doctoral symposium paper selection. Authors should provide author names and affiliations in their manuscript. Selections will be based on the submitted 4-page paper, solely authored by the student wishing to attend. Submissions will be reviewed by the Doctoral Symposium Committee (appointed by the Doctoral Symposium Chairs). Accepted proposals will be published in the conference proceedings. Doctoral students who submit to the Doctoral Symposium are encouraged to also submit a paper on their research to the main conference. However, acceptance for participation in the Doctoral Symposium will be based solely on the paper written specifically for the event. All papers will be reviewed with respect to overall quality of presentation, potential future impact of the research on the field, and expected benefit to the other doctoral students attending the conference.

Submission Instructions

Applications to the Doctoral Symposium should include a 4-page paper summarizing the applicant’s dissertation research. The paper should include:

  • Abstract and the keywords;
  • Motivation, problem description;
  • Background and related work (including key references);
  • Novelty and significance relative to the state of the art;
  • Approach, data, methods and proposed experiments;
  • Results obtained and work in progress;
  • Specific research issues for discussion at the Doctoral Symposium.

In addition to the paper, applicants are expected to provide a one-page appendix describing the benefits they would obtain by attending the Doctoral Symposium, including:

  • A statement by the student saying why they want to attend the Symposium;
  • A statement by their advisor saying how the student would benefit by attending the Symposium.

Advisors should also specifically state whether the student has written, or is close to completing, a thesis proposal (or equivalent), and when they expect the student would defend their dissertation if they progress at a typical rate.

The appendix should be uploaded as a separate file. See the Paper Submission section.

Contact

For any questions regarding doctoral symposium submissions, please email the Doctoral Symposium Chairs:

Aisling Kelliher  aislingk@vt.edu

Jenny Benois-Pineau jenny.benois-pineau@u-bordeaux.fr

 


3-3-14(2023-06-12) ETAL 2023 Ecole d'été en Traitement automatique des langues, Marseille, France
ETAL 2023: École d’été en Traitement Automatique des Langues (Summer School in Natural Language Processing)
---------------------------------------------------------------------------------------

The summer school in natural language processing (ETAL) will take place from 12 to 16 June
2023 at the Centre International de Rencontres en Mathématiques (CIRM) in Marseille. The
school, supported by the CNRS and the GdR TAL, is aimed at Ph.D. students, researchers, and
industry practitioners who wish to improve their understanding and command of natural
language processing and its applications. It combines lectures and hands-on sessions, given
by members of the community, on the history of the field, current models, and the ethical
and societal issues of NLP.

 - Dates: 12 to 16 June 2023
 - Venue: CIRM, Luminy campus, Marseille
 - Fee: 550 euros, accommodation and meals included
 - Mandatory pre-registration (limited number of places):
https://framaforms.org/pre-inscription-a-etal-2023-1671138981
 - More information: https://etal2023.lis-lab.fr (website available soon)

## Program
4.5 days of lectures and practical sessions (50% lectures, 50% labs), divided into
fundamental and applied modules presenting the essential notions and latest advances in NLP:
- Concepts and methodology.
- Statistical learning and neural approaches.
- Ethics, reproducibility, and best practices in the field.
- Language development and NLP: the cognitive science perspective.
- Multimodal NLP and interaction.
- An invited talk highlighting the vision of a prominent researcher in the field.
- Optional hackathon.

## Prerequisites
Master's-level background with a computer science and mathematics component (algorithmics,
programming in Python, basics of linear algebra, probability and statistics, etc.).

## Speakers
- Alexandre Allauzen, Université Paris Sciences et Lettres
- Yannick Estève, Université d’Avignon
- Benoit Favre, Aix-Marseille Université
- Karën Fort, Sorbonne Université
- Abdellah Fourtassi, Aix-Marseille Université
- Hermann Ney, RWTH Aachen
- Magalie Ochs, Aix-Marseille Université
- Laure Soulier, Sorbonne Université
- Xavier Tannier, Sorbonne Université

## Venue
The school will be held at the Centre International de Rencontres Mathématiques (CIRM), on
the Luminy university campus in Marseille. Besides hosting the school, the centre provides
full-board accommodation for participants (included in the registration fee). Located in
the Calanques National Park of Marseille, it offers a unique and attractive setting,
conducive to study and reflection.

## Grants
Financial support may be requested from the organizers (the process will be detailed soon
on the website).

Note: this school is not open to Master's students; industry participants must be members
of the GdR TAL partners' club
(https://gdr-tal.ls2n.fr/club-des-partenaires/).
 

3-3-15(2023-06-15) JPC 2023, Toulouse, France

JPC 2023 - Toulouse, 15 to 17 June 2023

Registration is open / Reduced rates until 14 April inclusive

https://www.irit.fr/jpc2023/

 

Dear colleagues,

The Université de Toulouse Jean Jaurès (Mirail campus: https://www.univ-tlse2.fr/) is pleased to welcome you to the 9th Journées de Phonétique Clinique (JPC2023), from 15 to 17 June 2023.
 
You can now register, as early as possible in order to benefit from the reduced rates available until 14 April inclusive, by following this link: https://www.irit.fr/jpc2023/inscriptions/
 
The program will be announced very soon.
 
We look forward to welcoming you to Toulouse!
 
 
 
The JPC 2023 organizing committee

3-3-16(2023-06-15) Journées de Phonétique Clinique (JPC 2023), Toulouse, France

JPC 2023 - Toulouse, 15 to 17 June 2023

https://www.irit.fr/jpc2023/


3rd Call for Papers

Since their creation in 2005, the Journées de Phonétique Clinique (JPC) have been held regularly every two years. After the previous edition, organized in Belgium in 2019 by our colleagues of the Phonetics Laboratory of the University of Mons (under the aegis of the Institut de Recherche en Sciences et Technologies du Langage), the JPC return to France in 2023 (after cancellation in 2021) for their 9th edition. Co-organized by the Institut de Recherche en Informatique de Toulouse (IRIT), the Laboratoire de Neuro-Psycho-Linguistique (LNPL), the Toulouse University Hospital, and the Laboratoire Informatique d'Avignon (LIA), the event will take place at the Université de Toulouse from 15 to 17 June 2023.

An international scientific meeting, the Journées de Phonétique Clinique mainly aim to bring together and foster exchanges among researchers, clinicians, computer scientists, engineers, phoneticians, and any other professionals interested in the workings of speech, voice, and language. The JPC welcome experts as well as young researchers and students from the clinical fields (medicine, speech-language therapy), psychology, computer science, and the language sciences.

The production and perception of speech, voice, and language, in children and adults, healthy or affected by a pathology, are the JPC's core domains. They are approached from a variety of viewpoints, enabling knowledge sharing and opening new avenues for reflection, research, and collaboration.

This ninth edition will highlight the theme of speech measurement. The theme fits within a conceptual framework with multiple facets: perceptual analyses, automatic signal processing, characterization of intelligibility, of speech disorders, and of dys-/disfluencies affecting speech rate, prosody, and more. Its clinical relevance is essential: assessing the disorder, its functional consequences, and its impact on quality of life is paramount for the follow-up of patients with neurological or oncological conditions, among others.

Three plenary talks on the conference theme are planned. A round table and workshops will also be part of the program of this new edition.

Proposals for papers (400-word abstracts, excluding title, authors, and references) should address the following topics (non-exhaustive list):

  • Speech and disturbances of the perceptual, auditory, and visual systems
  • Modelling of pathological speech and voice
  • Disturbances of the oro-pharyngo-laryngeal system
  • Functional assessment of speech, language, and voice
  • Diagnosis and treatment of disorders of speech and of the spoken and singing voice
  • Instrumentation and resources in clinical phonetics
  • Cognitive and motor disorders of speech and language

Particular attention will be paid to proposals addressing the speech-measurement theme.

Important dates:

- 20 January 2023 → Abstract submission deadline via SciencesConf: https://jpc2023.sciencesconf.org/
- 30 March 2023 → Notification to authors
- 30 March to 15 May 2023 → Reduced-rate registration
- 15 May 2023 → Final version of abstracts
- 15 to 17 June 2023 → Conference

Download the JPC 2023 flyer for distribution in your labs, learned societies, etc.


3-3-17(2023-06-20) 15th International Conference on Computational Semantics (IWCS), Nancy, France

===== CFP deadline extension IWCS 2023 =====

      Paper submissions:
        15 March --> 22 March 2023
      https://softconf.com/iwcs2023/papers

============================================

15th International Conference on Computational Semantics (IWCS)

Université de Lorraine, Nancy, France

20-23 June 2023

      http://iwcs2023.loria.fr/


IWCS is the biennial meeting of SIGSEM [1], the ACL special interest
group on semantics [2]; this year's edition is organized in person by the
Loria [3] and IDMC [4] of the Université de Lorraine.

      [1] http://sigsem.org/
      [2] http://aclweb.org/
      [3] https://www.loria.fr/fr/
      [4] http://idmc.univ-lorraine.fr/

The aim of the IWCS conference is to bring together researchers
interested in any aspects of the computation, annotation, extraction,
representation and neuralisation of meaning in natural language,
whether this is from a lexical or structural semantic perspective.
IWCS embraces both symbolic and machine learning approaches to
computational semantics, and everything in between. The conference
and workshops will take place 20-23 June 2023.


=== TOPICS OF INTEREST ===

We invite paper submissions in all areas of computational semantics, in
other words all computational aspects of meaning of natural language within
written, spoken, signed, or multi-modal communication.

Presentations will take the form of oral talks and posters.

Submissions are invited on all closely related areas, including the
following:

* design of meaning representations
* syntax-semantics interface
* representing and resolving semantic ambiguity
* shallow and deep semantic processing and reasoning
* hybrid symbolic and statistical approaches to semantics
* distributional semantics
* alternative approaches to compositional semantics
* inference methods for computational semantics
* recognising textual entailment
* learning by reading
* methodologies and practices for semantic annotation
* machine learning of semantic structures
* probabilistic computational semantics
* neural semantic parsing
* computational aspects of lexical semantics
* semantics and ontologies
* semantic web and natural language processing
* semantic aspects of language generation
* generating from meaning representations
* semantic relations in discourse and dialogue
* semantics and pragmatics of dialogue acts
* multimodal and grounded approaches to computing meaning
* semantics-pragmatics interface
* applications of computational semantics


=== SUBMISSION INFORMATION ===


Two types of submission are solicited: long papers and short papers. Both
types should be submitted no later than 22 March 2023 (anywhere on earth).

Long papers should describe original research and must not exceed 8 pages
(not counting acknowledgements and references).
Short papers (typically system or project descriptions, or ongoing research)
must not exceed 4 pages (not counting acknowledgements and references).

Both types will be published in the conference proceedings and in the ACL
Anthology. Accepted papers get an extra page in the camera-ready version.

Style-files:

IWCS papers should be formatted following the common two-column structure
as used by ACL. Please use our specific style-files or the Overleaf template, taken
from ACL 2021. Similar to ACL 2021, initial submissions should be fully anonymous
to ensure double-blind reviewing.

Submitting:

Papers should be submitted in PDF format via Softconf:

https://softconf.com/iwcs2023/papers

Please make sure that you select the right track when submitting your paper.
Contact the organisers if you have problems using Softconf.

No anonymity period

IWCS 2023 does not have an anonymity period. However, we ask you to be
reasonable and not publicly advertise your preprint during (or right before) review.


=== IMPORTANT DATES ===

15 March --> 22 March 2023 (anywhere on earth) Paper submissions

17 April 2023 Decisions sent to authors

15 May 2023 Camera-ready papers due

20-23 June 2023 IWCS conference


=== CONTACT ===

For questions, contact: iwcs2023-contact@univ-lorraine.fr


Maxime Amblard, Ellen Breitholtz (the IWCS 2023 organizers)


3-3-18(2023-06-20) CfP ISA-19, 2023 Joint ACL-ISO Workshop on Interoperable Semantic Annotation, Nancy, France
CALL FOR PAPERS
---------------------------
 
ISA-19, 2023 Joint ACL - ISO Workshop on Interoperable Semantic Annotation
 
Workshop at the 2023 International Conference on Computational Semantics (IWCS 2023, https://iwcs2023.loria.fr), Nancy, France, June 20-23
 
Submission date: April 12, 2023
 
 
ISA-19 will be the 2023 edition of a series of joint workshops of the ACL Special Interest Group in Semantics (SIGSEM) and the International Organisation for Standardisation (ISO). The latest editions were held as part of the IWCS 2021 conference (ISA-17) and of the LREC 2022 conference in Marseille (ISA-18). 
 
ISA workshops bring together researchers who produce and consume annotations and representations of semantic information as expressed in text, speech, gestures, graphics, video, images, and combinations of multiple modalities. Examples of semantic annotation include the markup of events, time, space, dialogue acts, discourse relations, semantic roles, coreference, space and motion, quantification, visualisation and motion, and people and 3D objects participating in activities and events. The ISO organisation pursues the establishment and exploitation of standardised annotation methods and representation schemes in these and related areas, in support of the creation of interoperable semantic resources. The ISA workshops provide a forum for researchers to identify and discuss challenges in effective interoperable semantic annotation and to critically examine and compare existing approaches and frameworks.
 
 
SUBMISSION DETAILS:
 
Topics for submissions include, but are not limited to:
 
* methodological aspects of semantic annotation
* design and evaluation of semantic annotation schemas
* innovative methods for automated and manual annotation
* context-aware annotation learning
* integration of semantic annotation and other linguistic annotations
* considerations for merging annotations of different phenomena
* multi-layered annotations and representations
* semantic annotation, representation, and their interrelatedness
* levels of granularity in annotation schemes
* use of context in semantic annotation processes
* uncertainty and ambiguity in annotations
* semantic annotation and ontologies
* comparison of semantic annotation schemes
* annotator agreement and other metrics for evaluating semantic annotations
* qualitative evaluation of semantic annotations 
* experiments in semantic annotation
* applications of semantic annotation
* best practices for semantic annotation procedures
* semantic annotation, interpretation, and inference
* application and evaluation of standards for semantic annotation
* language- or application-specific aspects of semantic annotation 
* capturing semantic information in images and video
* issues in the annotation of specific domains of semantic information, such as:
 - events, states, processes, circumstances, facts
 - space, time, motion events, and 3D objects as participants
 - relations in discourse and dialogue
 - modality, polarity and factuality 
 - quantification and modification
 - coreference relations
 - semantic roles and predicate-argument structures
 - reference and named entities
 - attribution, sentiment, attitudes and emotions
 
Two types of submission are invited:
 
1. Research papers, describing original research; these can be either: 
   a. long (6-8 pages, with additional pages for references if needed) or 
   b. short (3-5 pages plus references);
2. Project notes, describing recent, ongoing or planned projects (3-5 pages including references).
 
Submission of papers is in PDF form through the ISA-19 submission site.
All submissions should be formatted using the IWCS 2023 instructions for submitting papers. 
 
IMPORTANT DATES:
April 12: Submission deadline
April 25: Notification of acceptance
May 15: Camera-ready submission
June 20: Workshop
 
 
ORGANISING COMMITTEE: 
Harry Bunt
Nancy Ide
Kiyong Lee
Volha Petukhova
James Pustejovsky
Laurent Romary
 
 
PROGRAMME COMMITTEE (t.b.c.):
Jan Alexandersson
Ron Artstein
Johan Bos
Harry Bunt (chair)
Stergios Chatzikyriakidis
Jae-Woong Choe
Robin Cooper
Ludivine Crible
Rodolfo Delmonte
David DeVault
Simon Dobnik
Jens Edlund
Alex Fang
Robert Gaizauskas
Kallirroi Georgila
Koiti Hasida
Nancy Ide
Elisabetta Jezek
Nikhil Krishnaswamy
Kiyong Lee
Paul Mc Kevitt
Philippe Muller
Rainer Osswald
Catherine Pelachaud
Guy Perrier
Volha Petukhova 
Massimo Poesio
Andrei Popescu-Belis
Laurent Prevot
Stephen Pulman
Matthew Purver
James Pustejovsky
Laurent Romary
Purificação Silvano
Matthew Stone
Thorsten Trippel
Carl Vogel
Menno van Zaanen
Annie Zaenen
Heike Zinsmeister
 
 
MORE INFORMATION
For the latest information see the workshop page at https://sigsem.uvt.nl/isa19/; for any questions contact the workshop chair Harry Bunt (harry.bunt@tilburguniversity.edu).
 

3-3-19(2023-06-20) InqBnB4 workshop: Inquisitiveness Below and Beyond the Sentence Boundary, Nancy, France

InqBnB4 workshop: Inquisitiveness Below and Beyond the Sentence Boundary

Nancy (France), 20 June 2023, hosted by IWCS 2023

https://iwcs2023.loria.fr/inqbnb4-inquisitiveness-below-and-beyond-the-sentence-boundary/

InqBnB is a workshop series bringing together researchers interested in the semantics and
pragmatics of interrogatives (questions or embedded interrogative clauses). This series was
originally organized by the Inquisitive Semantics Group of the Institute for Logic, Language
and Computation (ILLC) from the University of Amsterdam. As such, the focus point mainly
revolves around analyses using or related to inquisitive semantics.

After three successful editions in the Netherlands, we hope to open the inquisitive community
to a wider audience. The 4th edition is planned for 20 June 2023, just before IWCS 2023
(International Conference on Computational Semantics). As invited speakers we are welcoming
Wataru Uegaki (University of Edinburgh) and Todor Koev (Universität Konstanz).

InqBnB4 invites submissions on original and unpublished research focussed on the properties
of inquisitive content. We are mainly interested in theoretical questions, formal models and
empirical work. But we are also welcoming papers based on statistical or neural models,
provided their main goal is to bring new insights regarding inquisitiveness.

Here are some examples of questions of interest:
 * Which operators (connectives, quantifiers, modals, conditionals) generate inquisitiveness?
 * How do these operators project the inquisitive content of their arguments?
  * e.g. what triggers maximality, exhaustivity or uniqueness of readings?
 * How does inquisitive content interact with informative content in compositional semantics?
  * e.g. how do interrogative words interact with negative polarity items, free choice items,
      indefinites or plurality?
 * How do conventions of use interact with inquisitive content?
  * e.g. how can non-answering responses (e.g. clarification questions) be handled?
 * In which ways is pragmatics sensitive to inquisitive content?
  * e.g. how do answer bias and ignorance inferences arise?
 * What kind of discourse anaphora are licensed by inquisitive expressions?
  * e.g. does dynamic inquisitive semantics manage to correctly derive donkey anaphora?

*Submission:*
Submission link on SoftConf:
https://softconf.com/iwcs2023/inqbnb4/

Submitted papers must not exceed eight (8) pages (not counting acknowledgements,
references and appendices). Accepted papers get an extra page in the camera-ready version.
Submitted papers should be formatted following the common two-column structure as used by
ACL. Please use the specific style-files or the Overleaf template for IWCS 2023, taken from
ACL 2021. Initial submissions should be fully anonymous to ensure double-blind reviewing.
The proceedings will be published in the ACL anthology.

*Important dates:*
 * Submission deadline: 14 April
 * Author notification: 12 May
 * Camera ready: 9 June
 * Workshop day: 20 June

*Organizers:*
 * Valentin D. Richard [1], Loria, Université de Lorraine
 * Philippe de Groote [2], Loria, INRIA Nancy – Grand Est
 * Floris Roelofsen [3], ILLC, Universiteit van Amsterdam

*Programme committee:*
 * Local chair: Valentin D. Richard, Université de Lorraine
 * Chair: Floris Roelofsen, Universiteit van Amsterdam
 * Maria Aloni [11], Universiteit van Amsterdam
 * Lucas Champollion [4], New York University (NYU)
 * Jonathan Ginzburg [5], Université Paris Cité
 * Philippe de Groote [2], INRIA Nancy – Grand Est
 * Todor Koev [12], Universität Konstanz
 * Jakub Dotlačil [6], Universiteit Utrecht
 * Reinhard Muskens [7], Universiteit van Amsterdam
 * Maribel Romero [8], Universität Konstanz
 * Wataru Uegaki [9], University of Edinburgh
 * Yimei Xiang [10], Rutgers Linguistics

[1] https://valentin-d-richard.fr/
[2] https://members.loria.fr/PdeGroote/
[3] https://www.florisroelofsen.com/
[4] https://champollion.com/
[5] http://www.llf.cnrs.fr/fr/Gens/Ginzburg
[6] http://www.jakubdotlacil.com/
[7] http://freevariable.nl/
[8] https://ling.sprachwiss.uni-konstanz.de/pages/home/romero/
[9] https://www.wataruuegaki.com/
[10] https://yimeixiang.wordpress.com/
[11] https://www.marialoni.org/
[12] https://todorkoev.weebly.com/

Back  Top

3-3-20(2023-06-20) SIG Workshop on Speech Prosody and Beyond, Seoul, South Korea

Hae-Sung Jeon and colleagues are organizing a workshop titled Speech Prosody and Beyond,

June 20-23 in Seoul, with abstracts due February 5. 

Details are at https://ukskprosodynetwork.github.io/ .

 

Back  Top

3-3-21(2023-07-15) MLDM 2023 : 18th International Conference on Machine Learning and Data Mining, New York,NY, USA

MLDM 2023 : 18th International Conference on Machine Learning and Data Mining
http://www.mldm.de
 
When    Jul 16, 2023 - Jul 21, 2023
Where    New York, USA
Submission Deadline    Jan 15, 2023
Notification Due    Mar 18, 2023
Final Version Due    Apr 5, 2023
Categories:    machine learning   data mining   pattern recognition   classification
 
Call For Papers
MLDM 2023
18th International Conference on Machine Learning and Data Mining
July 15 - 19, 2023, New York, USA

The Aim of the Conference
The aim of the conference is to bring together researchers from all over the world who deal with machine learning and data mining in order to discuss the recent status of the research and to direct further developments. Basic research papers as well as application papers are welcome.

Chair
Petra Perner Institute of Computer Vision and Applied Computer Sciences IBaI, Germany

Program Committee
Piotr Artiemjew University of Warmia and Mazury in Olsztyn, Poland
Sung-Hyuk Cha Pace University, USA
Ming-Ching Chang University of Albany, USA
Mark J. Embrechts Rensselaer Polytechnic Institute and CardioMag Imaging, Inc, USA
Robert Haralick City University of New York, USA
Adam Krzyzak Concordia University, Canada
Chengjun Liu New Jersey Institute of Technology, USA
Krzysztof Pancerz University Rzeszow, Poland
Dan Simovici University of Massachusetts Boston, USA
Agnieszka Wosiak Lodz University of Technology, Poland
more to be announced...


Topics of the conference

Paper submissions should be related but not limited to any of the following topics:

Association Rules
Audio Mining
Automatic Semantic Annotation of Media Content
Bayesian Models and Methods
Capability Indices
Case-Based Reasoning and Associative Memory
Case-Based Reasoning and Learning
Classification & Prediction
Classification and Interpretation of Images, Text, Video
Classification and Model Estimation
Clustering
Cognition and Computer Vision
Conceptional Learning
Conceptional Learning and Clustering
Content-Based Image Retrieval
Control Charts
Decision Trees
Design of Experiment
Desirabilities
Deviation and Novelty Detection
Feature Grouping, Discretization, Selection and Transformation
Feature Learning
Frequent Pattern Mining

Back  Top

3-3-22 DSTC11 Track 4: Robust and Multilingual Automatic Evaluation Metrics for Open-Domain Dialogue Systems (DSTC11.T4)

Call for Participation

TRACK GOALS AND DETAILS: Two main goals and tasks:
•    Task 1: Propose and develop effective Automatic Metrics for evaluation of open-domain multilingual dialogs.
•    Task 2: Propose and develop Robust Metrics for dialogue systems trained with back translated and paraphrased dialogs in English.


EXPECTED PROPERTIES OF THE PROPOSED METRICS:
•    High correlation with human annotated assessments.
•    Explainable metrics in terms of the quality of the model-generated responses.
•    Participants can propose their own metric or optionally improve the baseline evaluation metric deep AM-FM (Zhang et al, 2020).
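
To make the first property concrete: metric quality in this kind of track is typically judged by correlating the automatic scores against the human annotations. As a minimal pure-Python sketch (the `pearson` helper below is purely illustrative and not part of the track's tooling; organizers may equally use Spearman rank correlation), assuming metric scores and human ratings arrive as parallel lists:

```python
def pearson(xs, ys):
    """Pearson correlation between automatic metric scores (xs)
    and human ratings (ys), given as parallel numeric lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Covariance numerator and the two standard-deviation factors.
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# A metric that tracks human judgments closely correlates near 1.0:
r = pearson([0.2, 0.5, 0.9, 0.4], [1, 3, 5, 2])
```

A higher correlation with the human turn/dialogue-level assessments is what the track rewards; explainability is assessed separately.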

DATASETS:
For training: up to 18 curated human-human multilingual datasets (+3M turns), with turn/dialogue-level automatic annotations such as toxicity or sentiment, among others.
Dev/Test: up to 10 curated human-chatbot multilingual datasets (+150k turns), with turn/dialogue-level human annotations including QE metrics or cosine similarity.
The data is translated and back-translated into several languages (English, Spanish, and Chinese), and several annotated paraphrases are provided for each dataset.

BASELINE MODEL:
The default choice is Deep AM-FM (Zhang et al, 2020). This model has been adapted to be able to evaluate multilingual datasets, as well as to work with paraphrased and back translated sentences.

REGISTRATION AND FURTHER INFORMATION:
ChatEval: https://chateval.org/dstc11
GitHub: https://github.com/Mario-RC/dstc11_track4_robust_multilingual_metrics

PROPOSED SCHEDULE:
Training/Validation data release: November–December 2022
Test data release: mid-March 2023
Entry submission deadline: mid-March 2023
Submission of final results: end of March 2023
Final result announcement: early April 2023
Paper submission: March–May 2023
Workshop: July–September 2023, in a venue to be announced with DSTC11

ORGANIZATIONS:
Universidad Politécnica de Madrid (Spain)
National University of Singapore (Singapore)
Tencent AI Lab (China)
New York University (USA)
Carnegie Mellon University (USA)
Back  Top

3-3-23(2023-08-07) 20th International Congress of the Phonetic Sciences (ICPhS), Prague, Czech Republic

We would like to welcome you to Prague for the 20th International Congress of the Phonetic Sciences (ICPhS), which takes place on August 7–11, 2023, in Prague, Czech Republic.

 

ICPhS takes place every four years, is held under the auspices of the International Phonetic Association and provides an interdisciplinary forum for the presentation of basic and applied research in the phonetic sciences. The main areas covered by the Congress are speech production, speech acoustics, speech perception, speech prosody, sound change, phonology, sociophonetics, language typology, first and second language acquisition, forensic phonetics, speaking styles, voice quality, clinical phonetics and speech technology.

 

We invite papers on original, unpublished research in the phonetic sciences. The theme of the Congress is “Intermingling Communities and Changing Cultures”. Papers related to this theme are especially encouraged, but we welcome papers related to any of the Congress’ scientific areas. The deadline for abstract submission is December 1, 2022, and for full-paper submission December 8, 2022.

 

We also invite proposals for special sessions covering emerging topics, challenges, interdisciplinary research, or subjects that could foster useful debate in the phonetic sciences. The submission deadline is May 20, 2022.

 

All information is available at https://www.icphs2023.org/, where it is also possible to register for email notifications concerning the congress.

 

Contact: icphs2023@guarant.cz

 

Back  Top

3-3-24(2023-08-07) IPA bursaries for ICPhS

The president of the IPA, Michael Ashby, would like to call attention to the IPA's generous scheme of student awards and travel bursaries for ICPhS. He hopes that many of us will encourage our students to apply.

https://www.internationalphoneticassociation.org/news/202210/ipa-student-awardstravel-bursaries-and-g%C3%B6sta-bruce-scholarships-icphs-2023

Back  Top

3-3-25(2023-08-20) Special session at Interspeech 2023 on DIarization of SPeaker and LAnguage in Conversational Environments [DISPLACE] Challenge.

We would like to bring to your notice the launch of the special session at Interspeech 2023 on DIarization of SPeaker and LAnguage in Conversational Environments [DISPLACE] Challenge.  

 

The DISPLACE challenge entails a first-of-its-kind task: performing speaker and language diarization on the same data, which contains multi-speaker social conversations in multilingual code-mixed speech. In multilingual communities, social conversations frequently involve code-mixed and code-switched speech. In such cases, various speech processing systems need to perform speaker and language segmentation before any downstream task. Current speaker diarization systems are not equipped to handle multilingual conversations, while language recognition systems may not be able to handle the same talker speaking multiple languages within the same recording. 


With this motivation, the DISPLACE challenge attempts to benchmark and improve Speaker Diarization (SD) in multilingual settings and Language Diarization (LD) in multi-speaker settings, using the same underlying dataset. For this challenge, a natural multilingual, multi-speaker conversational dataset will be distributed for development and evaluation purposes. No training data will be provided, and participants are free to use any resources to train their models. The challenge reflects the theme of Interspeech 2023 - 'Inclusive Spoken Language Science and Technology – Breaking Down Barriers' - in its true sense.  
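
As a rough illustration of what diarization scoring involves, here is a minimal frame-level error-rate sketch in Python. It assumes hypothesis speaker labels have already been mapped onto the reference labels; standard Diarization Error Rate scoring (which the challenge presumably builds on) additionally finds the optimal label mapping and applies forgiveness collars, so `frame_der` below is only an illustrative simplification, not the official scoring tool:

```python
def frame_der(ref, hyp):
    """Simplified frame-level diarization error rate.

    ref, hyp: parallel per-frame speaker labels; None marks silence.
    Assumes hypothesis labels are already mapped onto reference labels
    (real DER scoring finds the optimal mapping first).
    """
    speech = sum(1 for r in ref if r is not None)  # scored reference frames
    # Missed speech: reference speaker active, hypothesis silent.
    miss = sum(1 for r, h in zip(ref, hyp) if r is not None and h is None)
    # False alarm: hypothesis speaker active during reference silence.
    fa = sum(1 for r, h in zip(ref, hyp) if r is None and h is not None)
    # Speaker confusion: both active but labels disagree.
    conf = sum(1 for r, h in zip(ref, hyp)
               if r is not None and h is not None and r != h)
    return (miss + fa + conf) / speech

# One confused frame plus one false alarm over four speech frames:
der = frame_der(['A', 'A', 'B', 'B', None, None],
                ['A', 'B', 'B', 'B', None, 'A'])  # 0.5
```

An analogous computation over per-frame language labels gives a language diarization error, which is what makes running SD and LD on the same recordings a natural pairing.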

 

Registrations are open for this challenge which will contain two tracks - a) Speaker diarization track and b) Language diarization track. 

 

A baseline system and an open leaderboard is available to the participants. The DISPLACE challenge is split into two phases, where the first phase is linked to the Interspeech paper submission deadline, while the second phase aligns with the camera ready submission deadline. For more details, dates and to register, kindly visit the DISPLACE challenge website: https://displace2023.github.io/

 

We look forward to your team attempting to 'displace' the state of the art in speaker and language diarization. 

 

Thank you and Namaste,

The DISPLACE team 
Back  Top

3-3-26(2023-08-26) CfP 12th Speech Synthesis Workshop - Grenoble-France
CfP 12th Speech Synthesis Workshop - Grenoble-France - https://ssw2023.org - August 26-28, 2023:
 The Speech Synthesis Workshop (SSW) is the main meeting place for research and innovation in speech synthesis, i.e. predicting speech signals from text input. SSW welcomes contributions not only on core TTS technology but also from the contributing sciences: phonetics, phonology, linguistics, and neuroscience, through to multimodal human-machine interaction.
 For more information, please consult: https://ssw2023.org/
 Deadlines:
  • 26 April, 2023 Initial paper submission (at least, title, authors and abstract)
  • 3 May, 2023 Final paper submission (only updates to the PDF are allowed)
 Note also that the data for the Blizzard Challenge 2023 on French have been released: https://www.synsig.org/index.php/Blizzard_Challenge_2023
 Deadlines:
  • 5 March 2023 Team registration closes
 
Back  Top

3-3-27(2023-08-29) Blizzard Challenge 2023
We are delighted to announce the call for participation in the Blizzard Challenge 2023. This is an open evaluation of corpus-based speech synthesis systems using common datasets and a large listening test.

This year, the challenge will provide a French dataset from two native speakers. The two tasks involve building voices from this data. Please read the full announcement and the rules at: https://www.synsig.org/index.php/Blizzard_Challenge_2023

Please register by following the instructions on the web page.
Important: please send all communications about Blizzard to the official address blizzard-challenge-organisers@googlegroups.com and not to our personal addresses.


Please feel free to distribute this announcement to other relevant mailing lists.

Olivier Perrotin & Simon King
Back  Top

3-3-28(2023-08-30) CfP Sixth IEEE International Conference on Multimedia Information Processing and Retrieval (MIPR 2023), Singapore

*********************************************
*** Submission Deadline: 19 April 2023 PST ***
*********************************************

CALL FOR PAPERS

Sixth IEEE International Conference on Multimedia Information Processing and Retrieval (MIPR 2023)
30 August - 1 September 2023, Singapore
http://ieee-mipr.org/


The Sixth IEEE International Conference on Multimedia Information Processing
and Retrieval (IEEE MIPR 2023) will take place both physically and virtually,
August 30 – September 1, 2023, in Singapore. The conference will provide a
forum for original research contributions and practical system design,
implementation, and applications of multimedia information processing and
retrieval.

Topics (Please see http://ieee-mipr.org/call_papers.html).
Topics of interest include, but are not limited to:

1. Multimedia Retrieval
2. Machine Learning/Deep Learning/Data Mining
3. Content Understanding and Analytics
4. Multimedia and Vision
5. Networks for Multimedia Systems
6. Systems and Infrastructures
7. Data Management
8. Novel Applications
9. Internet of Multimedia Things
and others.

Paper Submission:

The conference will accept regular papers (6 pages), short papers (4 pages),
and demo papers (4 pages). Authors are encouraged to compare their approaches,
qualitatively or quantitatively, with existing work and explain the strength
and weakness of the new approaches. We are planning to invite selected
submissions to journal special issues.
Instructions and a link to the submission website are available here:
https://cmt3.research.microsoft.com/MIPR2023

Important Dates (http://ieee-mipr.org/dates.html):
  - Regular Paper (6 pages) and Short Paper (4 pages) Submission Due: April 19, 2023
  - Notification of Decision: May 25, 2023
  - Camera-ready deadline: July 10, 2023
  - Conference Date: August 30-Sep 1, 2023
--
Ichiro Ide (ide@i.nagoya-u.ac.jp)
Nagoya University, Graduate School of Informatics / Mathematical & Data Science Center
Phone/Facsimile: +81-52-789-3313
Address: #IB457, 1 Furo-cho, Chikusa-ku, Nagoya 464-8601, Japan
WWW: http://www.cs.is.i.nagoya-u.ac.jp/users/ide/index.html


Back  Top

3-3-29(2023-09-04) CfP 26th Intern.Conf. on text, speech and dialogue (TSD 2023), Plzen (Pilsen), Czech Republic

***************************************************************************
                     TSD 2023 - SECOND CALL FOR PAPERS
***************************************************************************

                 Twenty-sixth International Conference on
                   TEXT, SPEECH and DIALOGUE (TSD 2023)

                Pilsen, Czech Republic, 4-7 September 2023
                       http://www.tsdconference.org/


The conference is organized by the Faculty of Applied Sciences, University
of West Bohemia, Plzen (Pilsen) in co-operation with the Faculty of
Informatics, Masaryk University, Brno, and is supported by the
International Speech Communication Association.

Venue: Plzen (Pilsen), Czech Republic


THE IMPORTANT DATES:

Deadline for submission of contributions:                 April 23, 2023
Notification of acceptance or rejection:                 May 22, 2023
Deadline for submission of accepted camera-ready papers: June 4, 2023
TSD 2023:                                                 September 4-7, 2023


TSD SERIES

The TSD series has evolved as a prime forum for interaction between
researchers in both spoken and written language processing from all over
the world. The TSD conference proceedings form a book published by
Springer-Verlag in their Lecture Notes in Artificial Intelligence (LNAI)
series. The TSD proceedings are regularly indexed by Thomson Reuters
Conference Proceedings Citation Index. Moreover, LNAI series is listed in
all major citation databases such as DBLP, SCOPUS, EI, INSPEC or COMPENDEX.


KEYNOTE SPEAKERS (known so far)

* Philippe Blache -- Director of Research at the Laboratoire Parole et
  Langage (LPL), Institute of Language, Communication and the Brain CNRS
  & Aix-Marseille University, France

* Ivan Habernal -- Head of the Trustworthy Human Language Technologies
  (TrustHLT) Group, Department of Computer Science, Technische Universitat
  Darmstadt, Germany

* Daniela Braga (negotiations in progress) -- Founder and CEO at
  Defined.ai, Bellevue, Washington, United States


TOPICS

Topics of the conference will include (but are not limited to):

    Corpora and Language Resources (monolingual, multilingual, text and spoken
    corpora, large web corpora, disambiguation, specialized lexicons,
    dictionaries)

    Speech Recognition (multilingual, continuous, emotional speech, handicapped
    speaker, out-of-vocabulary words, alternative way of feature extraction,
    new models for acoustic and language modelling)

    Tagging, Classification and Parsing of Text and Speech (morphological and
    syntactic analysis, synthesis and disambiguation, multilingual processing,
    sentiment analysis, credibility analysis, automatic text labeling,
    summarization, authorship attribution)

    Speech and Spoken Language Generation (multilingual, high fidelity speech
    synthesis, computer singing)

    Semantic Processing of Text and Speech (information extraction, information
    retrieval, data mining, semantic web, knowledge representation, inference,
    ontologies, sense disambiguation, plagiarism detection, fake news
    detection)

    Integrating Applications of Text and Speech Processing (machine
    translation, natural language understanding, question-answering strategies,
    assistive technologies)

    Automatic Dialogue Systems (self-learning, multilingual, question-answering
    systems, dialogue strategies, prosody in dialogues)

    Multimodal Techniques and Modelling (video processing, facial animation,
    visual speech synthesis, user modelling, emotions and personality
    modelling)

Papers dealing with text and speech processing in linguistic environments
other than English are strongly encouraged (as long as they are written in
English).


PROGRAM COMMITTEE

Elmar Noth, Friedrich-Alexander-Universitat Erlangen-Nurnberg, Germany (General Chairman)
Rodrigo Agerri, University of the Basque Country, Spain
Eneko Agirre, University of the Basque Country, Spain
Vladimir Benko, Slovak Academy of Sciences, Slovakia
Archna Bhatia, Carnegie Mellon University, United States
Jan Cernocky, Brno University of Technology, Czechia
Simon Dobrisek, University of Ljubljana, Slovenia
Kamil Ekstein, University of West Bohemia, Czechia
Karina Evgrafova, Saint-Petersburg State University, Russia
Yevhen Fedorov, Cherkasy State Technological University, Ukraine
Volker Fischer, EML Speech Technology GmbH, Germany
Darja Fiser, Institute of Contemporary History, Slovenia
Lucie Flek, Philipps-Universitat Marburg, Germany
Bjorn Gamback, Norwegian University of Science and Technology, Norway
Radovan Garabik, Slovak Academy of Sciences, Slovakia
Alexander Gelbukh, Instituto Politecnico Nacional, Mexico
Louise Guthrie, University of Texas at El Paso, United States
Tino Haderlein, Friedrich-Alexander-Universitat Erlangen-Nurnberg, Germany
Jan Hajic, Charles University, Czechia
Eva Hajicova, Charles University, Czechia
Yannis Haralambous, IMT Atlantique, France
Hynek Hermansky, Johns Hopkins University, United States
Jaroslava Hlavacova, Charles University, Czechia
Ales Horak, Masaryk University, Czechia
Eduard  Hovy, Carnegie Mellon University, United States
Denis Jouvet, Inria, France
Maria Khokhlova, Saint Petersburg State University, Russia
Aidar Khusainov, Tatarstan Academy of Sciences, Russia
Daniil Kocharov, Saint Petersburg State University, Russia
Miloslav Konopik, University of West Bohemia, Czechia
Ivan Kopecek, Masaryk University, Czechia
Valia Kordoni, Humboldt University of Berlin, Germany
Evgeny Kotelnikov, Vyatka State University, Russia
Pavel Kral, University of West Bohemia, Czechia
Siegfried Kunzmann, Amazon Alexa Machine Learning, United States
Nikola Ljubesic, Jozef Stefan Institute, Croatia
Natalija Loukachevitch, Lomonosov Moscow State University, Russia
Bernardo Magnini , Fondazione Bruno Kessler, Italy
Oleksandr Marchenko, Taras Shevchenko National University of Kyiv, Ukraine
Vaclav Matousek, University of West Bohemia, Czechia
Roman Moucek, University of West Bohemia, Czechia
Agnieszka  Mykowiecka, Polish Academy of Sciences, Poland
Hermann Ney, RWTH Aachen University, Germany
Joakim Nivre, Uppsala University, Sweden
Juan Rafael  Orozco-Arroyave, University of Antioquia, Colombia
Karel Pala, Masaryk University, Czechia
Maciej Piasecki, Wroclaw University of Science and Technology, Poland
Josef Psutka, University of West Bohemia, Czechia
James  Pustejovsky, Brandeis University, United States
German Rigau, University of the Basque Country, Spain
Paolo Rosso, Universitat Politecnica de Valencia, Spain
Leon Rothkrantz, Delft University of Technology, Netherlands
Anna Rumshisky, University of Massachusetts Lowell, United States
Milan Rusko, Slovak Academy of Sciences, Slovakia
Pavel Rychly, Masaryk University, Czechia
Mykola Sazhok, International Research and Training Center for Information Technologies and Systems, Ukraine
Odette Scharenborg, Delft University of Technology, Netherlands
Pavel Skrelin, Saint Petersburg State University, Russia
Pavel Smrz, Brno University of Technology, Czechia
Petr Sojka, Masaryk University, Czechia
Georg Stemmer, Intel Corp., Germany
Marko Robnik Sikonja, University of Ljubljana, Slovenia
Marko Tadic, University of Zagreb, Croatia
Jan Trmal, Johns Hopkins University, Czechia
Tamas Varadi, Hungarian Academy of Sciences, Hungary
Zygmunt Vetulani, Adam Mickiewicz University, Poland
Aleksander Wawer, Polish Academy of Sciences, Poland
Pascal Wiggers, Amsterdam University of Applied Sciences, Netherlands
Marcin Wolinski, Polish Academy of Sciences, Poland
Alina Wroblewska, Polish Academy of Sciences, Poland
Victor Zakharov, Saint Petersburg State University, Russia
Jerneja Zganec Gros, Alpineon, Slovenia


FORMAT OF THE CONFERENCE

The conference programme will include invited keynote speeches given by
respected influential researchers/academics, presentations of accepted
papers in both oral and poster/demonstration form, and interesting social
events. The papers will be presented in plenary and topic-oriented
sessions.

Social events including an excursion to the world-famous Pilsner Urquell
Brewery and a trip in the vicinity of Plzen will allow additional informal
interactions of the conference participants.


SUBMISSION OF PAPERS

Authors are invited to submit a full paper not exceeding 12 pages (in
total, i.e. with all figures, bibliography, etc. included) formatted in the
LNAI/LNCS style. Those accepted will be presented either orally or as
posters. The decision about the presentation format will be based on the
recommendation of the reviewers. Each paper is examined by at least
3 reviewers and the process is double blind.

The authors are asked to submit their papers using the on-line submission
interface accessible from the TSD 2023 web application at
https://www.kiv.zcu.cz/tsd2023/index.php?form=mypapers

The papers submitted to the TSD 2023 must not be under review at any other
conference or other type of publication during the TSD 2023 review cycle,
and must not be previously published or accepted for publication elsewhere.

Authors are also invited to present actual projects, developed software or
interesting material relevant to the topics of the conference. The
presenters of demonstrations should provide an abstract not exceeding one
page. The demonstration abstracts will not appear in the conference
proceedings.


OFFICIAL LANGUAGE

The official language of the conference is English.


ACCOMMODATION

The organizing committee arranged discounted accommodation of appropriate
standards at the conference venue. Details about the conference
accommodation will be available on the TSD 2023 web page at
https://www.kiv.zcu.cz/tsd2023/index.php?page=accommodation

The prices of the accommodation (and limited-budget options) will be
available on the conference website, too.


ADDRESS

All correspondence regarding the conference should be addressed to

    TSD 2023 - KIV
    Faculty of Applied Sciences, University of West Bohemia
    Univerzitni 8, 306 14 Plzen, Czech Republic
    Phone: +420 730 851 103
    Fax: +420 377 632 402 (mark the material with letters 'TSD')
    E-mail: tsd2023@tsdconference.org

The e-mail and the conference phone is looked after by the TSD 2023
conference secretary Ms Marluce Quaresma (speaks English, Portuguese, and
Czech).

The official TSD 2023 homepage is: http://www.tsdconference.org/


LOCATION

The city of Plzen (or Pilsen in Germanic languages) is situated in the
heart of West Bohemia at the confluence of four rivers: Uhlava, Uslava,
Radbuza, and Mze. With its approx. 171,000 inhabitants it is the fourth
largest city in the Czech Republic and an important industrial, commercial,
and administrative centre. It is also the capital of the Pilsen Region. In
addition, it has been elected the European Capital of Culture for 2015 by
the Council of the European Union.

The city of Plzen has a convenient location in the centre of West Bohemia.
The place lay at the crossroads of important medieval trade routes and
nowadays it naturally forms an important highway and railroad junction;
thus, it is easily accessible using both individual and public means of
transport.

Plzen lies 85 km (53 mi) south-westwards from the Czech capital Prague, 222
km (138 mi) from the Bavarian capital Munich, 148 km (92 mi) from the Saxon
capital Dresden, and 174 km (108 mi) from the Upper Austrian capital Linz.

The closest international airport is the Vaclav Havel Airport Prague, which
is 75 km (47 mi) away and one can get from there to Plzen very easily
within about two hours by Prague public transport and a train/bus. 

 

Back  Top

3-3-30(2023-09-10) Cfp Affective Computing and Intelligent Interaction (ACII) Conference 2023, Cambridge, MA, USA


 

The Association for the Advancement of Affective Computing (AAAC) invites you to join us at our 11th International Conference on Affective Computing and Intelligent Interaction (ACII), which will be held in Cambridge, Massachusetts, USA, on September 10–13, 2023. 

The Conference series on Affective Computing and Intelligent Interaction is the premier international venue for interdisciplinary research on the design of systems that can recognize, interpret, and simulate human emotions and, more generally, affective phenomena. All accepted papers are expected to be included in IEEE Xplore (conditional on the approval by IEEE Computer Society) and indexed by EI. A selection of the best articles at ACII 2023 will be invited to submit extended versions to the IEEE Transactions on Affective Computing.

The theme of ACII 2023 is “Affective Computing: Context and Multimodality”. Fully understanding, predicting, and generating affective processes undoubtedly requires the careful integration of multiple contextual factors (e.g., gender, personality, relationships, goals, environment, situation, and culture), information modalities (e.g., audio, images, text, touch, and smells) and evaluation in ecological environments. Thus, ACII 2023 especially welcomes submitted research that assesses and advances Affective Computing’s ability to do this integration.

Topics of interest include, but are not limited to:

Recognition and Synthesis of Human Affect from ALL Modalities

  • Multimodal Modeling of Cognitive and Affective States

  • Contextualized Modeling of Cognitive and Affective States

  • Facial and Body Gesture Recognition, Modeling and Animation
  • Affective Speech Analysis, Recognition and Synthesis
  • Recognition and Synthesis of Auditory Affect Bursts (Laughter, Cries, etc.)
  • Motion Capture for Affect Recognition
  • Affect Recognition from Alternative Modalities (Physiology, Brain Waves, etc.)
  • Affective Text Processing and Sentiment Analysis
  • Multimodal Data Fusion for Affect Recognition
  • Synthesis of Multimodal Affective Behavior
  • Summarisation of Affective Behavior


Affective Science using Affective Computing Tools

  • Studies of affective behavior perception using computational tools

  • Studies of affective behavior production using computational tools

  • Studies of affect in medical/clinical settings using computational tools

  • Studies of affect in context using computational tools


Psychology & Cognition of Affect in Designing Computational Systems

  • Computational Models of Affective Processes

  • Issues in Psychology & Cognition of Affect in Affective Computing Systems

  • Cultural Differences in Affective Design and Interaction 

 

Affective Interfaces

  • Interfaces for Monitoring and Improving Mental and Physical Well-Being

  • Design of Affective Loop and Affective Dialogue Systems

  • Human-Centred Human-Behaviour-Adaptive Interfaces

  • Interfaces for Attentive & Intelligent Environments

  • Mobile, Tangible and Virtual/Augmented Multimodal Proactive Interfaces

  • Distributed/Collaborative Multimodal Proactive Interfaces

  • Tools and System Design Issues for Building Affective and Proactive Interfaces

  • Evaluation of Affective, Behavioural, and Proactive Interfaces

 Affective, Social and Inclusive Robotics and Virtual Agents

  • Artificial Agents for Supporting Mental and Physical Well-Being
  • Emotion in Robot and Virtual Agent Cognition and Action
  • Embodied Emotion
  • Biologically-Inspired Architectures for Affective and Social Robotics
  • Developmental and Evolutionary Models for Affective and Social Robotics
  • Models of Emotion for Embodied Conversational Agents
  • Personality in Embodied Conversational Agents
  • Memory, Reasoning, and Learning in Affective Conversational Agents


Affect and Group Emotions

  • Analyzing and modeling groups taking into account emergent states and/or emotions

  • Integration of artificial agents (robots, virtual characters) in the group life by leveraging its affective loop: interaction paradigms, strategies, modalities, adaptation

  • Collaborative affective interfaces (e.g., for inclusion, for education, for games and entertainment)

Open Resources for Affective Computing

  • Shared Datasets for Affective Computing

  • Benchmarks for Affective Computing

  • Open-source Software/Tools for Affective Computing

 

Fairness, Accountability, Privacy, Transparency and Ethics in Affective Computing   

  • Bias, imbalance and inequalities in data and modeling approaches in the context of Affective Computing

  • Bias mitigation in the context of Affective Computing

  • Explainability and Transparency in the context of Affective Computing

  • Privacy-preserving affect sensing and modeling

  • Ethical aspects in the context of Affective Computing


Applications

  • Health and well-being
  • Education
  • Entertainment
  • Consumer Products
  • User Experience

Important dates

Main track submissions: 14 April 2023

Decision notification to authors: 2 June 2023

Camera ready submission for main track: 16 June 2023

 

The remaining important dates can be found at the ACII website.

 

We hope to see you at ACII 2023!

ACII2023 Organizers

AFFECTIVE COMPUTING & INTELLIGENT INTERACTION

Back  Top

3-3-31(2023-09-11) 24th Annual Meeting of SIGDIAL/INLG, Prague, Czech Republic

The 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL) and the 16th International Natural Language Generation Conference (INLG) will be held jointly in Prague on September 11-15, 2023.

 

The SIGDIAL venue provides a regular forum for the presentation of cutting edge research in dialogue and discourse to both academic and industry researchers, continuing a series of 23 successful previous meetings. The conference is sponsored by the SIGDIAL organization - the Special Interest Group in discourse and dialogue for ACL and ISCA.

 

Topics of Interest

 

We welcome formal, corpus-based, implementation, experimental, or analytical work on discourse and dialogue including, but not restricted to, the following themes:

 

  *   Discourse Processing: Rhetorical and coherence relations, discourse parsing and discourse connectives. Reference resolution. Event representation and causality in narrative. Argument mining. Quality and style in text. Cross-lingual discourse analysis. Discourse issues in applications such as machine translation, text summarization, essay grading, question answering and information retrieval. Discourse issues in text generated by large language models.

  *   Dialogue Systems: Task oriented and open domain spoken, multi-modal, embedded, situated, and text-based dialogue systems, their components, evaluation and applications. Knowledge representation and extraction for dialogue. State representation, tracking and policy learning. Social and emotional intelligence. Dialogue issues in virtual reality and human-robot interaction. Entrainment, alignment and priming. Generation for dialogue. Style, voice, and personality. Safety and ethics issues in Dialogue.

  *   Corpora, Tools and Methodology: Corpus-based and experimental work on discourse and dialogue, including supporting topics such as annotation tools and schemes, crowdsourcing, evaluation methodology and corpora.

  *   Pragmatic and Semantic Modeling: Pragmatics and semantics of conversations (i.e., beyond a single sentence), e.g., rational speech act, conversation acts, intentions, conversational implicature, presuppositions.

  *   Applications of Dialogue and Discourse Processing Technology.

 

Submissions

 

The program committee welcomes the submission of long papers, short papers, and demo descriptions. Submitted long papers may be accepted for oral or poster presentation. Accepted short papers will be presented as posters.



  *   Long paper submissions must describe substantial, original, completed and unpublished work. Wherever appropriate, concrete evaluation and analysis should be included. Long papers must be no longer than 8 pages, including title, text, figures and tables. An unlimited number of pages is allowed for references. Two additional pages are allowed for appendices containing sample discourses/dialogues and algorithms, and an extra page is allowed in the final version to address reviewers’ comments.

  *   Short paper submissions must describe original and unpublished work. Please note that a short paper is not a shortened long paper. Instead short papers should have a point that can be made in a few pages, such as a small, focused contribution; a negative result; or an interesting application nugget. Short papers should be no longer than 4 pages including title, text, figures and tables. An unlimited number of pages is allowed for references. One additional page is allowed for sample discourses/dialogues and algorithms, and an extra page is allowed in the final version to address reviewers’ comments.

  *   Demo descriptions should be no longer than 4 pages including title, text, examples, figures, tables and references. A separate one-page document should be provided to the program co-chairs for demo descriptions, specifying furniture and equipment needed for the demo.

 

Authors are encouraged to also submit additional accompanying materials, such as corpora (or corpus examples), demo code, videos and sound files.

 

Multiple Submissions

 

SIGDIAL 2023 cannot accept for publication or presentation work that has been (or will be) published elsewhere, or that has been or will be submitted to other meetings or publications whose review periods overlap with that of SIGDIAL. Overlap with SIGDIAL workshop submissions is permitted for non-archived workshop proceedings. Any questions regarding submissions can be sent to program-chairs [at] sigdial.org.

 

Blind Review

 

Building on previous years’ move to anonymous long and short paper submissions, SIGDIAL 2023 will follow the ACL policies for preserving the integrity of double-blind review (see the author guidelines). Unlike long and short papers, demo descriptions will not be anonymous. Demo descriptions should include the authors’ names and affiliations, and self-references are allowed.

 

Submission Format

 

All long, short, and demonstration submissions must follow the two-column ACL format, which is available as an Overleaf template and is also downloadable directly (LaTeX and Word).

 

Submissions must conform to the official ACL style guidelines, which are contained in these templates. Submissions must be electronic, in PDF format.

 

Submission Deadline

 

SIGDIAL will accept regular submissions through the Softconf/START system, as well as commitment of already reviewed papers through the ACL Rolling Review (ARR) system.

 

Regular submission

 

Authors have to fill in the submission form in the Softconf/START system and upload an initial pdf of their papers before May 15, 2023 (23:59 GMT-11).  Details and the submission link will be posted on the conference website.

 

Submission via ACL Rolling Review (ARR)

 

Please refer to the ARR Call for Papers for detailed information about submission guidelines to ARR. The commitment deadline for authors to submit their reviewed papers, reviews, and meta-review to SIGDIAL 2023 is June 19, 2023. Note that the paper needs to be fully reviewed by ARR in order to make a commitment, thus the latest date for ARR submission will be April 15, 2023.

 

Mentoring

 

Acceptable submissions that require language (English) or organizational assistance will be flagged for mentoring, and accepted with a recommendation to revise with the help of a mentor. An experienced mentor who has previously published in the SIGDIAL venue will then help the authors of these flagged papers prepare their submissions for publication.

 

Best Paper Awards

 

In order to recognize significant advancements in dialogue/discourse science and technology, SIGDIAL 2023 will include best paper awards. All papers at the conference are eligible for the best paper awards. A selection committee consisting of prominent researchers in the fields of interest will select the recipients of the awards.




SIGDIAL 2023 Program Committee

Svetlana Stoyanchev and Shafiq Rayhan Joty

Conference Website: https://2023.sigdial.org/


3-3-32(2023-09-11) Call for Workshops - Affective Computing and Intelligent Interaction (ACII) 2023, MIT MediaLab, Cambridge, MA, USA

 

The organizing committee of Affective Computing and Intelligent Interaction (ACII) 2023 is now inviting proposals for workshops and challenges. The biennial conference is the flagship conference for research in Affective Computing, covering topics related to the study of intelligent systems that read, express, or otherwise use emotion.

Workshops at ACII give a group of scientists an opportunity to get together to network and discuss a specific topic in detail. Examples of past workshops include: Affective Computing and Intelligent Interaction, Applied Multimodal Affect Recognition, Functions of Emotions for Socially Interactive Agents, Emotions in Games, Affective Brain-Computer Interfaces, Affective Touch, Group Emotions, and Affective Computing for Affective Disorders. We want to encourage workshop proposals that draw together interdisciplinary perspectives on topics in affective computing. We also welcome Challenge-type workshops, where workshop participants work on a shared task. This year, given our location in Boston and proximity to leading medical institutions, we particularly invite workshops that touch on health and wellness, spanning from theoretical topics on affect in mental health to fielded medical applications of affective computing.

Workshops should focus on a central question or topic. Workshop organizers will be responsible for soliciting and reviewing papers, and putting together an exciting schedule, including time for networking and discussion. Workshop organizers are also expected to present a short summary of the workshop during the main conference.

Example workshops from ACII2022 are available at: https://acii-conf.net/2022/workshops/ 
The workshop proposals website: https://acii-conf.net/2023/calls/workshops/ 
ACII 2023 website: https://acii-conf.net/2023/  

What’s next?
Send your workshop proposal to both workshop chairs. Please include the following (max three pages):
  1. Title.
  2. Organizers and affiliations, and the workshop contact person.
  3. Extended abstract making the scientific case for the workshop (why, why now, why at ACII, expected outcomes, impact).
  4. Advertisement plan (e.g. mailing lists, conferences, and where the website will be hosted).
  5. List of tentative and confirmed PC members (mention this status per PC member).
  6. Expected number of submissions, planned acceptance rate, paper length, and review process.
  7. Tentative/confirmed keynote speaker(s).
  8. Length of the workshop (full-day or half-day).
  9. List of related and previous workshops/conferences.
  10. Your publication plan (e.g., Special Issue, and whether contact has already been made with the publisher).
Process
Proposals will be reviewed in a confidential manner and acceptance will be decided by the ACII 2023 Workshop Chairs and ACII 2023 Senior Program Committee. Decisions about acceptance are final.

Important dates
February 17, 2023: Workshop proposal submission deadline.
Refer to https://acii-conf.net/2023/important-dates/ for other dates.

Workshop Chairs
Timothy Bickmore, Northeastern University, t.bickmore@northeastern.edu
Nutchanon Yongsatianchot, Northeastern University, n.yongsatianchot@northeastern.edu

3-3-33(2023-09-11) Cf Workshops and Tutorials/24th Annual Meeting of SIGDIAL/INLG, Prague, Czech Republic

The 24th Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDial 2023) and the 16th International Natural Language Generation Conference (INLG 2023) will be held jointly in Prague on September 11-15, 2023. We now welcome the submission of workshop and tutorial proposals, which will take place on September 11 and 12 before the main conference. 


We encourage submissions of proposals on any topic of interest to the discourse, dialogue, and natural language generation communities. This program is intended to offer new perspectives and bring together researchers working on related topics. We especially encourage sessions that would bring together researchers from the SIGDial and INLG communities.


Topics of interest include all aspects related to Dialogue, Discourse and Generation including (but not limited to) annotation and resources, evaluation, large language models, adversarial and RL methods, explainable/ethical AI, summarization, interactive/multimodal/situated/incremental systems, data/knowledge/vision-to-text, and applications of dialogue and NLG.


The proposed workshops/tutorials may include a poster session, a panel session, an oral presentation session, a hackathon, a generation/dialogue challenge, or a combination of the above. Workshop organizers will be responsible for soliciting, reviewing, and selecting papers or abstracts. The workshop papers will be published in a separate proceedings. Workshops may, at the discretion of the SIGDial/INLG organizers, be held as parallel sessions. 


Submissions


Workshop and tutorial proposals should be 2-4 pages containing: title; type (workshop or tutorial); a summary of the topic, motivating theoretical interest and/or application context; a list of organizers and sponsors; duration (half-day or full-day); and the requested session format(s): poster, panel, oral, and/or hackathon session. Please include the number of expected attendees. The workshop proposals will be reviewed jointly by the general chair and program co-chairs.


Links


Those wishing to propose a workshop or tutorial may want to look at some of the sessions organized at recent SIGDial meetings:


Natural Language in Human Robot Interaction (NLiHRI 2022) 

NLG4health 2022

SummDial 2021

SafeConvAI 2021 

RoboDIAL 2022

BigScience Workshop: LLMs 2021

Interactive Natural Language Technology for Explainable Artificial Intelligence 2019 

https://www.inlg2019.com/workshop

https://www.sigdial.org/files/workshops/conference18/sessions.htm

https://inlg2018.uvt.nl/workshops/



Important Dates


Mar 24, 2023: Workshop/Tutorial Proposal Submission Deadline

April 14, 2023: Workshops/Tutorials Notifications


The proposals should be sent to conference@sigdial.org.

3-3-34(2023-09-19) CfP ACM IVA 2023 @ Würzburg, Germany.

CALL FOR PAPERS  --  ACM IVA 2023

 

The annual ACM Conference on Intelligent Virtual Agents (IVA) is the premier international event for interdisciplinary research on the development, application, and evaluation of Intelligent Virtual Agents with a focus on the ability for social interaction, communication or cooperation. Such artificial agents can be embodied graphically (e.g. virtual characters, embodied conversational agents) or physically (e.g. social or collaborative robots). They are capable of real-time perception, cognition, emotion and action allowing them to participate in dynamic social environments. This includes human-like interaction qualities such as multimodal communication using facial expressions, speech, and gesture, conversational interaction, socially assistive and affective interaction, interactive task-oriented cooperation, or social behaviour simulation. IVAs are highly relevant and widely applied in many important domains including health, tutoring, training, games, or assisted living.

 

We invite submissions of research on a broad range of topics, including but not limited to: theoretical foundations of intelligent virtual agents, agent and interactive behaviour modelling, evaluation, agents in simulations, games, and other applications. Please see the detailed list of topics below.

 

VENUE

======

IVA 2023 will take place in Würzburg, Germany. Würzburg is a vibrant town located by the river Main in northern Bavaria, between Frankfurt and Nuremberg. The mix of stunning historical architecture and the young population is what makes the atmosphere so unique, including 35,000 students from three different universities. The mild and sunny climate is ideal to enjoy the many activities Würzburg has to offer: visiting a beer garden next to the river, attending a sporting or cultural event or taking a stroll through one of the parks.

 

IVA is targeted to be an in-person conference. In case of extraordinary circumstances, such as visa problems or health issues, video presentation will be possible. However, there is no digital or hybrid conference system planned, thus it is not possible to attend this year’s IVA conference remotely.

 

IMPORTANT DATES

===================

Abstract submission: April 14, 2023

Paper submission: April 18, 2023

Review notification / start of rebuttal: May 31, 2023

Notification of acceptance: June 23, 2023

Camera ready deadline: July 18, 2023

Conference: September 19-22, 2023

 

All deadlines are anywhere on earth (UTC−12).

 

SPECIAL TOPIC

===================

This year’s conference will highlight a special topic on “IVAs in future mixed realities”, e.g., in social VR and potential incarnations of a Metaverse. Immersive and potentially distributed artificial virtual worlds provide new forms of full-size embodied human-human interaction via avatars of arbitrary looks, enabling interesting intra- and interpersonal effects. They also enable hybrid avatar-agent interactions between humans and A.I.s, unlocking the full potential of non-verbal behavior in digital face-to-face encounters, significantly enhancing the design space for IVAs to assist, guide, help but also to persuade and affect interacting users. We specifically welcome all kinds of novel research on technological, psychological, and sociological determinants of such immersive digital avatar-agent encounters.

 

TYPES OF SUBMISSION

===================

- Full Papers (7 pages + 1 additional page for references):

Full papers should present significant, novel, and substantial work of high quality.

 

- Extended Abstracts (2 pages + 1 additional page for references)

Extended abstracts may contain early results and work in progress.

 

- Demos (2 pages + 1 additional page for references + 1 page with demo requirements)

Demo submissions focus on implemented systems and should contain a link to a video of the system with a maximum length of 5 minutes.

 

All submissions will be double-blind peer-reviewed by a group of external expert reviewers. All accepted submissions will be published in the ACM proceedings.

 

Accepted full papers will be presented either in oral sessions or as posters during the conference (depending on the nature of the contribution), extended abstracts will be presented as posters, and demos will be showcased in dedicated sessions during the conference. For each accepted contribution, at least one of the authors must register for the conference.

 

IVA 2023 will also feature workshops and a doctoral consortium. Please visit the website (https://iva.acm.org/2023) for more details and updates.

 

TRACKS

===================

For full paper submissions, IVA will have different paper tracks with different review criteria. Authors need to indicate which one of the following tracks they want to submit their paper to:

 

1. Empirical Studies

  • criteria: methodology, theoretical foundation, originality of result etc.

2. Computational Models and Methods

  • criteria: technical soundness, novelty of the model or approach, proof of concept, etc.

3. Operational Systems and Applications

  • criteria: innovation of the application, societal relevance, evaluation of effects, etc.

 

 

SCOPE AND LIST OF TOPICS

========================

IVA invites submissions on a broad range of topics, including but not limited to:

 

AGENT DESIGN AND MODELING:

- Cognition (e.g. task, social, other)

- Emotion, personality and cultural differences

- Socially communicative behaviour (e.g., of emotions, personality, relationship)

- Conversational and dialog behavior

- Social perception and understanding of other’s states or traits

- Machine learning approaches to agent modeling

- Adaptive behavior and interaction dynamics

- Models informed by theoretical and empirical research from psychology

 

MULTIMODAL INTERACTION:

- Verbal and nonverbal behavior coordination (synthesis)

- Multimodal/social behavior processing

- Face-to-face communication skills

- Interaction qualities (engagement, rapport, etc.)

- Managing co-presence and interpersonal relation

- Multi-party interaction

- Data-driven modeling

 

SOCIALLY INTERACTIVE AGENT ARCHITECTURES:

- Design criteria and design methodologies

- Engineering of real-time human-agent interaction

- Standards / measures to support interoperability

- Portability and reuse

- Specialized tools, toolkits, and toolchains

 

EVALUATION METHODS AND EMPIRICAL STUDIES:

- Evaluation methodologies and user studies

- Metrics and measures

- Ethical considerations and societal impact

- Applicable lessons across fields (e.g. between robotics and virtual agents)

- Social agents as a means to study and model human behavior

 

APPLICATIONS:

- Applications in education, skills training, health, counseling, games, art, etc.

- Virtual agents in games and simulations

- Social agents as tools in psychology, neuroscience, social simulation, etc.

- Migration between platforms

 

INSTRUCTIONS FOR AUTHORS

=========================

Paper submissions should be anonymous and prepared in the “ACM Standard” format, more specifically the “SigConf” format. Please consult https://www.acm.org/publications/proceedings-template for the LaTeX template, the Word interim template, or the connection to the Overleaf platform.

All papers need to be submitted in PDF format.

 

By submitting your article to an ACM Publication, you are hereby acknowledging that you and your co-authors are subject to all ACM Publications Policies, including Publications Policy on Research Involving Human Participants and Subjects. Alleged violations of this policy or any ACM Publications Policy will be investigated by ACM and may result in a full retraction of your paper, in addition to other potential penalties, as per ACM Publications Policy.

 

Please ensure that you and your co-authors obtain an ORCID ID, so you can complete the publishing process for your accepted paper. ACM has been involved in ORCID from the start and we have recently made a commitment to collect ORCID IDs from all of our published authors. The collection process has started and will roll out as a requirement throughout 2022. We are committed to improving author discoverability, ensuring proper attribution and contributing to ongoing community efforts around name normalization; your ORCID ID will help in these efforts.

 

The official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks prior to the first day of the conference. The official publication date affects the deadline for any patent filings related to published work.

 

This event is sponsored by SIGAI.

 

Please visit the conference website for detailed information on how to submit your paper.

 

CONFERENCE WEBSITE

===================

https://iva.acm.org/2023/

 


3-3-35(2023-09-20) CBMI 2023, Orléans, France

Call for SS Proposals at CBMI’2023
==================================

CBMI’2023 (http://cbmi2023.org/) is calling for high-quality Special Session (SS) proposals addressing innovative research in content-based multimedia indexing and its related broad fields. The main scope of the conference is the analysis and understanding of multimedia contents, including

  • Multimedia information retrieval (image, audio, video, text)
  • Mobile media retrieval
  • Event-based media retrieval
  • Affective/emotional interaction or interfaces for multimedia retrieval
  • Multimedia data mining and analytics
  • Multimedia retrieval for multimodal analytics and visualization
  • Multimedia recommendation
  • Multimedia verification (e.g., multimodal fact-checking, deep fake analysis)
  • Large-scale multimedia database management
  • Summarization, browsing, and organization of multimedia content
  • Evaluation and benchmarking of multimedia retrieval systems
  • Explanations of decisions of AI-in Multimedia
  • Application domains : health, sustainable cities, ecology, culture… 

and all this in the era of Artificial Intelligence for analysis and indexing of multimedia and multimodal information.

A special oral session will contain oral presentations of long research papers; short papers will be presented as posters during the poster sessions, with special mention of the SS.

 

-        Long research papers should present complete work with evaluations on topics related to the Conference.

-        Short research papers should present preliminary results or more focused contributions.

 

An SS proposal has to contain:

-        Name, title, affiliation and a short bio of the SS chairs;

-        The rationale;

-        A list of at least 5 potential contributions with a provisional title, authors and affiliations.

 

 

The deadline for SS proposals is approaching: 23 January.

 

Please submit your proposals to the SS chairs:

jenny.benois-pineau@u-bordeaux.fr

mourad.oussalah@oulu.fi

adel.hafiane@insa-cvl.fr


3-3-36(2023-10-09) 25th ACM International Conference on Multimodal Interaction (ICMI 2023), Paris, France

25th ACM International Conference on Multimodal Interaction (ICMI 2023)

9-13 October 2023, Paris, France

 

The 25th International Conference on Multimodal Interaction (ICMI 2023) will be held in Paris, France. ICMI is the premier international forum that brings together multimodal artificial intelligence (AI) and social interaction research. Multimodal AI encompasses technical challenges in machine learning and computational modeling such as representations, fusion, data and systems. The study of social interactions encompasses both human-human interactions and human-computer interactions. A unique aspect of ICMI is its multidisciplinary nature which values both scientific discoveries and technical modeling achievements, with an eye towards impactful applications for the good of people and society.

 

ICMI 2023 will feature a single-track main conference which includes: keynote speakers, technical full and short papers (including oral and poster presentations), demonstrations, exhibits, doctoral consortium, and late-breaking papers. The conference will also feature tutorials, workshops and grand challenges. The proceedings of all ICMI 2023 papers, including Long and Short Papers, will be published by ACM as part of their series of International Conference Proceedings and Digital Library, and the adjunct proceedings will feature the workshop papers.

 

Novelty will be assessed along two dimensions: scientific novelty and technical novelty. Accepted papers at ICMI 2023 will need to be novel along one of the two dimensions:

  • Scientific Novelty: Papers should bring new scientific knowledge about human social interactions, including human-computer interactions. For example, discovering new behavioral markers that are predictive of mental health or how new behavioral patterns relate to children’s interactions during learning. It is the responsibility of the authors to perform a proper literature review and clearly discuss the novelty in the scientific discoveries made in their paper.
  • Technical Novelty: Papers should propose novelty in their computational approach for recognizing, generating or modeling multimodal data. Examples include: novelty in the learning and prediction algorithms, in the neural architecture, or in the data representation. Novelty can also be associated with new usages of an existing approach.

 

Please see the Submission Guidelines for Authors https://icmi.acm.org/ for detailed submission instructions. Commitment to ethical conduct is required and submissions must adhere to ethical standards in particular when human-derived data are employed. Authors are encouraged to read the ACM Code of Ethics and Professional Conduct (https://ethics.acm.org/).

 

ICMI 2023 conference theme: The theme for this year’s conference is “Science of Multimodal Interactions”. As the community grows, it is important to understand the main scientific pillars involved in deep understanding of multimodal social interactions. As a first step, we want to acknowledge key discoveries and contributions that the ICMI community enabled over the past 20+ years. As a second step, we reflect on the core principles, foundational methodologies and scientific knowledge involved in studying and modeling multimodal interactions. This will help establish a distinctive research identity for the ICMI community while at the same time embracing its multidisciplinary collaborative nature. This research identity and long-term agenda will enable the community to develop future technologies and applications while maintaining commitment to world-class scientific research.

Additional topics of interest include but are not limited to:

  • Affective computing and interaction
  • Cognitive modeling and multimodal interaction
  • Gesture, touch and haptics
  • Healthcare, assistive technologies
  • Human communication dynamics
  • Human-robot/agent multimodal interaction
  • Human-centered A.I. and ethics
  • Interaction with smart environment
  • Machine learning for multimodal interaction
  • Mobile multimodal systems
  • Multimodal behaviour generation
  • Multimodal datasets and validation
  • Multimodal dialogue modeling
  • Multimodal fusion and representation
  • Multimodal interactive applications
  • Novel multimodal datasets
  • Speech behaviours in social interaction
  • System components and multimodal platforms
  • Visual behaviours in social interaction
  • Virtual/augmented reality and multimodal interaction

 

Important Dates

Paper Submission: May 1, 2023 

Rebuttal period: June 26-29, 2023

Paper notification: July 21, 2023

Camera-ready paper: August 14, 2023

Presenting at main conference: October 9-13, 2023

 


3-3-37(2023-10-09) ACM ICMI 2023, Paris, France
=================================
ACM ICMI 2023 2ND CALL FOR PAPERS
=================================
9-13 October 2023, Paris - France
https://icmi.acm.org/2023/
=================================
 
The 25th International Conference on Multimodal Interaction (ICMI 2023) will be held in Paris, France. ICMI is the premier international forum for multidisciplinary research on multimodal human-human and human-computer interaction, interfaces, and system development. The conference focuses on theoretical and empirical foundations, component technologies, and combined multimodal processing techniques that define the field of multimodal interaction analysis, interface design, and system development.
 
This year, the conference is pleased to welcome 3 keynote speakers:
 
* Maja Mataric (Interaction Lab, University of Southern California)
* Sophie Scott (Institute of Cognitive Neuroscience, University College London)
* Simone Natale (University of Turin)
 
We are keen to showcase novel input and output modalities and interactions to the ICMI community. ICMI 2023 will feature a single-track main conference which includes: keynote speakers, technical full and short papers (including oral and poster presentations), demonstrations, exhibits, doctoral spotlight papers, and late-breaking papers. The conference will also feature workshops and grand challenges. The proceedings of all ICMI 2023 papers, including Long and Short Papers, will be published by ACM as part of their series of International Conference Proceedings and Digital Library, and the adjunct proceedings will feature the workshop papers.
 
Novelty will be assessed along two dimensions: scientific novelty and technical novelty. Accepted papers at ICMI 2023 will need to be novel along one of the two dimensions. In other words, a paper which is strong on scientific knowledge contribution but low on algorithmic novelty should be ranked similarly to a paper that is high on algorithmic novelty but low on knowledge discovery.
 
* Scientific Novelty: Papers should bring some new knowledge to the scientific community. For example, discovering new behavioral markers that are predictive of mental health or how new behavioral patterns relate to children’s interactions during learning. It is the responsibility of the authors to perform a proper literature review and clearly discuss the novelty in the scientific discoveries made in their paper.
 
* Technical Novelty: Papers reviewed with this sub-criterion should include novelty in their computational approach for recognizing, generating or modeling data. Examples include: novelty in the learning and prediction algorithms, in the neural architecture, or in the data representation. Novelty can also be associated with new usages of an existing approach.
 
Please see the Submission Guidelines for Authors https://icmi.acm.org/2023/guidelines-for-authors/ for detailed submission instructions.
 
This year’s conference theme: The theme for this year’s conference is “Science of Multimodal Interactions”. As the community grows, it is important to understand the main scientific pillars involved in deep understanding of multimodal and social interactions. As a first step, we want to acknowledge key discoveries and contributions that the ICMI community enabled over the past 20+ years. As a second step, we reflect on the core principles, foundational methodologies and scientific knowledge involved in studying and modeling multimodal interactions. This will help establish a distinctive research identity for the ICMI community while at the same time embracing its multidisciplinary collaborative nature. This research identity and long-term agenda will enable the community to develop future technologies and applications while maintaining commitment to world-class scientific research.
 
Additional topics of interest include but are not limited to:
 
* Affective computing and interaction
* Cognitive modeling and multimodal interaction
* Gesture, touch and haptics
* Healthcare, assistive technologies
* Human communication dynamics
* Human-robot/agent multimodal interaction
* Human-centered A.I. and ethics
* Interaction with smart environment
* Machine learning for multimodal interaction
* Mobile multimodal systems
* Multimodal behaviour generation
* Multimodal datasets and validation
* Multimodal dialogue modeling
* Multimodal fusion and representation
* Multimodal interactive applications
* Novel multimodal datasets
* Speech behaviours in social interaction
* System components and multimodal platforms
* Visual behaviours in social interaction
* Virtual/augmented reality and multimodal interaction
 
Submissions:
------------
Please note that the instructions were updated as of March 17th, 2023. All submissions must be in PDF format for review. The submission format is the double-column ACM conference format (see https://icmi.acm.org/2023/guidelines-for-authors/). The maximum length depends on the submission category:
* Long paper: The maximum length is 8 pages in LaTeX or Word (excluding references).
* Short paper: The maximum length is 4 pages in LaTeX or Word (excluding references).
 
Submissions can be made online at: https://new.precisionconference.com/submissions/icmi23a
 
ACM Publication Policies
------------------------
By submitting your article to an ACM Publication, you are hereby acknowledging that you and your co-authors are subject to all ACM Publications Policies, including ACM's new Publications Policy on Research Involving Human Participants and Subjects. Alleged violations of this policy or any ACM Publications Policy will be investigated by ACM and may result in a full retraction of your paper, in addition to other potential penalties, as per ACM Publications Policy.
 
Please ensure that you and your co-authors obtain an ORCID ID, so you can complete the publishing process for your accepted paper. ACM has been involved in ORCID from the start and has recently made a commitment to collect ORCID IDs from all of its published authors. The collection process has started and will roll out as a requirement throughout 2022. We are committed to improving author discoverability, ensuring proper attribution and contributing to ongoing community efforts around name normalization; your ORCID ID will help in these efforts.
 
Important Dates
---------------
Paper Submission May 1, 2023 
Rebuttal period June 26-29, 2023
Paper notification July 21, 2023
Camera-ready paper August 14, 2023
Presenting at main conference October 9-13, 2023

3-3-38(2023-10-09) Cf Tutorials ICMI 2023, Paris, France
=====================================
ICMI 2023 Call for tutorial proposals
https://icmi.acm.org/2023/call-for-tutorials/
25th ACM International Conference on Multimodal Interaction
9-13 October 2023, Paris, France
=====================================
 
ACM ICMI 2023 seeks half-day (3-4 hours) tutorial proposals addressing current and emerging topics within the scope of 'Science of Multimodal Interactions'. Tutorials are intended to provide a high-quality learning experience to participants with a varied range of backgrounds. It is expected that tutorials are self-contained.
 
Prospective organizers should submit a 4-page (maximum) proposal containing the following information:
 
1. Title
2. Abstract appropriate for possible Web promotion of the Tutorial
3. A short list of the distinctive topics to be addressed
4. Learning objectives (specific and measurable objectives)
5. The targeted audience (students / early-stage / advanced researchers, prerequisite knowledge, field of study)
6. Detailed description of the Tutorial and its relevance to multimodal interaction
7. Outline of the tutorial content with a tentative schedule and its duration
8. Description of the presentation format (number of presenters, interactive sessions, practicals)
9. Accompanying material (repository, references) and equipment, emphasizing any required material from the organization committee (subject to approval)
10. Short biography of the organizers (preferably from multiple institutions) together with their contact information and a list of 1-2 key publications related to the tutorial topic
11. Previous editions: If the tutorial was given before, describe when and where it was given, and if it will be modified for ACM ICMI 2023.
 
Proposals will be evaluated using the following criteria:
 
- Importance of the topic and the relevance to ACM ICMI 2023 and its main theme: 'Science of Multimodal Interactions'
- Presenters' experience
- Adequacy of the presentation format for the topic
- Targeted audience interest and impact
- Accessibility and quality of accompanying materials (open access)
 
Proposals that focus exclusively on the presenters' own work or commercial presentations are not acceptable.
 
Unless explicitly mentioned and agreed with the Tutorial chairs, the tutorial organizers will take care of any specific requirements related to the tutorial, such as handouts, mass storage, rights of distribution (material, handouts, etc.), and copyrights.
 
Important Dates and Contact Details
-----------------------------------
 
Tutorial Proposal Deadline: May 15, 2023
Tutorial Acceptance Notification: May 29, 2023
Camera-ready version of the tutorial abstract: June 26, 2023
Tutorial date: TBD (either October 9 or October 13)
 
Proposals should be emailed to the ICMI 2023 Tutorial Chairs, Prof. Hatice Gunes and Dr. Guillaume Chanel: icmi2023-tutorial-chairs@acm.org
 
Prospective organizers are also encouraged to contact the co-chairs if they have any questions.

3-3-39(2023-10-09) CfParticipation GENEA Challenge 2023 on speech-driven gesture generation, Paris, France

Call for participation: GENEA Challenge 2023 on speech-driven gesture generation
Starting date: May 1

Location: Official Grand Challenge of ICMI 2023, Paris, France

Website: https://genea-workshop.github.io/2023/challenge/
*********************************************************************

Overview
*********************

The state of the art in co-speech gesture generation is difficult to assess, since every research group tends to use their own data, embodiment, and evaluation methodology. To better understand and compare methods for gesture generation and evaluation, we are continuing the GENEA (Generation and Evaluation of Non-verbal Behaviour for Embodied Agents) Challenge, wherein different gesture-generation approaches are evaluated side by side in a large user study. This 2023 challenge is a Grand Challenge for ICMI 2023 and is a follow-up to the first and second editions of the GENEA Challenge, arranged in 2020 and 2022.

 

This year the challenge will focus on gesture synthesis in a dyadic setting, i.e., gestures that depend not only on speech, but also on the behaviour of an interlocutor in a conversation. We invite researchers in academia and industry working on any form of corpus-based non-verbal behaviour generation and gesticulation to submit entries to the challenge, whether their method is rule-based or driven by machine learning. Participants are provided with a large, common dataset of speech (audio + aligned text transcriptions) and 3D motion to develop their systems, and then use these systems to generate motion on given test inputs. The generated motion clips are rendered onto a common virtual agent and evaluated for aspects such as motion quality and appropriateness in a large-scale crowdsourced user study.

 

Data

*********************

The 2023 challenge is based on the Talking With Hands 16.2M dataset (https://github.com/facebookresearch/TalkingWithHands32M). The official challenge dataset also includes additional annotations, and is only available to registered participants.

 

Timeline

*********************

April 1  – Participant registration opens

May 1 – Challenge training dataset released to participants

June 7 – Test input released to participants

June 14 – Deadline for participants to submit generated motion

July 3 – Release of crowdsourced evaluation results to participants

July 14 – Paper submission deadline

August 4 – Author notification

August 11 – Camera-ready papers due

October 9 or 13 – Challenge presentations at ICMI

 

If you would like to receive a notification when challenge registration opens, please follow this link: https://forms.gle/MFEXv84xGL3NrY3d9/.

 

Challenge paper

*********************

Challenge participants are required to submit a paper that describes their system and findings, and will present their work at the Grand Challenge session at ICMI. All accepted papers will be part of the ACM ICMI 2023 main proceedings. Papers that are not accepted will have a chance to be considered for the GENEA Workshop 2023, whose papers are published in the ACM ICMI 2023 companion proceedings.


3-3-40(2023-10-09) ICMI'23 CALL FOR MULTIMODAL GRAND CHALLENGES, Paris, France
ICMI'23 CALL FOR MULTIMODAL GRAND CHALLENGES
============================================
9-13 October 2023, Paris - France
============================================
 
Teams are encouraged to submit proposals for one or more ICMI Multimodal Grand Challenges. The International Conference on Multimodal Interaction (ICMI) is the world's leading venue for multidisciplinary research on multimodal human-human and human-computer interaction, interfaces, and system development. Identifying the best algorithms and their failure modes is necessary for developing systems that can reliably interpret human-human communication or respond to human input. The availability of datasets and common goals has led to significant development in domains such as computer vision, speech recognition, computational (para-)linguistics, and physiological signal processing. We invite the ICMI community to propose, define, and address the scientific Grand Challenges in our field during the next five years. The goal of the ICMI Multimodal Grand Challenges is to elicit fresh ideas from the ICMI community and to generate momentum for future collaborative efforts. Challenge tasks involving analysis, synthesis, and interaction are all feasible.
 
We invite organizers from various fields related to multimodal interaction to propose and run Grand Challenge events at ICMI 2023. We are looking for exciting and stimulating challenges including but not limited to the following categories:
 
* Dataset-driven challenge. 
This challenge will provide a dataset that is exemplary of the complexities of current and future multimodal problems, and one or more multimodal tasks whose performance can be objectively measured and compared in rigorous conditions. Participants in the Challenge will evaluate their methods against the challenge data in order to identify areas of strengths and weaknesses.
 
* System-driven challenge.
This challenge will provide an interactive problem system (e.g. dialog-based or non-verbal-based) and the associated resources, which can allow people to participate through the integration of specific modules or alternative full systems. Proposers should also establish systematic evaluation procedures.
 
Prospective organizers should submit a five-page maximum proposal containing the following information:
1.    Title
2.    Abstract appropriate for possible Web promotion of the Challenge
3.    Distinctive topics to be addressed and specific goals
4.    Detailed description of the Challenge and its relevance to multimodal interaction
5.    Length (full day or half day)
6.    Plan for soliciting participation and list of potential participants
7.    Description of how submissions to the challenge will be evaluated, and a list of proposed reviewers
8.    Proposed schedule for releasing datasets (if applicable) and/or systems (if applicable) and receiving submissions.
9.    Short biography of the organizers (preferably from multiple institutions)
10. Funding source (if any) that supports or could support the challenge organization
11. Draft call for papers: affiliations and email address of the organizers; summary of the Grand Challenge; list of potential Technical Program Committee members and their affiliations, important dates
 
Proposals will be evaluated based on originality, ambition, feasibility, and implementation plan. A Challenge whose dataset(s) or system(s) have pilot results demonstrating their representativeness and suitability for the proposed task will be given preference for acceptance; in such cases, an additional one-page description must be attached. Continuations of or variants on previous ICMI grand challenges are welcome, though we ask that such submissions report the number of participants who attended the previous year and describe what changes (if any) will be made from the previous edition.
 
The ICMI conference organizers will offer support with basic logistics, which includes rooms and equipment to run the challenge workshop, coffee breaks synchronized with the main track, etc.
 
Important Dates and Contact Details
===================================
 
Proposals due: February 3, 2023
Proposal notification: February 10, 2023
Paper camera-ready: August 13, 2023
Grand challenge date: October 9 or 13, 2023
 
Proposals should be emailed to the ICMI 2023 Multimodal Grand Challenge Chairs, Sean Andrist and Fabien Ringeval:  icmi2023-grand-challenge-chairs@acm.org
 
Prospective organizers are also encouraged to contact the co-chairs if they have any questions.

3-3-41(2023-10-29) Cf participation : the 2nd Conversational Head Generation Challenge @ ACM Multimedia 2023

Call for Participation: the 2nd Conversational Head Generation Challenge @ ACM Multimedia 2023

We are pleased to invite multimedia researchers to participate in the 2nd 'Conversational Head Generation Challenge,' co-located with ACM Multimedia 2023.

About the Challenge:
Conversational head generation covers the generation of both talking and listening behaviour in an interactive face-to-face conversation. Generating vivid talking-head video and appropriate responsive listening behavior are both essential for digital humans in face-to-face human-computer interaction. More details can be found at: https://vico.solutions/challenge/2023

This distinctive challenge is based on the newly extended ViCo dataset (https://vico.solutions/vico), composed of conversation videos between real humans. Our aim is to bring face-to-face interactive head-video generation into a visual competition through this challenge. This year, two tracks will be hosted:
- Talking head video generation (audio-driven speaker video generation) conditioned on the identity and audio signals of the speaker.
- Responsive Listening Head Video Generation (video-driven listener video generation) conditioned on the identity of the listener and with real-time responses to the speaker's behaviors.

As a starting point for participants, we also provide an open-source baseline method (https://github.com/dc3ea9f/vico_challenge_baseline) that includes audio/video-driven head generation, rendering, and scripts for 13 evaluation metrics.

Important Dates:
- Dataset available for download (training set): March 27th.
- Challenge launch date: April 3rd.
- Paper submissions deadline: July 14th.
- Top submissions will have the opportunity to present their work at the workshop during ACM Multimedia 2023. We encourage all participating teams to submit a paper (up to 4 pages + up to 2 extra pages for references only) briefly describing their solution.

Find out more about the challenge:
- Challenge mainpage (including challenge registration, online evaluation results): https://vico.solutions/challenge/2023
- Challenge page at ACM MM 2023: https://www.acmmm2023.org/grand-challenges-2/

We believe this challenge would greatly benefit from your knowledge and expertise. Please don't hesitate to reach out if you have any questions or require further information.

Contact: Yalong Bai, Mohan Zhou, Wei Zhang
vico-challenge@outlook.com

The organizing team
March 2023


3-3-42(2023-11-29) SPECOM 2023, Hubli-Dharwad, India



Announcing the SPECOM 2023 Call for Papers! 

 

The Call for Papers for SPECOM 2023 is now open! The 25th International Conference on Speech and Computer (SPECOM) will be held from 29 November to 1 December 2023 in Hubli-Dharwad, India.

This flagship conference will offer a comprehensive technical program presenting the latest developments in research and technology for speech processing and its applications. Featuring world-class oral and poster sessions, plenary talks, exhibitions, demonstrations, tutorials, and satellite workshops, it is expected to attract leading researchers and global industry figures, providing a great networking opportunity. Moreover, exceptional papers and contributors will be selected and recognized by SPECOM.

Website Link: https://iitdh.ac.in/specom-2023/


Special attractions commemorating the Silver Jubilee of SPECOM:

  • Students Special Session

  • Special Session on Speech Processing for Under-Resourced Languages

  • Special Session on Industrial Speech and Language Technology

  • Satellite Workshop on “Speaker and Language Identification, Verification and Diarization” @ Goa

Technical Scope:


We invite submissions of original unpublished technical papers on topics including but not limited to:


  • Affective computing

  • Audio-visual speech processing

  • Corpus linguistics

  • Computational paralinguistics

  • Deep learning for audio processing

  • Forensic speech investigations

  • Human-machine interaction

  • Language identification

  • Multichannel signal processing

  • Multimedia processing

  • Multimodal analysis and synthesis

  • Sign language processing

  • Speaker recognition

  • Speech and language resources

  • Speech analytics and audio mining

  • Speech and voice disorders

  • Speech-based applications

  • Speech driving systems in robotics

  • Speech enhancement

  • Speech perception

  • Speech recognition and understanding

  • Speech synthesis

  • Speech translation systems

  • Spoken dialogue systems

  • Spoken language processing

  • Text mining and sentiment analysis

  • Virtual and augmented reality

  • Voice assistants



Organizers:


  • General chairs:

    • Prof. Yegnanarayana Bayya (IIIT Hyderabad)

    • Prof. Shyam S Agrawal (KIIT Gurugram)

  • Technical Program Committee Chairs:

    • Prof. Rajesh M. Hegde (IIT Dharwad)

    • Prof. Alexey Karpov (SPC RAS St. Petersburg)

    • Prof. K. Samudravijaya (KL University)

    • Dr. Deepak K. T. (IIIT Dharwad)

  • Organizing Committee:

    • Prof. S R M Prasanna (IIT Dharwad)

    • Prof. Suryakanth V. Gangashetty (KL University)

Important dates:

  • Paper Submission Starts: 15 May 2023

  • Paper Submission Deadline: 31 July 2023

  • Paper Acceptance Notification: 8 September 2023 

  • Camera Ready Paper Deadline: 24 September 2023

  • Early Bird Registration Deadline: 24 September 2023 

  • Author Registration Deadline: 20 March 2023 

  • Conference date: 29 November - 1 December 2023

  • Satellite workshop: 2 December 2023


3-3-43(2023-12-16) The 2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU 2023), Taipeh, Taiwan

The 2023 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU 2023) will be held on December 16-20, 2023, in Taipei, Taiwan. The workshop is held every two years and has a tradition of bringing together researchers from academia and industry in an intimate and collegial setting to discuss problems of common interest in automatic speech recognition and understanding. The conference will be an in-person event (with a virtual component for those who cannot attend physically), held in the Beitou area, Taipei's hot-springs district. We encourage all to join us for this wonderful event; we look forward to seeing you in Taiwan. The paper submission deadline is July 3rd, 2023.
http://www.asru2023.org/


3-3-44(2024-05-13) 13th International Seminar on Speech Production, Autrans, France
13th International Seminar on Speech Production, 13-17 May 2024, in Autrans, France
 
It is time for the next International Seminar on Speech Production.
 
After the launch in 1988 in Grenoble, followed by editions in Leeds (1990), Old Saybrook (1993), Autrans (1996), Kloster Seeon (2000), Sydney (2003), Ubatuba (2006), Strasbourg (2008), Montreal (2011), Cologne (2014), Tianjin (2017) and a virtual edition in 2020, the 13th ISSP will come back (close) to Grenoble.
 
After a very successful virtual ISSP in 2020 (Haskins Labs), we are ready again for an in-person meeting in a very beautiful location in the mountains of Autrans (of course we will provide an option to attend virtually).
Mark your calendars for 13-17 May 2024 for the 13th International Seminar on Speech Production, co-organized by several laboratories in France.
 
More information including the website and important dates will be provided soon.
 
We are looking forward to meeting you in Autrans in 2024!
 
The organizing committee, Cécile Fougeron & Pascal Perrier together with Jalal Al-Tamimi, Pierre Baraduc, Véronique Boulanger, Mélanie Canault, Maëva Garnier, Anne Hermes, Fabrice Hirsch, Leonardo Lancia, Yves Laprie, Yohann Meynadier, Slim Ouni, Rudolph Sock, Béatrice Vaxelaire
 
Follow us on Twitter @issp2024!
 

Claire PILLOT-LOISEAU

. Associate Professor (HDR) in Phonetics
. Head of the University Diploma in French Applied Phonetics (DUPALF)

Laboratoire de Phonétique et Phonologie UMR 7018 (LPP)
Université Sorbonne Nouvelle, Institut de Linguistique et de Phonétique Générales et Appliquées (ILPGA)

. 4, rue des Irlandais, 75005 PARIS (laboratory)
. 8, Avenue de Saint Mandé, 75012 PARIS (university)

3-3-45(2024-05-20) LREC-COLING 2024 - The 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation, Turin, Italy

 LREC-COLING 2024 - The 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation

Lingotto Conference Centre - Turin (Italy)

20-25 May, 2024

 

Conference website:   https://lrec-coling-2024.lrec-conf.org/   

Twitter: @LrecColing2024

 

Two major international key players in the area of computational linguistics, the ELRA Language Resources Association (ELRA) and the International Committee on Computational Linguistics (ICCL), are joining forces to organize the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024) to be held in Turin (Italy) on 20-25 May, 2024.

The hybrid conference will bring together researchers and practitioners in computational linguistics, speech, multimodality, and natural language processing, with special attention to evaluation and the development of resources that support work in these areas.  Following in the tradition of the well-established parent conferences COLING and LREC, the joint conference will feature grand challenges and provide ample opportunity for attendees to exchange information and ideas through both oral presentations and extensive poster sessions, complemented by a friendly social program.

The three-day main conference will be accompanied by a total of three days of workshops and tutorials held in the days immediately before and after.

 

General Chairs

  • Nicoletta Calzolari, CNR-ILC, Pisa
  • Min-Yen Kan, National University of Singapore

 

Advisors to General Chairs

  • Chu-Ren Huang, The Hong Kong Polytechnic University 
  • Joseph Mariani, LISN-CNRS, Paris-Saclay University

 

Programme Chairs

  • Veronique Hoste, Ghent University 
  • Alessandro Lenci, University of Pisa
  • Sakriani Sakti, Japan Advanced Institute of Science and Technology
  • Nianwen Xue, Brandeis University

 

Management Chair

  • Khalid Choukri, ELDA/ELRA, Paris

 

Local Chairs

  • Valerio Basile, University of Turin
  • Cristina Bosco,  University of Turin
  • Viviana Patti, University of Turin 


