
ISCApad #295

Monday, January 09, 2023 by Chris Wellekens

7 Journals
7-1CfP TAL Journal (Open Access), Special issue on 'Cross/multimodal NLP'

Call for Papers

 

TAL Journal (Open Access), Special issue on 'Cross/multimodal NLP'

 

 

Submission deadline: 31 March 2022

The abstract submission deadline has been removed. The only deadline is now the full paper submission deadline.

If possible, the authors are invited to let us know their intent to submit by email to tal-63-2@sciencesconf.org one week before the full paper submission deadline.


Website: https://tal-63-2.sciencesconf.org


Natural language is not limited to the written modality. It includes and interacts with many others. On the one hand, a message can be conveyed through other language modalities, including audio (speech), gestures and facial expressions (sign language or cued speech). It may also be accompanied by social attitudes and non-verbal dimensions, including signs of affect, spontaneity, pathology, co-adaptation with dialogue participants, etc. Natural language processing (NLP) is thus a joint processing of multiple information channels. On the other hand, natural language is often used to describe concepts and denote entities that are essentially multimodal (description of an image, an event, etc.). Many problems therefore require bridges between different modalities.
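Purely as an illustration of this idea of joint processing of multiple information channels (an editorial sketch, not part of the call itself), the toy Python example below fuses a text embedding and an audio embedding into a single joint representation via late fusion; all function names, dimensions and data are hypothetical stand-ins for real encoders.

```python
# Minimal, hypothetical sketch of multimodal late fusion: two per-modality
# "encoders" produce embeddings that are concatenated into one joint vector.
# Everything here (names, dimensions, random data) is illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def embed_text(tokens, dim=16):
    """Toy stand-in for a text encoder: average of random token vectors."""
    return np.mean([rng.standard_normal(dim) for _ in tokens], axis=0)

def embed_audio(frames):
    """Toy stand-in for an acoustic encoder: mean over frame features."""
    return np.asarray(frames).mean(axis=0)

def fuse(text_vec, audio_vec):
    """Late fusion: concatenate the per-modality embeddings."""
    return np.concatenate([text_vec, audio_vec])

text_vec = embed_text(["hello", "world"])               # shape (16,)
audio_vec = embed_audio(rng.standard_normal((100, 8)))  # shape (8,)
joint = fuse(text_vec, audio_vec)                       # shape (24,)
print(joint.shape)  # the joint vector would feed any downstream classifier
```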


The objective of this special issue of the journal TAL is to promote NLP in multimodal contexts (where several modalities contribute to solving a problem) or inter-modal contexts (moving from one modality to another). The contributions expected for this special issue thus fall, among others (but not exclusively), within the following fields of application:

  • multimodal dialogue, multimodal question answering;
  • sign language, cued speech;
  • speech processing, automatic speech recognition, speech synthesis in multimodal contexts;
  • synthesis of animated emotional agents;
  • handwriting recognition and analysis of handwritten documents;
  • understanding, translation and summarisation of multimodal documents;
  • indexing, search and mining of multimedia and/or multimodal documents;
  • biological signal processing, computational psychology or sociology, for NLP;
  • inter-/multimodal human-computer interface for NLP;
  • other multimodal or inter-modal applications (automatic image captioning, image-to-text generation, generation/analysis of songs and lyrics, etc.).
In the face of the predominance of work on written language processing and the historical compartmentalisation of communities specific to each modality (image processing, signal processing, neuroscience, etc.), authors are encouraged to highlight the specificities (benefits, difficulties, perspectives, etc.) linked to inter- or multimodality in their work, for instance concerning:
  • understanding the interactions between modalities;
  • harmonisation or compatibility of representations;
  • the development of joint models or transfer from one modality to another;
  • the constitution (or even annotation) of multimodal resources;
  • etc.

IMPORTANT DATES

  • Full paper submission deadline: 31 March 2022
  • Notification to authors, first review: 15 June 2022
  • Notification to authors, second review: 15 September 2022
  • Publication: November 2022

LANGUAGE

Manuscripts may be submitted in English or French. If all authors are French speakers, they are requested to submit their contributions in French.



FORMAT

Papers must be between 20 and 25 pages, references and appendices included (no length exemptions are possible). Authors who intend to submit a paper are encouraged to click the menu item 'Paper submission' (PDF format).

 

To do so, they will need to have, or to create, an account on the sciencesconf platform (go to http://www.sciencesconf.org and click on 'create account' next to the 'Connect' button at the top of the page). To submit, come back to the page https://tal-63-2.sciencesconf.org/, log in to this account and upload the submission.

 

From now on, TAL will perform double-blind review: it is thus necessary to anonymize the manuscript and the name of the pdf file.

 

Style sheets are available for download on the Web site of the journal (https://www.atala.org/content/instruction-authors-style-files-0)

 

 

ABOUT THE JOURNAL

Traitement Automatique des Langues (TAL, 'Natural Language Processing') is an international journal published by ATALA (French Association for Natural Language Processing) since 1960 with the support of the CNRS (French National Centre for Scientific Research). It has moved to an electronic mode of publication, with printing on demand. This has no impact, however, on the reviewing and selection process.

 

EDITORIAL BOARD

Guest editors:

  • Gwénolé Lecorvé (Orange)
  • John D. Kelleher (TU Dublin)

Members (under development):

  • Loïc Barrault (U. Le Mans)
  • Marion Blondel (CNRS)
  • Chloé Clavel (Telecom Paris)
  • Camille Guinaudeau (U. Paris Saclay)
  • Hervé Le Borgne (CEA)
  • Damien Lolive (U. Rennes 1)
  • Slim Ouni (U. Lorraine)

 

 

 

7-2TAL Journal: Special issue: review articles

TAL Journal: Special issue: review articles

http://tal-63-3.sciencesconf.org/ (page soon available)


2022 Volume 63 Number 3

Deadline for submission: 1 July 2022

Editors: Cécile Fabre, Emmanuel Morin, Sophie Rosset and Pascale Sébillot


FOCUS OF THE ISSUE:

This special issue of the journal TAL invites articles that summarize the current state of knowledge in one of the fields of natural language processing.

The article will discuss research carried out in the chosen field and will show how it has evolved up to the most recent advances. The synthesis must be rigorous, clear and accessible to readers of the journal TAL who are not specialists in the subject of the article. It should provide a perspective on the work presented, allowing the reader to understand the relations between the various lines of research.

The topics covered are those usually targeted by the varia issues of the journal, i.e. all aspects of the automatic processing of written, spoken and signed languages and of computational linguistics. The theme of the article may be chosen from the following fields (non-exhaustive list):

- Computational models of language, statistical learning and modeling
- Lexical and terminological resources
- Linguistic tools (tokenization, tagging, parsing, etc.)
- Intermodality and multimodality
- Language multiplicity and diversity, multilingual processing, translation
- Semantics, discourse, pragmatics, comprehension
- Information access and text mining
- Text generation and synthesis
- Speech or sign language recognition/synthesis
- Dialogue
- Evaluation
- Explainability and reproducibility
- NLP in interaction with other disciplines (e.g. digital humanities)
- NLP and ethics

Note the intention-to-submit date (see the calendar below): authors must submit an abstract presenting the chosen theme. This phase is important for the selection of external reviewers specialized in the subject matter.


LANGUAGE


Articles are written in English or French. Submissions in English are
accepted only if one of the co-authors is not French speaking.


THE JOURNAL


TAL (Traitement Automatique des Langues / Natural Language Processing)
(http://www.atala.org/revuetal) is an international journal published
by ATALA (French Association for Natural Language Processing) since 1960, with the support of the national centre for scientific research (CNRS). It is published in electronic format, with immediate free access to published articles.


IMPORTANT DATES (PROVISIONAL)

First call: March 2022

Intent to submit (abstract presenting the theme): April 29, 2022

Deadline for submission: July 1, 2022

Notification to authors after first review: end of September 2022

Notification to authors after second review: end of Dec 2022

Publication: February 2023


SUBMISSION FORMAT


The length of the papers must be between 20 and 25 pages.


The TAL journal has a double-blind review process. It is therefore necessary to anonymize the article and the name of the file, and to avoid self-references.


Style sheets are available on the journal's website
(https://www.atala.org/content/instruction-authors-style-files-0).


Authors are invited to submit their paper via the 'Paper submission' menu (PDF format). To do so, you will need an account on the sciencesconf platform (http://www.sciencesconf.org); click on 'create account' next to the 'Connect' button at the top of the page. To submit, come back to the page (soon available) http://tal-63-3.sciencesconf.org/, log in to your account and upload your submission.

SCIENTIFIC COMMITTEE

Each article is evaluated by three reviewers, two external reviewers and a member of the editorial board of the journal TAL. The list of the members of the editorial board of the journal is available at http://www.atala.org/content/comit%C3%A9-de-r%C3%A9daction-0


7-3Call for papers: 'Le masque du locuteur' (the speaker's mask), Langue(s) et Parole journal, 2022

The speaker's mask: a transdisciplinary probe into the complexity of speech

Call for papers

At the time of this call, the current health crisis still requires the world population to wear face masks designed to protect each individual from the droplets and aerosols received or projected while breathing, speaking or singing. Several publications attest to the effectiveness of approved masks in providing this protection, which varies according to the material of the equipment and its duration of use. However, the study of these devices is also a matter for the human sciences. In particular, the language sciences have much to contribute to the study of the effects of the mask on the wearer's ability to be heard (Giovanelli et al. 2021) and intelligible (Palmiero et al. 2016), regardless of the communication situation or speech style (Cohn et al. 2021). Speech acoustics (Magee et al. 2020), phonetics, discourse analysis (Onipede 2021), modeling, recognition (Kodali et al. 2021), and psycholinguistics can all, for example, be brought to bear on such research into a current, universal, everyday concern. Although several publications have recently appeared in this field, many avenues remain to be explored in this transdisciplinary subject. In this volume, we propose to pursue this reflection through, at least, the following themes:

• Speech perception: How does hiding the lower part of the face alter the receiver’s perception of the message produced by the emitter, and how does the latter adapt to this communicational change?

• Spoken articulation: How does the articulatory discomfort experienced by the masked speaker alter the management of speech production? Does it depend on the segmental and prosodic composition of the speech, on the speaking style or on the communication situation, or even on the representation that the speaker makes of the discomfort caused by the mask to his/her interlocutor?

• Voice: What are the effects of wearing a mask on the produced, perceived and felt timbre of the spoken, declaimed or sung voice, in an ecological situation (artistic for example, or in a training context)?


• Modeling and recognition of speech: What modeling and recognition of a 'masked speech signal' are possible?

• Transmission of speech: What are the issues in terms of acquisition and education related to wearing a face mask? (Early childhood, schooling, native and foreign languages, teaching, etc.)

• Discourse analysis: What are the discourses produced in terms of behavior and reaction, emotions and affects, feelings and proprioception, nonverbal communication and aesthetics as a result of wearing the mask? Does wearing the mask (and subsequently dropping it) have any impact (positive or negative) on the speaker's self-esteem and confidence in front of a group?

• Clinical phonetics and linguistics: What links can exist between pathologies of the areas covered by the mask and its use? Can the use of the mask cause significant alterations? What are the adaptations to be made when wearing the mask with respect to voice, speech or communication pathologies? Is wearing a mask linked to an increase in vocal fatigue in speakers who have to practice a profession that requires oral expression in front of a large audience?

• Engineering: How can knowledge of the effects of the mask on spoken communication be useful to designers of new devices that are better suited to the communication situation and/or the particularities of the target speakers?

In this respect, this volume aims to bring together current research on the issues and effects of wearing a mask as they can be studied from the point of view of the various components of the language sciences (linguistics, phonetics, psycholinguistics, clinical phonetics and linguistics, didactics, sociolinguistics) and their implementation in certain communication contexts (speech therapy, psychology, artistic disciplines, etc.). These reflections, in their theoretical aspect, will contribute to systemic analyses of oral communication, for example by providing data that shed light on the mechanisms of compensation and of reorganization of voice, speech or discourse in response to this multi-effect, external disrupter (Vaxelaire et al. 2007) of language and oral communication systems. The examples of themes proposed above aim, by their diversity, to give the reader a sense of the breadth of the epistemological span targeted by this volume of contributions. Other topics may of course be considered, provided that the research questions involved address the impact of wearing a mask on the functioning of language and thus contribute, in a transdisciplinary approach, to developing knowledge about the latter. From an applied point of view, it is hoped that the contributions collected will help to optimize strategies for overcoming the disturbances that result from wearing the mask.

Coordinators:
Claire Pillot-Loiseau, Université Sorbonne Nouvelle, Paris, France
Bernard Harmegnies, Université de Mons, Mons, Belgium

Languages of publication: French, English

Author guidelines: https://revistes.uab.cat/languesparole/languesparole/languesparole/about/submissions

Timeline: May 30, 2022: deadline for submission of article proposals to be sent to: r.langues.parole@uab.cat, claire.pillot@sorbonne-nouvelle.fr, Bernard.HARMEGNIES@umons.ac.be

End of 2022: print and online publication of the issue

References

Cohn, M., Pycha, A., & Zellou, G. (2021). Intelligibility of face-masked speech depends on speaking style: Comparing casual, clear, and emotional speech. Cognition, 210, 1-5. https://doi.org/10.1016/j.cognition.2020.104570

Giovanelli, E., Valzolgher, C., Gessa, E., Todeschini, M., & Pavani, F. (2021). Unmasking the difficulty of listening to talkers with masks: Lessons from the COVID-19 pandemic. i-Perception, 12(2), 1-11. https://doi.org/10.1177/2041669521998393

Kodali, R. K., & Dhanekula, R. (2021). Face mask detection using deep learning. In 2021 International Conference on Computer Communication and Informatics (ICCCI) (pp. 1-5). IEEE.

Magee, M., Lewis, C., Noffs, G., Reece, H., Chan, J. C., Zaga, C. J., ... & Vogel, A. P. (2020). Effects of face masks on acoustic analysis and speech perception: Implications for peri-pandemic protocols. The Journal of the Acoustical Society of America, 148(6), 3562-3568.

Onipede, F. M. (2021). Nigerians' reactions towards COVID-19 pandemic health precautions: A pragma-semiotic analysis. International Review of Social Sciences Research, 1(1), 1-24.

Palmiero, A. J., Symons, D., Morgan III, J. W., & Shaffer, R. E. (2016). Speech intelligibility assessment of protective facemasks and air-purifying respirators. Journal of Occupational and Environmental Hygiene, 13(12), 960-968.

Vaxelaire, B., Sock, R., Kleiber, G., & Marsac, F. (2007). Perturbations et Réajustements. Langue et langage. Publications de l'Université Marc Bloch - Strasbourg 2.


7-4Research Topic (thematic issue): Science, Technology and Art in the Spoken Expression of Meaning

Research Topic (thematic issue): Science, Technology and Art in the Spoken Expression of Meaning

 

Nonverbal language is a rich source of indexical and symbolic information in speech communication and a demanding field for scientific, technological, and artistic investigation. There is no speech communicative interaction in which pragmatic meanings are not conveyed by voice quality, speech prosody or body gestures. Structuring prosodic information in its varied forms is the means the speaker uses to give particular meaning to what he or she says. The study of the prosody of speech produced by a human speaker or a machine in real-life situations (for informing, for sustaining a dialogue in human-human or human-machine interaction, for entertaining, for communicating and for impressing the listener) can pave the way to a better understanding of how prosody shapes speech production and perception in domains as diverse as technology, art, and scientific investigation.

The main goal of this research topic is to shed light on how vocal and
visual prosodic information varies across the most diverse situations of real
life to express meaning. Work on emotional prosody, on vocal aesthetics,
on the prosodic expression of different attitudes in real-life situations,
including human-machine interaction and art and entertainment, is the
privileged kind of investigation we welcome in this research topic.

Particular themes may include, but are not limited to:

1. Prosodic meaning in spontaneous situations.
2. Indexical meaning in spoken communication.
3. Prosody of speaking, declamation and singing styles.
4. Voices and talking heads in TTS systems.
5. Voices in oral poetry and synesthetic kinds of art.
6. Voices in the entertainment industry, especially in animations.
7. Voices in the cinema, theatre, TV, radio, and other media performances.
8. Vocal and visual prosody in linguistic, paralinguistic, and extralinguistic
features.
9. Vocal stereotypes, sound symbolism, and voice aesthetics.
10. Sonorities and their synesthetic perceptual effects.
11. Sonorities and their effect on wellbeing, mental health, and related
areas.
12. Sonorities in sensory landscapes.
13. Sonorities in animal communication.
14. Sonorities and cross modalities.

 

Submission Deadlines

  • Abstract: 18 July 2022
  • Manuscript: 04 November 2022

Link to submission: https://www.frontiersin.org/research-topics/38820/science-technology-and-art-in-the-spoken-expression-of-meaning


7-5CfP IEEE/ACM Transactions on ASLP, ACM/TASLP Special issue on the 9th and 10th Dialog System Technology Challenge

Call for Papers
IEEE/ACM Transactions on Audio, Speech
and Language Processing

ACM/TASLP Special Issue on

The Ninth and Tenth Dialog System Technology Challenge

 
Call for Participation: The Dialog System Technology Challenge (DSTC) is an ongoing series of research competitions for dialog systems. To accelerate the development of new dialog technologies, the DSTCs have provided common testbeds for various research problems. The Ninth and Tenth Dialog System Technology Challenge (DSTC9&10) consist of the following nine main tracks.
 

DSTC9

  • Beyond Domain APIs: Task-oriented Conversational Modeling with Unstructured Knowledge Access
  • Multi-domain Task-oriented Dialog Challenge II
  • Interactive Evaluation of Dialog
  • SIMMC: Situated Interactive Multi-Modal Conversational AI

DSTC10

  • MOD: Internet Meme Incorporated Open-domain Dialog
  • Knowledge-grounded Task-oriented Dialogue Modeling on Spoken Conversations
  • SIMMC 2.0: Situated Interactive Multimodal Conversational AI
  • Reasoning for Audio Visual Scene-Aware Dialog
  • Automatic Evaluation and Moderation of Open-domain Dialogue Systems
This special issue will host work on any of the DSTC9&10 tasks. Papers may describe entries in the official DSTC9&10 challenge, or any research utilizing their datasets irrespective of the participation in the official challenge. We also welcome papers that analyze the DSTC9&10 tasks or results themselves. Finally, we also invite papers on previous DSTC tasks as well as general technical papers on any dialog-related research problems.
 

Submission Guidelines


Visit the Information for Authors page on the SPS website for details.
Submit your paper on the ScholarOne system.

7-6CfP IEEE/ACM Transactions on Audio, Speech and Language Processing
 
 

Call for Papers
IEEE/ACM Transactions on Audio, Speech
and Language Processing

ACM/TASLP Special Issue on

The Ninth and Tenth Dialog System Technology Challenge

 
Call for Participation: The Dialog System Technology Challenge (DSTC) is an ongoing series of research competitions for dialog systems. To accelerate the development of new dialog technologies, the DSTCs have provided common testbeds for various research problems. The Ninth and Tenth Dialog System Technology Challenge (DSTC9&10) consist of the following nine main tracks.
 

DSTC9

  • Beyond Domain APIs: Task-oriented Conversational Modeling with Unstructured Knowledge Access
  • Multi-domain Task-oriented Dialog Challenge II
  • Interactive Evaluation of Dialog
  • SIMMC: Situated Interactive Multi-Modal Conversational AI

DSTC10

  • MOD: Internet Meme Incorporated Open-domain Dialog
  • Knowledge-grounded Task-oriented Dialogue Modeling on Spoken Conversations
  • SIMMC 2.0: Situated Interactive Multimodal Conversational AI
  • Reasoning for Audio Visual Scene-Aware Dialog
  • Automatic Evaluation and Moderation of Open-domain Dialogue Systems
This special issue will host work on any of the DSTC9&10 tasks. Papers may describe entries in the official DSTC9&10 challenge, or any research utilizing their datasets irrespective of the participation in the official challenge. We also welcome papers that analyze the DSTC9&10 tasks or results themselves. Finally, we also invite papers on previous DSTC tasks as well as general technical papers on any dialog-related research problems.
 

Submission Guidelines


Visit the Information for Authors page on the SPS website for details.
Submit your paper on the ScholarOne system.

 
Submit Manuscript
 

Important Dates

  • Manuscript Submissions due: October 15, 2022
  • First review completed: December 15, 2022
  • Revised manuscript due: January 15, 2023
  • Second review completed: March 15, 2023
  • Final manuscript due: April 30, 2023
  • Expected Publication: July 2023

Guest Editors

  • Koichiro Yoshino, RIKEN, Japan
  • Chulaka Gunasekara, IBM Research AI, USA

For questions regarding the special issue, contact: steering@dstc.community

 

 
 

7-7IEEE/ACM TASLP Joint Special Issue on the Ninth and Tenth Dialog System Technology Challenge

IEEE/ACM TASLP Joint Special Issue on the Ninth and Tenth Dialog System Technology Challenge

 

Call for Participation

Websites: https://signalprocessingsociety.org/blog/ieee-acmtaslp-special-issue-ninth-and-tenth-dialog-system-technology-challenge

https://dstc10.dstc.community/call-for-talsp-papers

===========================================================================================

 

The Dialog System Technology Challenge (DSTC) is an ongoing series of research competitions for dialog systems. To accelerate the development of new dialog technologies, the DSTCs have provided common testbeds for various research problems. The Ninth and Tenth Dialog System Technology Challenge (DSTC9&10) consist of the following nine main tracks.

 

DSTC9:

- Beyond Domain APIs: Task-oriented Conversational Modeling with Unstructured Knowledge Access
- Multi-domain Task-oriented Dialog Challenge II
- Interactive Evaluation of Dialog
- SIMMC: Situated Interactive Multi-Modal Conversational AI

DSTC10:

- MOD: Internet Meme Incorporated Open-domain Dialog
- Knowledge-grounded Task-oriented Dialogue Modeling on Spoken Conversations
- SIMMC 2.0: Situated Interactive Multimodal Conversational AI
- Reasoning for Audio Visual Scene-Aware Dialog
- Automatic Evaluation and Moderation of Open-domain Dialogue Systems

 

 

This special issue will host work on any of the DSTC9&10 tasks. Papers may describe entries in the official DSTC9&10 challenge, or any research utilizing their datasets irrespective of the participation in the official challenge. We also welcome papers that analyze the DSTC9&10 tasks or results themselves. Finally, we also invite papers on previous DSTC tasks as well as general technical papers on any dialog-related research problems.

 

Submission requirements

-----------------------

You can get the author guide from the following link:

 

https://signalprocessingsociety.org/publications-resources/information-authors

 

Submission site

-----------------------

Submit your paper at mc.manuscriptcentral.com/tasl-ieee

 

Important Dates

-----------------------

- Manuscript submission date: October 15, 2022
- First review completed: December 15, 2022
- Revised manuscript due: January 15, 2023
- Second review completed: March 15, 2023
- Final manuscript due: April 30, 2023
- Expected publication date: July 2023

 

 

 

CONTACT

-------

For any query regarding this special issue, please contact steering@dstc.community

 

Guest Editors

Koichiro Yoshino, RIKEN, Japan

Chulaka Gunasekara, IBM Research AI, USA


7-8Special Issue at the IEEE Transactions on Multimedia: 'Pre-trained Models for Multi-Modality Understanding'

Dear Colleagues,

We are organizing a Special Issue at the IEEE Transactions on Multimedia:
'Pre-trained Models for Multi-Modality Understanding'

Submission deadline: January 15, 2023
First Review: April 1, 2023
Revisions due: June 1, 2023
Second Review: August 15, 2023
Final Manuscripts: September 15, 2023
Publication date: September 30, 2023

For more details please visit our CFP website at:
https://signalprocessingsociety.org/sites/default/files/uploads/special_issues_deadlines/TMM_SI_pre_trained.pdf
*********************************************************************************************************************************************************

Best regards,
Zakia Hammal


7-9ACM/TASLP Special Issue on The Ninth and Tenth Dialog System Technology Challenge

Call for Papers
IEEE/ACM Transactions on Audio, Speech
and Language Processing

ACM/TASLP Special Issue on

The Ninth and Tenth Dialog System Technology Challenge

[Deadline Extended: 15 November 2022]

 

Call for Participation: The Dialog System Technology Challenge (DSTC) is an ongoing series of research competitions for dialog systems. To accelerate the development of new dialog technologies, the DSTCs have provided common testbeds for various research problems. The Ninth and Tenth Dialog System Technology Challenge (DSTC9&10) consist of the following nine main tracks.
 

DSTC9

  • Beyond Domain APIs: Task-oriented Conversational Modeling with Unstructured Knowledge Access

  • Multi-domain Task-oriented Dialog Challenge II

  • Interactive Evaluation of Dialog

  • SIMMC: Situated Interactive Multi-Modal Conversational AI

DSTC10

  • MOD: Internet Meme Incorporated Open-domain Dialog

  • Knowledge-grounded Task-oriented Dialogue Modeling on Spoken Conversations

  • SIMMC 2.0: Situated Interactive Multimodal Conversational AI

  • Reasoning for Audio Visual Scene-Aware Dialog

  • Automatic Evaluation and Moderation of Open-domain Dialogue Systems

This special issue will host work on any of the DSTC9&10 tasks. Papers may describe entries in the official DSTC9&10 challenge, or any research utilizing their datasets irrespective of the participation in the official challenge. We also welcome papers that analyze the DSTC9&10 tasks or results themselves. Finally, we also invite papers on previous DSTC tasks as well as general technical papers on any dialog-related research problems.

 
 

 


 

Submission Guidelines


Visit the Information for Authors page on the SPS website for details.
Submit your paper on the ScholarOne system.

 

 

 

Important Dates

  • Manuscript Submissions due: November 15, 2022 (Extended)
  • First review completed: January 15, 2023
  • Revised manuscript due: February 15, 2023
  • Second review completed: April 15, 2023
  • Final manuscript due: May 30, 2023
  • Expected Publication: August 2023

Guest Editors

  • Koichiro Yoshino, RIKEN, Japan

  • Chulaka Gunasekara, IBM Research AI, USA

For questions regarding the special issue, contact: steering@dstc.community


7-10CfP Special Issue 'Sensor-Based Approaches to Understanding Human Behavior'
CfP Special Issue 'Sensor-Based Approaches to Understanding Human Behavior'
 
Open-Access Journal 'Sensors' (ISSN 1424-8220). Impact factor: 3.847
 
 
Deadline for manuscript submissions: 10 June 2023
 
Guest editor:
Oliver Niebuhr
Associate Professor of Communication Technology
Head of the Acoustics LAB
Centre for Industrial Electronics
University of Southern Denmark, Sonderborg
 
 
 
Motivation of the special issue
-------------------------------
Research is currently at a point at which the development of cheap and powerful sensor technology coincides with the increasingly complex multimodal analysis of human behavior. In other words, we are at a point where the application of existing sensor technology as well as the development of new sensor technology can significantly advance our understanding of human behavior. Our special issue takes up this momentum and is intended to bring together for the first time strongly cross-disciplinary research under one roof – so that the individual disciplines can inform, inspire, and stimulate each other. Accordingly, human behavior is meant in a very broad sense and supposed to include, besides speech behavior, the manifestations of emotions and moods, body language, body movements (including sports), pain, stress, and health issues related to behavior or behavior changes, behavior towards robots or machines in general, etc. Examples of relevant sensor technologies are EEG, EMG, SCR (skin-conductance response), Heart Rate, Breathing, EGG (Electroglottogram), tactile/pressure sensors, gyroscopes and/or accelerometers capturing body movements, VR/AR-related sensor technologies, pupillometry, etc. Besides empirical contributions addressing current questions, outlines of future trends, review articles, applications of AI to sensor technology, and presentations of new pieces of sensor technology are very welcome too.
 
Submission information
-----------------------
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement via the MDPI's website.
 
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.
 
Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

7-11Call for contributions for issue 39 (2023) of the TIPA journal

Call for contributions for issue 39 (2023) of the TIPA journal, entitled:

'Discours, littératie et littérature numérique : quels enjeux créatifs et didactiques ?' (Discourse, literacy and digital literature: what creative and didactic challenges?)

Call for papers: https://journals.openedition.org/tipa/6064

Submission deadline: 1 February 2023

 


7-12Etudes créoles

We are very pleased to announce the publication of Études créoles on the OpenEdition online journal platform, https://journals.openedition.org/etudescreoles.

The journal Études créoles publishes linguistic analyses of creole languages, as well as work on the history, anthropology, literatures and cultures of the creole worlds.

It was published in print at the Université de Provence from 1978 to 2010. Since 2015, it has been edited by the Laboratoire Parole et Langage (LPL), a joint research unit (UMR) of the CNRS and Aix-Marseille Université, as an open-access electronic publication.

Following the acceptance of its application to OpenEdition and the award of a grant from the Fonds national pour la science ouverte (French National Fund for Open Science) in 2021, we were able to complete the transition to OpenEdition. Thanks to this new hosting, the journal will now benefit from optimal indexing.

We wish you pleasant reading and look forward to new article submissions.

The editorial team of Études créoles


7-13CfP IEEE/ACM Transactions on ASLP, ACM/TASLP Special issue on Speech and Language Technologies for Low-resource Languages

Call for Papers
IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP)

TASLP Special Issue on

Speech and Language Technologies for Low-resource Languages

 
Speech and language processing is a multi-disciplinary research area that focuses on various aspects of natural language processing and computational linguistics. Speech and language technologies concern the study of methods and tools for developing innovative paradigms for processing human language (speech and writing) so that it can be recognized by machines, building on the remarkable advances in machine learning and artificial intelligence techniques that effectively interpret speech and textual sources. In general, speech technologies comprise a series of artificial intelligence algorithms that enable a computer system to produce, analyze, modify and respond to human speech and text. They establish a more natural interaction between humans and computers, enable translation between human languages, and support the effective analysis of text and speech. These techniques have significant applications in computational linguistics, natural language processing, computer science, mathematics, speech processing, machine learning, and acoustics. Another important application of this technology is machine translation of text and voice.
 
There is a huge gap between speech and language processing for well-resourced languages and for low-resource languages, as the latter have far fewer computational resources. With access to the vast amount of material available from digital sources, numerous language processing problems could be resolved in real time with enhanced user experience and productivity. Speech and language processing technologies for low-resource languages are still in their infancy. Research in this area will increase the likelihood of these languages becoming an active part of our digital lives, as their importance is paramount. Furthermore, the societal shift towards digital media, together with spectacular advances in processing power, computational storage, and software capabilities, opens the way to turning low-resource language resources into efficient computational models.

This special issue aims to explore speech and language processing technologies and novel computational models for processing speech, text, and language. The expected novel and innovative solutions focus on content production, knowledge management, and natural communication for low-resource languages. We invite researchers and practitioners working in speech and language processing to present their novel and innovative research contributions to this special issue.
 

Topics of Interest


Topics of interest for this special issue include (but are not limited to):
  • Artificial intelligence assisted speech and language technologies for low-resource languages
  • Pragmatics for low resource languages
  • Emerging trends in knowledge representation for low resource languages
  • Machine translation for low resource language processing
  • Sentiment and statistical analysis for low resource languages
  • Automatic speech recognition and speech technology for low resource languages
  • Multimodal analysis for low resource languages
  • Information retrieval and extraction of low resource languages
  • Argument mining for low resource language processing
  • Text summarization and speech synthesis
  • Sentence-level semantics for speech recognition

Submission Guidelines


Manuscripts should be submitted through the Manuscript Central system

 

Important Dates

  • Manuscript Submissions due:  29 December 2022
  • Authors notification: 10 February 2023
  • Revised version submission: 15 April 2023
  • Final decision notification: 20 June 2023

 


 

 

Guest Editors


7-14CfP Special issue of Advanced Robotics on Multimodal Processing and Robotics for Dialogue Systems

Call for Papers
Advanced Robotics Special Issue on
Multimodal Processing and Robotics for Dialogue Systems
Co-Editors:
Prof. David Traum (University of Southern California, USA)
Prof. Gabriel Skantze (KTH Royal Institute of Technology, Sweden)
Prof. Hiromitsu Nishizaki (University of Yamanashi, Japan)
Prof. Ryuichiro Higashinaka (Nagoya University, Japan)
Dr. Takashi Minato (RIKEN/ATR, Japan)
Prof. Takayuki Nagai (Osaka University, Japan)
               
Publication in Vol. 37, Issue 21 (Nov 2023)
SUBMISSION DEADLINE: 31 Jan  2023

In recent years, as seen in smart speakers such as Google Home and Amazon Alexa, there has been remarkable progress in spoken dialogue system technology that converses with users through human-like utterances. In the future, such dialogue systems are expected to support our daily activities in various ways. However, dialogue in daily activities is more complex than dialogue with smart speakers; even with current spoken dialogue technology, it is still difficult to maintain a successful dialogue in many situations. For example, in customer service through dialogue, operators need to respond appropriately to the different ways of speaking and the requests of various customers. In such cases, we humans can switch our speaking manner depending on the type of customer, and we can carry out the dialogue successfully by using not only our voice but also our gaze and facial expressions.
This type of human-like interaction is far from possible with existing spoken dialogue systems. Humanoid robots have the potential to realize such interaction, because they can recognize not only the user's voice but also facial expressions and gestures using various sensors, and can express themselves in various ways, such as gestures and facial expressions, using their bodies. Their many means of expression have the potential to sustain a dialogue in a manner different from conventional dialogue systems.
The combination of such robots and dialogue systems can greatly expand the possibilities of dialogue systems, while at the same time providing a variety of new challenges. Various research and development efforts are currently underway to address these new challenges, including the 'dialogue robot competition' at IROS2022.
In this special issue, we invite a wide range of papers on multimodal dialogue systems and dialogue robots, their applications, and fundamental research. Prospective contributed papers are invited to cover, but are not limited to, the following topics on multimodal dialogue systems and robots:
 
*Spoken dialogue processing
*Multimodal processing
*Speech recognition
*Text-to-speech
*Emotion recognition
*Motion generation
*Facial expression generation
*System architecture
*Natural language processing
*Knowledge representation
*Benchmarking
*Evaluation method
*Ethics
*Dialogue systems and robots for competition
Submission:
The full-length manuscript (either PDF file or MS word file) should be sent by 31st Jan 2023 to the office of Advanced Robotics, the Robotics Society of Japan through the on-line submission system of the journal (https://www.rsj.or.jp/AR/submission). Sample manuscript templates and detailed instructions for authors are available at the website of the journal.
Note that word count includes references. Captions and author bios are not included.
For special issues, longer papers can be accepted if the editors approve.
Please contact the editors before the submission if your manuscript exceeds the word limit.



