ISCA - International Speech
Communication Association


ISCApad Archive  »  2023  »  ISCApad #302  »  Journals

ISCApad #302

Friday, August 11, 2023 by Chris Wellekens

7 Journals
7-1Special Issue at the IEEE Transactions on Multimedia: 'Pre-trained Models for Multi-Modality Understanding'

Dear Colleagues,

We are organizing a Special Issue at the IEEE Transactions on Multimedia:
'Pre-trained Models for Multi-Modality Understanding'

Submission deadline: January 15, 2023
First Review: April 1, 2023
Revisions due: June 1, 2023
Second Review: August 15, 2023
Final Manuscripts: September 15, 2023
Publication date: September 30, 2023

For more details please visit our CFP website at:
https://signalprocessingsociety.org/sites/default/files/uploads/special_issues_deadlines/TMM_SI_pre_trained.pdf

Best regards,
Zakia Hammal

Top

7-2CfP Special Issue 'Sensor-Based Approaches to Understanding Human Behavior'
 
Open-Access Journal 'Sensors' (ISSN 1424-8220). Impact factor: 3.847
 
 
Deadline for manuscript submissions: 10 June 2023
 
Guest editor:
Oliver Niebuhr
Associate Professor of Communication Technology
Head of the Acoustics LAB
Centre for Industrial Electronics
University of Southern Denmark, Sonderborg
 
 
 
Motivation of the special issue
-------------------------------
Research is currently at a point at which the development of cheap and powerful sensor technology coincides with increasingly complex multimodal analysis of human behavior. In other words, we are at a point where the application of existing sensor technology, as well as the development of new sensor technology, can significantly advance our understanding of human behavior. Our special issue takes up this momentum and is intended to bring together, for the first time, strongly cross-disciplinary research under one roof, so that the individual disciplines can inform, inspire, and stimulate each other. Accordingly, human behavior is meant in a very broad sense and is intended to include, besides speech behavior, the manifestations of emotions and moods, body language, body movements (including sports), pain, stress, health issues related to behavior or behavior changes, behavior towards robots or machines in general, etc. Examples of relevant sensor technologies are EEG, EMG, SCR (skin-conductance response), heart rate, breathing, EGG (electroglottogram), tactile/pressure sensors, gyroscopes and/or accelerometers capturing body movements, VR/AR-related sensor technologies, pupillometry, etc. Besides empirical contributions addressing current questions, outlines of future trends, review articles, applications of AI to sensor technology, and presentations of new pieces of sensor technology are very welcome too.
 
Submission information
-----------------------
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on the MDPI website.
 
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.
 
Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.
Top

7-3Call for contributions for issue 39 (2023) of the journal TIPA

The call for contributions for issue 39 (2023) of the journal TIPA, entitled

Discourse, literacy and digital literature: what creative and didactic challenges?

has been extended until 15 June 2023.

Call for papers: https://journals.openedition.org/tipa/6064

Submission deadline: 15 June 2023

Top

7-4CfP Special issue of Advanced Robotics on Multimodal Processing and Robotics for Dialogue Systems

Call for Papers
Advanced Robotics Special Issue on
Multimodal Processing and Robotics for Dialogue Systems
Co-Editors:
Prof. David Traum (University of Southern California, USA)
Prof. Gabriel Skantze (KTH Royal Institute of Technology, Sweden)
Prof. Hiromitsu Nishizaki (University of Yamanashi, Japan)
Prof. Ryuichiro Higashinaka (Nagoya University, Japan)
Dr. Takashi Minato (RIKEN/ATR, Japan)
Prof. Takayuki Nagai (Osaka University, Japan)
               
Publication in Vol. 37, Issue 21 (Nov 2023)
SUBMISSION DEADLINE: 31 Jan 2023

In recent years, as seen in smart speakers such as Google Home and Amazon Alexa, there has been remarkable progress in spoken dialogue system technology for conversing with users in human-like utterances. In the future, such dialogue systems are expected to support our daily activities in various ways. However, dialogue in daily activities is more complex than dialogue with smart speakers; even with current spoken dialogue technology, it is still difficult to maintain a successful dialogue in a variety of situations. For example, in customer service through dialogue, operators must respond appropriately to the different ways of speaking and requests of various customers. In such cases, we humans can switch our manner of speaking depending on the type of customer, and we carry out the dialogue successfully by using not only our voice but also our gaze and facial expressions.
This type of human-like interaction is far from possible with existing spoken dialogue systems. Humanoid robots have the potential to realize such interaction, because they can recognize not only the user's voice but also facial expressions and gestures using various sensors, and can express themselves in various ways, such as gestures and facial expressions, using their bodies. Their many means of expression have the potential to sustain dialogue in a manner different from conventional dialogue systems.
The combination of such robots and dialogue systems can greatly expand the possibilities of dialogue systems, while at the same time providing a variety of new challenges. Various research and development efforts are currently underway to address these new challenges, including the 'dialogue robot competition' at IROS 2022.
In this special issue, we invite a wide range of papers on multimodal dialogue systems and dialogue robots, their applications, and fundamental research. Prospective contributed papers are invited to cover, but are not limited to, the following topics on multimodal dialogue systems and robots:
 
*Spoken dialogue processing
*Multimodal processing
*Speech recognition
*Text-to-speech
*Emotion recognition
*Motion generation
*Facial expression generation
*System architecture
*Natural language processing
*Knowledge representation
*Benchmarking
*Evaluation method
*Ethics
*Dialogue systems and robots for competition
Submission:
The full-length manuscript (either a PDF file or an MS Word file) should be sent by 31st Jan 2023 to the office of Advanced Robotics, the Robotics Society of Japan, through the on-line submission system of the journal (https://www.rsj.or.jp/AR/submission). Sample manuscript templates and detailed instructions for authors are available at the website of the journal.
Note that the word count includes references; captions and author bios are not included.
For special issues, longer papers can be accepted if the editors approve.
Please contact the editors before submission if your manuscript exceeds the word limit.

Top

7-5CfP ACM Trans.MCCA, Special Issue on Realistic Synthetic Data: Generation, Learning, Evaluation

ACM Transactions on Multimedia Computing, Communications, and Applications
Special Issue on Realistic Synthetic Data: Generation, Learning, Evaluation
Impact Factor 4.094
https://mc.manuscriptcentral.com/tomm
Submission deadline: 31 March 2023


*** CALL FOR PAPERS ***

[Guest Editors]
Bogdan Ionescu, Universitatea Politehnica din Bucuresti, România
Ioannis Patras, Queen Mary University of London, UK
Henning Muller, University of Applied Sciences Western Switzerland, Switzerland
Alberto Del Bimbo, Università degli Studi di Firenze, Italy

[Scope]
In the current context of Machine Learning (ML) and Deep Learning
(DL), data, and especially high-quality data, are central to the
proper training of networks. It is well known that DL models require
a substantial quantity of annotated data to reach their full
potential. Annotating content for models is traditionally done by
human experts or at least by typical users, e.g., via crowdsourcing.
This is a tedious task that is time-consuming and expensive: massive
resources are required, content has to be curated, and so on.
Moreover, there are specific domains where data confidentiality makes
this process even more challenging, e.g., the medical domain, where
patient data cannot easily be made publicly available.

With the advancement of neural generative models such as Generative
Adversarial Networks (GANs) or, more recently, diffusion models, a
promising way of solving or alleviating the problems associated with
the need for domain-specific annotated data is realistic synthetic
data generation. These data are generated by learning specific
characteristics of different classes of target data. The advantage is
that such networks allow for infinite variations within those classes
while producing realistic outcomes, typically hard to distinguish
from real data. These data have no proprietary or confidentiality
restrictions and seem a viable solution for generating new datasets
or augmenting existing ones. Existing work shows very promising
results for signal generation, images, etc.

Nevertheless, there are some limitations that need to be overcome so
as to advance the field. For instance, how can one control/manipulate
the latent codes of GANs, or the diffusion process, so as to produce
in the output the desired classes and the desired variations, as in
real data? In many cases, results are not of high quality and a
selection must be made by the user, which amounts to manual
annotation. Bias may enter the generation process through bias in the
input dataset. Are the networks trustworthy? Does the generated
content violate data privacy? In some cases, one can infer from a
generated image the actual data used to train the network. Would it
be possible to train the networks to produce new classes and to learn
the causality of the data? How do we objectively assess the quality
of the generated data? These are just a few open research questions.
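As a toy illustration of the last question, distribution-level metrics such as the Fréchet distance (the quantity underlying the popular FID score) compare statistics of real and generated samples. Below is a minimal one-dimensional sketch on synthetic Gaussian data; the helper name and all numbers are illustrative, not from this call:

```python
import numpy as np

def frechet_1d(a, b):
    """Fréchet (2-Wasserstein) distance squared between 1-D Gaussians
    fitted to the samples a and b: (mu_a - mu_b)^2 + (sd_a - sd_b)^2."""
    return (a.mean() - b.mean()) ** 2 + (a.std() - b.std()) ** 2

rng = np.random.default_rng(1)
real = rng.normal(0.0, 1.0, 10_000)    # stand-in for real data
close = rng.normal(0.1, 1.0, 10_000)   # "good" synthetic data
far = rng.normal(2.0, 0.5, 10_000)     # "poor" synthetic data

print(frechet_1d(real, close))  # small
print(frechet_1d(real, far))    # much larger
```

The full FID pipeline applies the multivariate version of this distance to deep feature embeddings rather than raw samples; the sketch only conveys the basic idea.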

[Topics]
In this context, the special issue is seeking innovative algorithms
and approaches addressing the following topics (but is not limited
to):
- Synthetic data for various modalities, e.g., signals, images,
volumes, audio, etc.
- Controllable generation for learning from synthetic data.
- Transfer learning and generalization of models.
- Causality in data generation.
- Addressing bias, limitations, and trustworthiness in data generation.
- Evaluation measures/protocols and benchmarks to assess quality of
synthetic content.
- Open synthetic datasets and software tools.
- Ethical aspects of synthetic data.

[Important Dates]
- Submission deadline: 31 March 2023
- First-round review decisions: 30 June 2023
- Deadline for revised submissions: 31 July 2023
- Notification of final decisions: 30 September 2023
- Tentative publication: December 2023

[Submission Information]
Prospective authors are invited to submit their manuscripts
electronically through the ACM TOMM online submission system (see
https://mc.manuscriptcentral.com/tomm) while adhering strictly to the
journal guidelines (see https://tomm.acm.org/authors.cfm). For the
article type, please select the Special Issue denoted SI: Realistic
Synthetic Data: Generation, Learning, Evaluation.

Submitted manuscripts should not have been published previously, nor
be under consideration for publication elsewhere. If the submission is
an extended work of a previously published conference paper, please
include the original work and a cover letter describing the new
content and results that were added. According to ACM TOMM publication
policy, previously published conference papers can be eligible for
publication provided that at least 40% new material is included in the
journal version.

[Contact]
For questions and further information, please contact Bogdan Ionescu /
bogdan.ionescu@upb.ro.

[Acknowledgement]
The Special Issue is endorsed by the AI4Media 'A Centre of Excellence
delivering next generation AI Research and Training at the service of
Media, Society and Democracy' H2020 ICT-48-2020 project
https://www.ai4media.eu/.

On behalf of the Guest Editors,
Bogdan Ionescu
https://www.aimultimedialab.ro/

Top

7-6IEEE/ACM Transactions ASLP, TASLP Special Issue: Speech and Language Technologies for Low-resource Languages

Call for Papers
IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP)

TASLP Special Issue on

Speech and Language Technologies for Low-resource Languages

 
Speech and language processing is a multi-disciplinary research area covering various aspects of natural language processing and computational linguistics. Speech and language technologies deal with the study of methods and tools for developing innovative paradigms for processing human language (spoken and written) so that it can be recognized by machines, building on the remarkable advances in machine learning and artificial intelligence techniques that effectively interpret speech and textual sources. In general, speech technologies comprise a series of artificial intelligence algorithms that enable a computer system to produce, analyze, modify, and respond to human speech and text. They establish a more natural interaction between humans and computers, support translation between human languages, and effectively analyze text and speech. These techniques have significant applications in computational linguistics, natural language processing, computer science, mathematics, speech processing, machine learning, and acoustics. Another important application of this technology is machine translation of text and voice.
 
There exists a huge gap in speech and language processing for low-resource languages, as they have fewer computational resources. With access to the vast amount of material available from various digital sources, we can resolve numerous language processing problems in real time with enhanced user experience and productivity. Speech and language processing technologies for low-resource languages are still in their infancy. Research in this stream will increase the likelihood of these languages becoming an active part of our lives, as their importance is paramount. Furthermore, the societal shift towards digital media, together with spectacular advances in processing power, computational storage, and software capabilities, opens the prospect of turning low-resource language resources into efficient computing models.

This special issue aims to explore speech and language processing technologies and novel computational models for processing speech, text, and language. The solicited novel and innovative solutions focus on content production, knowledge management, and natural communication in low-resource languages. We welcome researchers and practitioners working in speech and language processing to present their novel and innovative research contributions to this special issue.
 

Topics of Interest


Topics of interest for this special issue include (but are not limited to):
  • Artificial intelligence-assisted speech and language technologies for low-resource languages
  • Pragmatics for low-resource languages
  • Emerging trends in knowledge representation for low-resource languages
  • Machine translation for low-resource language processing
  • Sentiment and statistical analysis for low-resource languages
  • Automatic speech recognition and speech technology for low-resource languages
  • Multimodal analysis for low-resource languages
  • Information retrieval and extraction for low-resource languages
  • Argument mining for low-resource language processing
  • Text summarization and speech synthesis
  • Sentence-level semantics for speech recognition

Submission Guidelines


Manuscripts should be submitted through the Manuscript Central system.

 

 

Important Dates

  • Submissions deadline:  30 May 2023
  • Authors notification: 25 July 2023
  • Revised version submission: 29 September 2023
  • Final decision notification: 15 December 2023

Guest Editors

Top

7-7CfP IEEE-JSTSP Special issue on Near-Field Signal Processing: Algorithms, Implementations and Applications

Call for Papers
IEEE Journal of Selected Topics in Signal Processing

JSTSP Special Issue on

Near-Field Signal Processing:
Algorithms, Implementations and Applications

 
Array signal processing technologies are moving toward the use of small, densely packed sensors yielding extremely large aperture arrays that provide higher angular resolution and beamforming gain. With the extended array aperture and small wavelength, the signal wavefront at the receiver is no longer a plane wave when the receiver is in the near field, i.e., closer to the transmitter than the Fraunhofer distance. In such a scenario, the spherical wavefront must be taken into consideration, since system performance depends on the propagation distance as well as on the direction of the signal of interest. Lately, there has also been a significant paradigm shift in both the radar and communication communities toward operating at higher frequencies, e.g., the millimeter-wave and terahertz (THz) bands, which require massive antenna arrays to achieve enhanced communication data rates and high-resolution sensing. To this end, near-field signal processing becomes a key enabling technique, providing spatial multiplexing with increased degrees of freedom and high resolution with range-dependent, very narrow beamwidths. Furthermore, while near-field signal processing is regarded as a new phenomenon in wireless communications and sensing, it has been a long-standing problem in other applications with short propagation distances, e.g., ultrasound, acoustics, microscopy, crystallography, spectroscopy, and optics. This special issue aims to bring together researchers from both academia and industry to present original, previously unavailable work on near-field signal processing.
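To make the plane-wave breakdown concrete: the classical Fraunhofer distance d_F = 2D^2/lambda (with D the array aperture and lambda the carrier wavelength) marks the conventional near-field/far-field boundary. A minimal sketch follows; the aperture and carrier values are illustrative assumptions, not taken from the call:

```python
# Fraunhofer (far-field) distance: d_F = 2 * D^2 / lambda.
# Receivers closer than d_F see a spherical, not planar, wavefront.

C = 3e8  # speed of light, m/s

def fraunhofer_distance(aperture_m: float, freq_hz: float) -> float:
    """Return the Fraunhofer distance in meters for a given aperture
    (m) and carrier frequency (Hz)."""
    wavelength = C / freq_hz
    return 2 * aperture_m ** 2 / wavelength

if __name__ == "__main__":
    # Example: a 0.5 m aperture at a 28 GHz mmWave carrier.
    d_f = fraunhofer_distance(0.5, 28e9)
    print(f"Fraunhofer distance: {d_f:.1f} m")  # ~46.7 m
```

The quadratic dependence on aperture is why extremely large arrays at mmWave/THz frequencies push the far-field boundary to tens or hundreds of meters, placing typical users in the near field.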
 

Topics of Interest


Topics of interest for this special issue include (but are not limited to):
  • Signal processing for near-field localization, direction-of-arrival estimation, and sensing
  • Spherical processing for reactive and radiative near-field, and measurements for massive MIMO communications
  • Signal processing for short-range THz communications and spatial-wideband effect
  • Active/passive near-field beamformer design for holographic-surface-assisted wireless systems
  • Applications of near-field in integrated sensing and communications
  • Novel sensor array/reflecting surface design for near-field beamforming and backscattering
  • Theoretical performance analyses and fundamental limits in near-field signal processing
  • Novel signal processing techniques to incorporate the electromagnetics and physics of near-field beamforming
  • Recent advances in mid- and near-field optics via coded diffraction patterns
  • Near-field synthetic aperture sounding and radar imaging
  • Advanced near-field signal processing techniques for ultrasound, microphone arrays, sonar, and acoustics
  • Near-field propagation modeling and hardware prototyping in microscopy, crystallography, and optics
  • Near-field wireless power transfer for 5G and beyond IoT applications
  • Real-world prototypes and testbeds for near-field signal processing systems

Submission Guidelines


The Guest Editors also welcome creative papers outside the areas listed above but related to the overall scope of the special issue. Prospective authors may contact the Guest Editors to ascertain interest on topics that are not listed, should follow the instructions given on the IEEE JSTSP webpage, and submit their manuscripts at http://mc.manuscriptcentral.com/jstsp-ieee. All submitted manuscripts will be peer-reviewed according to the standard IEEE process.

 

Important Dates

  • Manuscript Submissions due: November 15, 2023
  • First review due: January 31, 2024
  • Revised manuscript due: February 29, 2024
  • Second review due: March 31, 2024
  • Final manuscript due: May 15, 2024
  • Publication: Third Quarter 2024

Guest Editors

Top

7-8CfP MDPI: special issue on Prosody and Immigration in Languages

Dear Speech Prosody SIG members,

 

Rajiv Rao is inviting submissions for a special issue on Prosody and Immigration in Languages, an MDPI journal.  Please see

https://www.mdpi.com/journal/languages/special_issues/DTB64LM303 for details.

 

 

Research on minority immigrant languages has gained significant traction in the last decade-plus, primarily due to a substantial body of research on heritage languages (e.g., Montrul, 2015; Polinsky, 2018; Polinsky & Montrul, 2021; among many others). Developments in the phonetics and phonology of heritage languages have lagged behind those in other linguistic areas, but recent years have seen significant growth in work on sound systems as well (see, e.g., Chang, 2021; Rao, 2016, in press), especially in North America, thanks in large part to research on Spanish in the US (for an overview, see Rao, 2019) and to studies based on the Heritage Language Variation and Change Corpus in Toronto (Nagy, 2011). However, within the fields of heritage (and, more generally, minority immigrant language) phonetics and phonology, prosody remains relatively understudied. Within the realm of immigrant language prosody, we still know very little about issues such as cross-generational change; longitudinal outcomes; child versus adolescent versus adult data; older first-generation immigrants who have resided in the host country for multiple decades versus monolingual homeland speakers; the role of source input varieties; the influence of a wide range of social (level of education, age, gender, rural versus urban settings, etc.) and affective (e.g., attitudes, emotions, motivation) variables; speech rhythm; intonation across a variety of pragmatic contexts; variation in lexical tone; speakers of such languages outside of North America; and the effects of minority language prosody on local majority varieties (by no means is this an exhaustive list).

The goal of this Special Issue is to fill existing holes in the literature on prosody by addressing the topics listed above (among other possibilities), while highlighting the need for increased comparisons between first-generation immigrants and homeland speakers, as well as a wider range of coverage of languages and geographies in general (e.g., Calhoun, 2015 versus Calhoun et al., in press for data based in Oceania). Finally, this special issue complements other ones hosted by Languages:
https://www.mdpi.com/journal/languages/special_issues/Immigrant_Refugee_Languagees
https://www.mdpi.com/journal/languages/special_issues/multilingualism_migrant

Given that prosody is a key component of human communication (e.g., Gussenhoven & Chen, 2020) and that language and cultural contact caused by international movement are pervasive in many regions of the world, learning more about the interaction of these two concepts is important, not only to expand on the recent growth in heritage language sound systems, but also to gain a deeper understanding of the underpinnings of prosodic variation (for a recent contribution to this area, see Armstrong et al., 2022).

We request that, prior to submitting a manuscript, interested authors initially submit a proposed title and an abstract of 400–600 words summarizing their intended contribution. Please send it to the Guest Editor (Rajiv Rao; rgrao@wisc.edu) or to the Languages Editorial Office (languages@mdpi.com). Abstracts will be reviewed by the Guest Editor for the purposes of ensuring proper fit within the scope of the Special Issue. Full manuscripts will undergo double-blind peer-review.

Tentative completion schedule:

  • Abstract submission deadline: May 15, 2023
  • Notification of abstract acceptance: May 31, 2023
  • Full manuscript deadline: August 31, 2023
Top

7-9TAL journal special issue: Explainability of natural language processing models
The journal TAL (Traitement Automatique des Langues) invites you to submit a contribution to special issue 64(3) on the theme *Explainability of natural language processing models*.

https://tal-64-3.sciencesconf.org/

Paper submission: 15 October 2023


THE TAL JOURNAL

Since 1960, ATALA has published, with the support of the CNRS, the international journal Traitement Automatique des Langues (formerly La Traduction Automatique, then TA Informations). The journal appears three times a year.
The journal has an open-access policy: submission, publication, and access to published articles are free of charge. Published articles are available on the ATALA website and in the ACL Anthology.
Articles are written in French or in English. Submissions in English are accepted only if at least one co-author is not a French speaker.

GUEST EDITORS-IN-CHIEF

Guillaume Wisniewski, Université Paris Cité, LLF
Marianna Apidianaki, University of Pennsylvania

CALL FOR CONTRIBUTIONS

The ability of neural models to build representations of language without explicit supervision has contributed to the spectacular progress made in recent years by speech and language processing systems. While these representations make it possible to develop systems for many languages and domains, their use, in particular through pre-trained models such as BERT, comes at the expense of the interpretability of decisions: it is generally impossible to know why a system makes a given decision, and the reasons behind the good performance of state-of-the-art models remain largely unknown. To answer these questions, a growing body of work addresses the problem of the explainability of NLP systems and explores several related questions:
- the interpretation of models (model analysis & interpretability), with the aim of identifying the information encoded in neural representations [1];
- the development of methods able to describe and justify the reasoning steps (model explicability) that led a model to an answer, for instance with chains of thought (Chain-of-Thought) [2] or structured representations [4];
- the evaluation of explanation methods and their faithfulness, i.e., whether the proposed explanations reflect the reasoning process behind the model's predictions [3];
- the development of 'safe' systems capable of justifying themselves (model accountability).

This thematic issue of the TAL journal aims to take stock both of methods for analyzing and explaining NLP systems, and of the state of our knowledge about the linguistic abilities of neural models and their limits.

Solicited papers concern the following topics, without being limited to them:
- methods for explaining the decisions of neural networks (identification of salient elements, explanation in free-text form, ...), in particular those, such as counterfactual analysis methods, that establish causal relations between a prediction and an input (or part of it);
- so-called probing methods for identifying the linguistic and world knowledge captured by neural representations;
- methods for distinguishing information that is merely captured by neural networks from information that is actually used by the models to produce an answer (correlation versus causation);
- methods used to analyze neural models in related fields (experimental linguistics, computer vision, psychology, etc.);
- the identification of biases in neural language models;
- the use of prompting methods to generate explanations as text or structured representations;
- studies of artificial languages or linguistically motivated examples;
- the evaluation of explanation methods and, in particular, the definition of evaluation criteria such as faithfulness or plausibility, and the estimation of these criteria.
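As a toy illustration of the probing methods mentioned above, a linear probe is trained on frozen representation vectors to test whether some property is linearly decodable from them. The sketch below uses synthetic embeddings and labels as stand-ins for real model states; all names and numbers are illustrative:

```python
# Minimal "probing" sketch: fit a logistic-regression probe on frozen
# embeddings to check whether a binary property is linearly decodable.
import numpy as np

rng = np.random.default_rng(0)
n, d = 400, 32
X = rng.normal(size=(n, d))        # frozen "embeddings"
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)  # property encoded along w_true

# Train the probe with plain gradient descent on the logistic loss.
w = np.zeros(d)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))
    w -= 0.1 * X.T @ (p - y) / n

acc = np.mean((X @ w > 0) == (y > 0.5))
print(f"probe accuracy: {acc:.2f}")  # high if the property is decodable
```

High probe accuracy shows only that the information is present in the representations, not that the model actually uses it, which is precisely the correlation-versus-causation distinction raised in the call.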

FORMAT

Papers must be 20 to 25 pages long.

The TAL journal uses a double-blind reviewing process. Please anonymize your paper and the file name, and take care to avoid self-references.

Style sheets are available online on the journal's website (http://www.atala.org/content/instructions-aux-auteurs-feuilles-de-style-0).

Researchers intending to submit a contribution are invited to upload their paper via the 'Paper submission' menu (PDF format). To do so, if not already done, create an account on http://www.sciencesconf.org ('create an account', top left), then return to https://tal-64-3.sciencesconf.org, log in, and upload the paper.

IMPORTANT DATES

Paper submission: 15 October 2023
Notification to authors after the first review: December 2023
Notification to authors after the second review: February 2024
Publication: April 2024

REFERENCES

[1] Interpretability and Analysis in Neural NLP (Belinkov et al., ACL 2020) 
[2] Chain-of-Thought Prompting Elicits Reasoning in Large Language Models (Wei et al., NeurIPS 2022)
[3] Towards Faithful Model Explanation in NLP: A Survey (Lyu et al., arxiv 2023)
[4] Causal Reasoning of Entities and Events in Procedural Texts (Zhang et al., EACL Findings 2023)

Top

7-10Special issue: Embodied Conversational Systems for Human-Robot Interaction in Dialogue & Discourse journal.
We are delighted to announce the Special Issue on Embodied Conversational Systems for HRI in the Dialogue & Discourse journal. 
 
Special Issue Title: Embodied Conversational Systems for Human-Robot Interaction
 

Topic Area

Conversational systems such as chatbots and virtual assistants have become increasingly popular in recent years. This technology has the potential to enhance Human-Robot Interaction (HRI) and improve the user experience. However, there are significant challenges in designing and implementing effective conversational systems for HRI that need to be addressed (cf. Devillers et al. 2020; Lison & Kennington 2023). This special issue aims to bring together researchers and practitioners to explore the opportunities and challenges in developing conversational systems for human-robot interaction.

Conversational systems are an important component of human-robot interaction because they enable more natural and intuitive communication between humans and robots. By leveraging research in areas such as dialogue systems, natural language understanding, natural language generation, and multimodal interaction, robots can become more accessible, usable, and engaging. By analysing speech patterns, facial expressions, and other nonverbal cues, conversational systems can help robots understand human emotions and social signals and tailor their responses accordingly. This can create more engaging and satisfying interactions between humans and robots, which is important for applications such as healthcare, education, and entertainment. Conversational systems can also personalise interactions by adapting to the individual needs, preferences, and characteristics of each user. This is particularly important in applications such as personalised tutoring and coaching, where the effectiveness of the interaction depends on the system's ability to adapt to each user. Natural language interaction offers a way to achieve this, as it is a more intuitive and familiar way for humans to communicate.

Human-Robot Interaction is a complex and multidisciplinary field that requires expertise from multiple domains, including robotics, artificial intelligence, psychology, and human factors. Conversational systems bring together many of these domains and represent a challenging and rewarding area of research that can help advance the state of the art in HRI. Conversational systems for HRI have the potential to transform many areas of society, including healthcare, education and entertainment. Conversational systems can make robots more engaging, usable, and effective in these domains, leading to improved outcomes and quality of life for individuals and society as a whole.

The aim of this special issue is to bring together novel research work in the area of dialogue systems that are designed to enhance/support Human-Robot Interaction (HRI). In the active research area of HRI, the primary goal is to develop robotic agents that exhibit socially intelligent behaviour when interacting with human partners. Despite the clear relationship between social intelligence and fluent, flexible linguistic interaction, in practice interactive robots have only recently begun to use anything beyond a simple dialogue manager and template-based response generation process. This means that robot systems cannot take advantage of the flexibility offered by dialogue systems and NLG when managing conversations between humans and robots in dynamic environments, or when the conversation needs to be adapted in different contexts or multiple target languages.

This special issue aims to provide a forum for researchers and practitioners to share their latest research results, exchange ideas, and discuss the opportunities and challenges in developing conversational systems for human-robot interaction. We hope that this special issue will help to advance the state of the art in the field and inspire further research and development in this exciting area.

Topics of interest:
  • Design and evaluation of conversational systems for human-robot interaction
  • Natural language understanding and generation for human-robot interaction
  • Situated dialogue with robots
  • Contextualization and personalization in conversational systems
  • Emotional and social intelligence in conversational systems
  • Multimodal interaction and fusion of sensory data in conversational systems
  • Ethics, privacy, and security issues in conversational systems for human-robot interaction
  • User studies and user experience evaluation of conversational systems for human-robot interaction
  • Applications of conversational systems in healthcare, education, and entertainment

We invite papers presenting original work, as well as survey papers or substantial opinion papers. All submissions will be peer-reviewed according to the journal's standard guidelines. Manuscripts should be submitted online via the journal's website, referencing the title of the special issue and following the journal's formatting guidelines.

Timetable

Deadline: 1 October 2023
Reviewing period: 15 September–15 December 2023
First Decisions: 30 January 2024
Resubmissions: 1 March 2024
Final decisions: 15 April 2024
Camera-ready: 15 May 2024

Guest Editors

Dimitra Gkatzia, Edinburgh Napier University, UK – d.gkatzia@napier.ac.uk
Carl Strathearn, Edinburgh Napier University, UK – c.strathearn@napier.ac.uk
Mary-Ellen Foster, University of Glasgow, UK – maryellen.foster@glasgow.ac.uk
Hendrik Buschmeier, Bielefeld University, Germany – hbuschme@uni-bielefeld.de

Relevant references

Laurence Devillers, Tatsuya Kawahara, Roger K. Moore, and Matthias Scheutz (2020). Spoken Language Interaction with Virtual Agents and Robots (SLIVAR): Towards Effective and Ethical Interaction (Dagstuhl Seminar 20021). In Dagstuhl Reports, Volume 10, Issue 1, pp. 1-51, Schloss Dagstuhl – Leibniz-Zentrum für Informatik.

Pierre Lison & Casey Kennington (2023). Who's in Charge? Roles and Responsibilities of Decision-Making Components in Conversational Robots. In: HRI 2023 Workshop on Human-Robot Conversational Interaction. http://arxiv.org/abs/2303.08470

Kristiina Jokinen. 2022. Conversational Agents and Robot Interaction. In HCI International 2022 - Late Breaking Papers. Multimodality in Advanced Interaction Environments: 24th International Conference on Human-Computer Interaction, HCII 2022, Virtual Event, June 26 – July 1, 2022, Proceedings. Springer-Verlag, Berlin, Heidelberg, 280–292. https://doi.org/10.1007/978-3-031-17618-0_21

Mary Ellen Foster. 2019. Natural language generation for social robotics: opportunities and challenges. Philosophical Transactions of the Royal Society B, 2019

Dimosthenis Kontogiorgos, Andre Pereira, Boran Sahindal, Sanne van Waveren, Joakim Gustafson. 2020. Behavioural Responses to Robot Conversational Failures. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’20), March 23–26, 2020, Cambridge, United Kingdom. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3319502.3374782

Gabriel Skantze, Turn-taking in Conversational Systems and Human-Robot Interaction: A Review, Computer Speech & Language, Volume 67, 2021, 101178 https://doi.org/10.1016/j.csl.2020.101178.
 
Top

7-11IEEE Open Journal of Signal Processing: long papers of ICASSP 2024.

 

 

The IEEE Open Journal of Signal Processing (OJ-SP) has introduced a Short Papers submission category and review track, with a limit of eight pages plus an additional page for references (8+1). This is intended as an alternative publication venue for authors who would like to present at ICASSP 2024, but who prefer Open Access or a longer paper format than the traditional ICASSP 4+1 format.

 

Short papers submitted to OJ-SP by the ICASSP 2024 submission deadline, 6 September 2023, and that are specifically indicated as being submitted to this review track, will receive an expedited review to ensure that a decision is available in time for them to be included in the ICASSP 2024 program.

 

Accepted manuscripts will be published in OJ-SP and included as part of the ICASSP 2024 program. For additional details of this OJ-SP/ICASSP partnership, visit the ICASSP 2024 website.

OJ-SP was launched in 2020 as a fully open-access publication of the IEEE Signal Processing Society, with a scope encompassing the full range of technical activities of the Society. Manuscripts submitted before the end of 2023, which includes all submissions to the alternative ICASSP review track, are eligible for a discounted Article Processing Charge (APC) of US$995. For more about the APC discount, please visit the OJ-SP homepage.

 

The IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) is a flagship conference of the IEEE Signal Processing Society. ICASSP 2024 will be held in Seoul, Korea, from 14 April to 19 April 2024.

 

Important Dates

Paper submission deadline: September 6, 2023

Paper notification of acceptance: December 13, 2023

Top

7-12Call for papers on prosody in a new journal: Journal of Connected Speech

Submissions are invited for a new journal in the area of connected speech. To submit an article, please go to https://journal.equinoxpub.com/JCS/about/submissions

 

The aim of the Journal of Connected Speech is to provide a platform for the study of connected speech in both its formal and functional aspects (from prosody to discourse analysis). The journal explores issues linked to transcription systems, instrumentation, and data collection methodology, as well as models within broadly functional, cognitive, and psycholinguistic approaches.

 

The journal launches in 2024. See https://journal.equinoxpub.com/index.php/JCS/index

 

If you have any queries, please contact me at m.j.ball@bangor.ac.uk

 

Martin J. Ball, DLitt, PhD, HonFRCSLT, FLSW

Athro er Anrhydedd,

Ysgol Iaith, Diwylliant a'r Celfyddydau,

Prifysgol Bangor, Cymru.

(Hefyd Athro Gwadd, Prifysgol Glyndŵr Wrecsam)

 

Honorary Professor,

School of Arts, Culture and Language,

Bangor University, Wales.

(Also Visiting Professor, Wrexham Glyndŵr University)

Top

7-13Journal on Multimodal User Interfaces (JMUI)

Dear researchers on multimodal interaction,

Hope this email finds you well.

On behalf of the editorial team of the Journal on Multimodal User Interfaces (JMUI),
we are very happy to inform you that our journal has just reached a 2022 Impact Factor of 2.9!

We have the pleasure to invite you to submit articles describing your research work on multimodal interaction to JMUI. The contribution can be in the form of original research articles, review articles, and short communications.

JMUI is a SCIE-indexed journal which provides a platform for research and advancement in the field of multimodal interaction and interfaces. We are particularly interested in high-quality articles that explore different interactive modalities (e.g., gestures, speech, gaze, facial expressions, graphics), their modeling and user-centric design, fusion, software architecture, and usability in different interfaces (e.g., multimodal input, multimodal output, socially interactive agents) and application areas (e.g., education and training, health, users with special needs, mobile interaction). Please check the JMUI website to read articles that we publish (https://www.springer.com/journal/12193).

Submitting your work to the Journal on Multimodal User Interfaces offers several advantages, including rigorous peer review by experts in the field, wide readership and visibility among researchers, and the opportunity to contribute to the advancement of this rapidly evolving domain.
The current average duration from submission to first review is approximately 60 to 90 days.

To submit your manuscripts, please visit our online submission system at Editorial Manager (https://www.editorialmanager.com/jmui/). Should you require any further information, or have any specific questions, please do not hesitate to reach out to us.

Please also note that we welcome special issues on topics related to multimodal interactions and interfaces.

We eagerly look forward to receiving your valuable contributions.

Best regards,

Jean-Claude MARTIN
Professor in Computer Science
Université Paris-Saclay, LISN/CNRS
JMUI Editor-in-Chief

JMUI website: https://www.springer.com/journal/12193
JMUI Facebook page: https://www.facebook.com/jmmui/ (Follow us for updates!)
 

 
Top


© Copyright 2024 - ISCA International Speech Communication Association - All rights reserved.