
ISCApad #315

Friday, September 13, 2024 by Chris Wellekens

7 Journals
7-1 Special issue: Embodied Conversational Systems for Human-Robot Interaction in the Dialogue & Discourse journal
We are delighted to announce the Special Issue on Embodied Conversational Systems for HRI in the Dialogue & Discourse journal. 
 
Special Issue Title: Embodied Conversational Systems for Human-Robot Interaction
 

Topic Area

Conversational systems such as chatbots and virtual assistants have become increasingly popular in recent years. This technology has the potential to enhance Human-Robot Interaction (HRI) and improve the user experience. However, there are significant challenges in designing and implementing effective conversational systems for HRI that need to be addressed (cf. Devillers et al. 2020; Lison & Kennington 2023). This special issue aims to bring together researchers and practitioners to explore the opportunities and challenges in developing conversational systems for human-robot interaction.

Conversational systems are an important component of human-robot interaction because they enable more natural and intuitive communication between humans and robots. By leveraging research in areas such as dialogue systems, natural language understanding, natural language generation, and multi-modal interaction, robots can become more accessible, usable, and engaging. Conversational systems can also enable robots to better understand and respond to human emotions and social cues: by analysing speech patterns, facial expressions, and other nonverbal signals, they can help robots tailor their responses accordingly. This can create more engaging and satisfying interactions between humans and robots, which is important for applications such as healthcare, education, and entertainment. Conversational systems can also personalise interactions between humans and robots by adapting to the individual needs, preferences, and characteristics of each user, creating tailored interactions that are more likely to be meaningful. This is particularly important in applications such as personalised tutoring and coaching, where the effectiveness of the interaction depends on the system's ability to adapt to the needs of each user. Conversational systems achieve this through natural language interaction, a more intuitive and familiar way for humans to communicate.

Human-Robot Interaction is a complex and multidisciplinary field that requires expertise from multiple domains, including robotics, artificial intelligence, psychology, and human factors. Conversational systems bring together many of these domains and represent a challenging and rewarding area of research that can help advance the state of the art in HRI. Conversational systems for HRI have the potential to transform many areas of society, including healthcare, education and entertainment. Conversational systems can make robots more engaging, usable, and effective in these domains, leading to improved outcomes and quality of life for individuals and society as a whole.

The aim of this special issue is to bring together novel research work in the area of dialogue systems designed to enhance and support Human-Robot Interaction (HRI). In this active research area, the primary goal is to develop robotic agents that exhibit socially intelligent behaviour when interacting with human partners. Despite the clear relationship between social intelligence and fluent, flexible linguistic interaction, in practice interactive robots have only recently begun to use anything beyond a simple dialogue manager and template-based response generation. This means that robot systems cannot take advantage of the flexibility offered by dialogue systems and natural language generation when managing conversations between humans and robots in dynamic environments, or when the conversation needs to be adapted to different contexts or to multiple target languages.

This special issue aims to provide a forum for researchers and practitioners to share their latest research results, exchange ideas, and discuss the opportunities and challenges in developing conversational systems for human-robot interaction. We hope that this special issue will help to advance the state of the art in the field and inspire further research and development in this exciting area.

Topics of interest:
  • Design and evaluation of conversational systems for human-robot interaction
  • Natural language understanding and generation for human-robot interaction
  • Situated dialogue with robots
  • Contextualization and personalization in conversational systems
  • Emotional and social intelligence in conversational systems
  • Multimodal interaction and fusion of sensory data in conversational systems
  • Ethics, privacy, and security issues in conversational systems for human-robot interaction
  • User studies and user experience evaluation of conversational systems for human-robot interaction
  • Applications of conversational systems in healthcare, education, and entertainment

We invite papers presenting original work, as well as survey papers or substantial opinion papers. All submissions will be peer-reviewed according to the journal's standard guidelines. Manuscripts should be submitted online via the journal's website, referencing the title of the special issue and following the journal's formatting guidelines.

Timetable

Submission deadline: 1 October 2023
Reviewing period: 15 September – 15 December 2023
First decisions: 30 January 2024
Resubmissions: 1 March 2024
Final decisions: 15 April 2024
Camera-ready: 15 May 2024

Guest Editors

Dimitra Gkatzia, Edinburgh Napier University, UK – d.gkatzia@napier.ac.uk
Carl Strathearn, Edinburgh Napier University, UK – c.strathearn@napier.ac.uk
Mary-Ellen Foster, University of Glasgow, UK – maryellen.foster@glasgow.ac.uk
Hendrik Buschmeier, Bielefeld University, Germany – hbuschme@uni-bielefeld.de

Relevant references

Laurence Devillers, Tatsuya Kawahara, Roger K. Moore, and Matthias Scheutz (2020). Spoken Language Interaction with Virtual Agents and Robots (SLIVAR): Towards Effective and Ethical Interaction (Dagstuhl Seminar 20021). Dagstuhl Reports, 10(1), 1–51. Schloss Dagstuhl – Leibniz-Zentrum für Informatik.

Pierre Lison and Casey Kennington (2023). Who’s in Charge? Roles and Responsibilities of Decision-Making Components in Conversational Robots. In: HRI 2023 Workshop on Human-Robot Conversational Interaction. http://arxiv.org/abs/2303.08470

Kristiina Jokinen (2022). Conversational Agents and Robot Interaction. In: HCI International 2022 – Late Breaking Papers. Multimodality in Advanced Interaction Environments, HCII 2022, pp. 280–292. Springer-Verlag, Berlin, Heidelberg. https://doi.org/10.1007/978-3-031-17618-0_21

Mary Ellen Foster (2019). Natural language generation for social robotics: opportunities and challenges. Philosophical Transactions of the Royal Society B.

Dimosthenis Kontogiorgos, Andre Pereira, Boran Sahindal, Sanne van Waveren, and Joakim Gustafson (2020). Behavioural Responses to Robot Conversational Failures. In: Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (HRI ’20), Cambridge, United Kingdom. ACM, New York, NY, USA. https://doi.org/10.1145/3319502.3374782

Gabriel Skantze (2021). Turn-taking in Conversational Systems and Human-Robot Interaction: A Review. Computer Speech & Language, 67, 101178. https://doi.org/10.1016/j.csl.2020.101178
 

7-2 Call for papers on prosody in a new journal: Journal of Connected Speech

Submissions are invited for a new journal in the area of connected speech. To submit articles, please go to https://journal.equinoxpub.com/JCS/about/submissions

 

The aim of the Journal of Connected Speech is to provide a platform for the study of connected speech in both its formal and functional aspects (from prosody to discourse analysis). The journal explores issues linked to transcription systems, instrumentation, and data collection methodology, as well as models within broadly functional, cognitive, and psycholinguistic approaches.

 

The journal launches in 2024. See https://journal.equinoxpub.com/index.php/JCS/index

 

If you have any queries, please contact me at m.j.ball@bangor.ac.uk

 

Martin J. Ball, DLitt, PhD, HonFRCSLT, FLSW

Honorary Professor,

School of Arts, Culture and Language,

Bangor University, Wales.

(Also Visiting Professor, Wrexham Glyndŵr University)


7-3 Journal on Multimodal User Interfaces (JMUI)

Dear researchers working on multimodal interaction,

Hope this email finds you well.

On behalf of the editorial team of the Journal on Multimodal User Interfaces (JMUI), we are pleased to inform you that our journal has just reached a 2022 Impact Factor of 2.9!

We have the pleasure of inviting you to submit articles describing your research on multimodal interaction to JMUI. Contributions can take the form of original research articles, review articles, or short communications.

JMUI is an SCIE-indexed journal which provides a platform for research and advancement in the field of multimodal interaction and interfaces. We are particularly interested in high-quality articles that explore different interactive modalities (e.g., gestures, speech, gaze, facial expressions, graphics), their modeling and user-centric design, fusion, software architecture, and usability in different interfaces (e.g., multimodal input, multimodal output, socially interactive agents) and application areas (e.g., education and training, health, users with special needs, mobile interaction). Please check the JMUI website to read the articles that we publish (https://www.springer.com/journal/12193).

Submitting your work to the Journal on Multimodal User Interfaces offers several advantages, including rigorous peer review by experts in the field, wide readership and visibility among researchers, and the opportunity to contribute to the advancement of this rapidly evolving domain.
The current average duration from submission to first review is approximately 60 to 90 days.

To submit your manuscripts, please visit our online submission system at Editorial Manager (https://www.editorialmanager.com/jmui/). Should you require any further information, or have any specific questions, please do not hesitate to reach out to us.

Please also note that we welcome special issues on topics related to multimodal interactions and interfaces.

We eagerly look forward to receiving your valuable contributions.

Best regards,

Jean-Claude MARTIN
Professor in Computer Science
Université Paris-Saclay, LISN/CNRS
JMUI Editor-in-Chief

JMUI website: https://www.springer.com/journal/12193
JMUI Facebook page: https://www.facebook.com/jmmui/ (Follow us for updates!)
 

 

7-4 Call for contributions to a book on interactions in the production, acoustics, and perception of speech and music, De Gruyter Mouton Publishers
Dear colleagues!

Our special session on the relationships between speech and music at the ICPhS in Prague aroused great interest and received a very positive and lasting response. We are therefore very pleased to build on this feedback and invite scholars to submit chapter proposals for an edited collection on the intersections and interactions in the production, acoustics, and perception of speech and music, to be published with De Gruyter Mouton in early 2025.

Please find the complete Call for Papers, with all details on topics and timeline, under the following link: https://cloud.newsletter.degruyter.com/Speech%20Music – or directly via EasyChair: https://easychair.org/cfp/SLM2023

Submission Guidelines

1.    Abstract submission
Please submit a 500-word abstract by February 4th, 2024. In your abstract, please clearly state how your work relates to one or more of the above areas of interest, as this will help us structure the volume and invite matching reviewers. All abstracts must be in English. Notification of acceptance of your abstract will be sent by February 11th, 2024.

2.    Full paper submission
Upon acceptance of your abstract, you are required to submit your full paper by June 30th, 2024 (approx. 8000 words, excluding references). To ensure the scientific quality of the volume, all submitted papers will undergo a thorough peer review process. Each manuscript will be reviewed by one of the volume editors and an external reviewer, likely chosen from the pool of contributing authors. The review will focus on assessing relevance, originality, clarity, adherence to the thematic scope, scientific rigor, contribution to the field, methodology, and overall scientific quality. Authors will be given the opportunity to revise their papers in response to the reviewers’ feedback.

We look forward to receiving your contributions, and in the meantime we wish you a happy and healthy pre-Christmas time,

Jianjing Kuang, University of Pennsylvania
Oliver Niebuhr, University of Southern Denmark
(Co-editors)
***************************************


7-5 TISMIR Special Collection on Multi-Modal Music Information Retrieval


We have been delighted with the response to this collection and, due to numerous requests for additional time, we are extending the deadline for all submissions to 1 September 2024. This allows teams a bit more time to polish their manuscripts and ensure high-quality submissions for this collection.

Extended Deadline for Submissions

1 September 2024

Scope of the Special Collection
Data related to and associated with music can be retrieved from a variety of sources or modalities: audio tracks; digital scores; lyrics; video clips and concert recordings; artist photos and album covers; expert annotations and reviews; listener social tags from the Internet; and so on. Essentially, the ways humans deal with music are very diverse: we listen to it, read reviews, ask friends for recommendations, enjoy visual performances during concerts, dance and perform rituals, play musical instruments, or rearrange scores.

As such, it is hardly surprising that multi-modal data have proven so effective in a range of technical tasks that model human experience and expertise. Previous studies have already confirmed that music classification scenarios may benefit significantly when several modalities are taken into account. Other works have focused on cross-modal analysis, e.g., generating a missing modality from existing ones or aligning information between different modalities.

The current upswing of disruptive artificial intelligence technologies, deep learning, and big data analytics is quickly changing the world we live in, and inevitably impacts MIR research as well. Facilitating the ability to learn from very diverse data sources by means of these powerful approaches may not only bring related applications to new levels of quality, robustness, and efficiency, but may also help to demonstrate and enhance the breadth and interconnected nature of music science research and the understanding of relationships between different kinds of musical data.

In this special collection, we invite papers on multi-modal systems in all their diversity. We particularly encourage under-explored repertoire, new connections between fields, and novel research areas. Contributions consisting of pure algorithmic improvements, empirical studies, theoretical discussions, surveys, guidelines for future research, and introductions of new data sets are all welcome, as the special collection will not only address multi-modal MIR, but also cover multi-perspective ideas, developments, and opinions from diverse scientific communities.

Possible Topics
● State-of-the-art music classification or regression systems which are based on several modalities
● Deeper analysis of correlation between distinct modalities and features derived from them
● Presentation of new multi-modal data sets, including the possibility of formal analysis and theoretical discussion of practices for constructing better data sets in future
● Cross-modal analysis, e.g., with the goal of predicting a modality from another one
● Creative and generative AI systems which produce multiple modalities
● Explicit analysis of individual drawbacks and advantages of modalities for specific MIR tasks
● Approaches for training set selection and augmentation techniques for multi-modal classifier systems
● Applying transfer learning, large language models, and neural architecture search to multi-modal contexts
● Multi-modal perception, cognition, or neuroscience research
● Multi-objective evaluation of multi-modal MIR systems, e.g., not only focusing on the quality, but also on robustness, interpretability, or reduction of the environmental impact during the training of deep neural networks

Guest Editors
● Igor Vatolkin (lead) - Akademischer Rat (Assistant Professor) at the Department of Computer Science, RWTH Aachen University, Germany
● Mark Gotham - Assistant Professor at the Department of Computer Science, Durham University, UK
● Xiao Hu - Associate Professor at the University of Hong Kong
● Cory McKay - Professor of Music and Humanities at Marianopolis College, Canada
● Rui Pedro Paiva - Professor at the Department of Informatics Engineering of the University of Coimbra, Portugal

Submission Guidelines
Please submit through https://transactions.ismir.net, and note in your cover letter that your paper is intended to be part of this Special Collection on Multi-Modal MIR.
Submissions should adhere to the formatting guidelines of the TISMIR journal: https://transactions.ismir.net/about/submissions/. Specifically, articles must not exceed 8,000 words, including references, citations, and notes.

Please also note that if the paper extends or combines the authors' previously published research, it is expected that there is a significant novel contribution in the submission (as a rule of thumb, we would expect at least 50% of the underlying work - the ideas, concepts, methods, results, analysis and discussion - to be new).

If you are considering submitting to this special issue, it would greatly help our planning if you let us know by writing to igor.vatolkin@rwth-aachen.de.

Kind regards,
Igor Vatolkin
on behalf of the TISMIR editorial board and the guest editors



