ISCApad #303
Thursday, September 07, 2023, by Chris Wellekens
7-1 | IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP) Special Issue: Speech and Language Technologies for Low-Resource Languages
7-2 | CfP IEEE-JSTSP Special Issue on Near-Field Signal Processing: Algorithms, Implementations and Applications
7-3 | CfP MDPI: Special Issue on Prosody and Immigration in Languages
Dear Speech Prosody SIG members,
Rajiv Rao is inviting submissions for a special issue on Prosody and Immigration in Languages, an MDPI journal. Please see https://www.mdpi.com/journal/languages/special_issues/DTB64LM303 for details.
Research on minority immigrant languages has gained significant traction over the last decade and more, primarily due to a substantial body of research on heritage languages (e.g., Montrul, 2015; Polinsky, 2018; Polinsky & Montrul, 2021; among many others). Developments in the phonetics and phonology of heritage languages have lagged behind those in other linguistic areas, but recent years have seen significant growth in work on sound systems as well (see, e.g., Chang, 2021; Rao, 2016, in press), especially in North America, thanks in large part to research on Spanish in the US (for an overview, see Rao, 2019) and to studies based on the Heritage Language Variation and Change Corpus in Toronto (Nagy, 2011).
However, within the fields of heritage (and, more generally, minority immigrant language) phonetics and phonology, prosody remains relatively understudied. Within the realm of immigrant language prosody, we still know very little about issues such as cross-generational change, longitudinal outcomes, child versus adolescent versus adult data, older first-generation immigrants who have resided in the host country for multiple decades versus monolingual homeland speakers, the role of source input varieties, the influence of a wide range of social (level of education, age, gender, rural versus urban settings, etc.) and affective (e.g., attitudes, emotions, motivation) variables, speech rhythm, intonation across a variety of pragmatic contexts, variation in lexical tone, speakers of such languages outside of North America, and the effects of minority language prosody on local majority varieties (by no means is this an exhaustive list).
The goal of this Special Issue is to fill existing gaps in the literature on prosody by addressing the topics listed above (among other possibilities), while highlighting the need for more comparisons between first-generation immigrants and homeland speakers, as well as wider coverage of languages and geographies in general (e.g., Calhoun, 2015 versus Calhoun et al., in press, for data based in Oceania). Finally, this Special Issue complements others hosted by Languages: given that prosody is a key component of human communication (e.g., Gussenhoven & Chen, 2020) and that language and cultural contact caused by international movement are pervasive in many regions of the world, learning more about the interaction of these two concepts is important, not only to build on the recent growth in work on heritage language sound systems, but also to gain a deeper understanding of the underpinnings of prosodic variation (for a recent contribution to this area, see Armstrong et al., 2022).
We request that, prior to submitting a manuscript, interested authors first submit a proposed title and an abstract of 400–600 words summarizing their intended contribution. Please send it to the Guest Editor (Rajiv Rao; rgrao@wisc.edu) or to the Languages Editorial Office (languages@mdpi.com). Abstracts will be reviewed by the Guest Editor to ensure proper fit within the scope of the Special Issue. Full manuscripts will undergo double-blind peer review. Tentative completion schedule:
7-4 | Revue TAL Special Issue: Explainability of Natural Language Processing Models
7-5 | Special Issue: Embodied Conversational Systems for Human-Robot Interaction in the Dialogue & Discourse journal
We are delighted to announce the Special Issue on Embodied Conversational Systems for Human-Robot Interaction (HRI) in the Dialogue & Discourse journal.
Special Issue Title: Embodied Conversational Systems for Human-Robot Interaction
Submission Link: http://www.dialogue-and-discourse.org/index.shtml
Topic Area
Conversational systems such as chatbots and virtual assistants have become increasingly popular in recent years. This technology has the potential to enhance Human-Robot Interaction (HRI) and improve the user experience. However, there are significant challenges in designing and implementing effective conversational systems for HRI that need to be addressed (cf. Devillers et al. 2020; Lison & Kennington 2023). This special issue aims to bring together researchers and practitioners to explore the opportunities and challenges in developing conversational systems for human-robot interaction.
Conversational systems are an important component of human-robot interaction because they enable more natural and intuitive communication between humans and robots. By leveraging research in areas such as dialogue systems, natural language understanding, natural language generation and multimodal interaction, robots can become more accessible, usable, and engaging. Conversational systems can enable robots to better understand and respond to human emotions and social cues. By analysing speech patterns, facial expressions, and other nonverbal cues, conversational systems can help robots to better understand human emotions and tailor their responses accordingly. This can help to create more engaging and satisfying interactions between humans and robots, which is important for applications such as healthcare, education, and entertainment.
Conversational systems can also help to personalise interactions between humans and robots by adapting to the individual needs, preferences, and characteristics of each user, creating tailored interactions that are more likely to be meaningful. This can be particularly important in applications such as personalised tutoring and coaching, where the effectiveness of the interaction depends on the ability of the system to adapt to the individual needs of each user. Conversational systems offer a way to achieve this by enabling natural language interaction, which is a more intuitive and familiar way for humans to communicate.
Human-Robot Interaction is a complex and multidisciplinary field that requires expertise from multiple domains, including robotics, artificial intelligence, psychology, and human factors. Conversational systems bring together many of these domains and represent a challenging and rewarding area of research that can help advance the state of the art in HRI. Conversational systems for HRI have the potential to transform many areas of society, including healthcare, education and entertainment, making robots more engaging, usable, and effective in these domains and leading to improved outcomes and quality of life for individuals and society as a whole.
The aim of this special issue is to bring together novel research work in the area of dialogue systems designed to enhance and support Human-Robot Interaction. In this active research area, the primary goal is to develop robotic agents that exhibit socially intelligent behaviour when interacting with human partners. Despite the clear relationship between social intelligence and fluent, flexible linguistic interaction, in practice interactive robots have only recently begun to use anything beyond a simple dialogue manager and a template-based response generation process. This means that robot systems cannot take advantage of the flexibility offered by dialogue systems and NLG when managing conversations between humans and robots in dynamic environments, or when the conversation needs to be adapted to different contexts or multiple target languages.
This special issue aims to provide a forum for researchers and practitioners to share their latest research results, exchange ideas, and discuss the opportunities and challenges in developing conversational systems for human-robot interaction. We hope that this special issue will help to advance the state of the art in the field and inspire further research and development in this exciting area.
Topics of interest:
We invite papers presenting original work, as well as survey papers or substantial opinion papers. All submissions will be peer-reviewed according to the journal's standard guidelines. Manuscripts should be submitted online via the journal's website, referencing the title of the special issue and following the journal's formatting guidelines.
Timetable
Deadline: 1 October 2023
Reviewing period: 15 September–15 December 2023
First decisions: 30 January 2024
Resubmissions: 1 March 2024
Final decisions: 15 April 2024
Camera-ready: 15 May 2024
Guest Editors
Dimitra Gkatzia, Edinburgh Napier University, UK – d.gkatzia@napier.ac.uk
Carl Strathearn, Edinburgh Napier University, UK – c.strathearn@napier.ac.uk
Mary-Ellen Foster, University of Glasgow, UK – maryellen.foster@glasgow.ac.uk
Hendrik Buschmeier, Bielefeld University, Germany – hbuschme@uni-bielefeld.de
Relevant references
Laurence Devillers, Tatsuya Kawahara, Roger K. Moore, and Matthias Scheutz (2020). Spoken Language Interaction with Virtual Agents and Robots (SLIVAR): Towards Effective and Ethical Interaction (Dagstuhl Seminar 20021). Dagstuhl Reports, Volume 10, Issue 1, pp. 1–51, Schloss Dagstuhl – Leibniz-Zentrum für Informatik.
Pierre Lison and Casey Kennington (2023). Who's in Charge? Roles and Responsibilities of Decision-Making Components in Conversational Robots. In: HRI 2023 Workshop on Human-Robot Conversational Interaction. http://arxiv.org/abs/2303.08470
Kristiina Jokinen (2022). Conversational Agents and Robot Interaction. In HCI International 2022 – Late Breaking Papers. Multimodality in Advanced Interaction Environments: 24th International Conference on Human-Computer Interaction, HCII 2022, Virtual Event, June 26 – July 1, 2022, Proceedings. Springer-Verlag, Berlin, Heidelberg, 280–292. https://doi.org/10.1007/978-3-031-17618-0_21
Mary Ellen Foster (2019). Natural language generation for social robotics: opportunities and challenges. Philosophical Transactions of the Royal Society B.
Dimosthenis Kontogiorgos, Andre Pereira, Boran Sahindal, Sanne van Waveren, and Joakim Gustafson (2020). Behavioural Responses to Robot Conversational Failures. In Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction (HRI '20), March 23–26, 2020, Cambridge, United Kingdom. ACM, New York, NY, USA, 10 pages. https://doi.org/10.1145/3319502.3374782
Gabriel Skantze (2021). Turn-taking in Conversational Systems and Human-Robot Interaction: A Review. Computer Speech & Language, Volume 67, 101178. https://doi.org/10.1016/j.csl.2020.101178
7-6 | IEEE Open Journal of Signal Processing: long papers for ICASSP 2024.
7-7 | Call for papers on Prosody in a new journal: Journal of Connected Speech
Submissions are invited for a new journal in the area of connected speech. To submit articles please go to https://journal.equinoxpub.com/JCS/about/submissions
The aim of the Journal of Connected Speech is to provide a platform for the study of connected speech in both its formal and functional aspects (from prosody to discourse analysis). The journal explores issues linked to transcription systems, instrumentation, and data collection methodology, as well as models within broadly functional, cognitive, and psycholinguistic approaches.
The journal launches in 2024. See https://journal.equinoxpub.com/index.php/JCS/index
If you have any queries, please contact me at m.j.ball@bangor.ac.uk
Martin J. Ball, DLitt, PhD, HonFRCSLT, FLSW
Honorary Professor, School of Arts, Culture and Language, Bangor University, Wales. (Also Visiting Professor, Wrexham Glyndŵr University)
7-8 | Journal on Multimodal User Interfaces (JMUI)
Dear researchers on multimodal interaction,
We hope this email finds you well. On behalf of the editorial team of the Journal on Multimodal User Interfaces (JMUI), we are happy to inform you that our journal has just reached a 2022 Impact Factor of 2.9! We have the pleasure of inviting you to submit articles describing your research work on multimodal interaction to JMUI. Contributions can take the form of original research articles, review articles, and short communications.
JMUI is an SCIE-indexed journal which provides a platform for research and advancement in the field of multimodal interaction and interfaces. We are particularly interested in high-quality articles that explore different interactive modalities (e.g., gestures, speech, gaze, facial expressions, graphics), their modeling and user-centric design, fusion, software architecture and usability in different interfaces (e.g., multimodal input, multimodal output, socially interactive agents) and application areas (e.g., education and training, health, users with special needs, mobile interaction). Please check the JMUI website to read articles that we publish (https://www.springer.com/journal/12193).
Submitting your work to the Journal on Multimodal User Interfaces offers several advantages, including rigorous peer review by experts in the field, wide readership and visibility among researchers, and the opportunity to contribute to the advancement of this rapidly evolving domain. The current average duration from submission to first review is approximately 60 to 90 days. To submit your manuscripts, please visit our online submission system at Editorial Manager (https://www.editorialmanager.com/jmui/). Should you require any further information, or have any specific questions, please do not hesitate to reach out to us. Please also note that we welcome special issues on topics related to multimodal interaction and interfaces.
We eagerly look forward to receiving your valuable contributions.
Best regards,
Jean-Claude MARTIN, Professor in Computer Science
Université Paris-Saclay, LISN/CNRS
JMUI Editor-in-Chief
JMUI website: https://www.springer.com/journal/12193
JMUI Facebook page: https://www.facebook.com/jmmui/ (Follow us for updates!)