ISCA - International Speech Communication Association



ISCApad #317

Sunday, November 10, 2024 by Chris Wellekens

7 Journals
7-1Call for papers on Prosody in a new journal: Journal of Connected Speech

Submissions are invited for a new journal in the area of connected speech. To submit articles, please go to https://journal.equinoxpub.com/JCS/about/submissions

 

The aim of the Journal of Connected Speech is to provide a platform for the study of connected speech in both its formal and functional aspects (from prosody to discourse analysis). The journal explores issues linked to transcription systems, instrumentation, and data collection methodology, as well as models within broadly functional, cognitive, and psycholinguistic approaches.

 

The journal launches in 2024. See https://journal.equinoxpub.com/index.php/JCS/index

 

If you have any queries, please contact me at m.j.ball@bangor.ac.uk

 

Martin J. Ball, DLitt, PhD, HonFRCSLT, FLSW


 

Honorary Professor,

School of Arts, Culture and Language,

Bangor University, Wales.

(Also Visiting Professor, Wrexham Glyndŵr University)


7-2Journal on Multimodal User Interfaces (JMUI)

Dear researchers on multimodal interaction,

Hope this email finds you well.

On behalf of the editorial team of the Journal on Multimodal User Interfaces (JMUI),
we are delighted to inform you that our journal has just reached a 2022 Impact Factor of 2.9!

We are pleased to invite you to submit articles describing your research on multimodal interaction to JMUI. Contributions can take the form of original research articles, review articles, or short communications.

JMUI is a SCIE-indexed journal that provides a platform for research and advancement in the field of multimodal interaction and interfaces. We are particularly interested in high-quality articles that explore different interactive modalities (e.g., gestures, speech, gaze, facial expressions, graphics), their modeling and user-centric design, fusion, software architecture, and usability in different interfaces (e.g., multimodal input, multimodal output, socially interactive agents) and application areas (e.g., education and training, health, users with special needs, mobile interaction). Please visit the JMUI website to read the articles we publish (https://www.springer.com/journal/12193).

Submitting your work to the Journal on Multimodal User Interfaces offers several advantages, including rigorous peer review by experts in the field, wide readership and visibility among researchers, and the opportunity to contribute to the advancement of this rapidly evolving domain.
The current average duration from submission to first review is approximately 60 to 90 days.

To submit your manuscripts, please visit our online submission system at Editorial Manager (https://www.editorialmanager.com/jmui/). Should you require any further information, or have any specific questions, please do not hesitate to reach out to us.

Please also note that we welcome special issues on topics related to multimodal interactions and interfaces.

We eagerly look forward to receiving your valuable contributions.

Best regards,

Jean-Claude MARTIN
Professor in Computer Science
Université Paris-Saclay, LISN/CNRS
JMUI Editor-in-Chief

JMUI website: https://www.springer.com/journal/12193
JMUI Facebook page: https://www.facebook.com/jmmui/ (Follow us for updates!)
 

 

7-3Call for contributions to a book on interactions in the production, acoustics, and perception of speech and music, De Gruyter Mouton Publishers.
Dear colleagues!

Our special session on the relationships between speech and music at the ICPhS in Prague attracted great interest and received a very positive, lasting response. We are therefore very pleased to build on this feedback and invite scholars to submit chapter proposals for an edited collection on the intersections and interactions in the production, acoustics, and perception of speech and music, to be published with De Gruyter Mouton in early 2025.

Please find the complete Call for Papers with all details in topics and timeline under the following link: https://cloud.newsletter.degruyter.com/Speech%20Music  – or directly via EasyChair: https://easychair.org/cfp/SLM2023 

Submission Guidelines

1.    Abstract submission
Please submit a 500-word abstract by February 4th, 2024. In your abstract, please state clearly how your work relates to one or more of the above areas of interest, as this will help us structure the volume and invite matching reviewers. All abstracts must be in English. Notification of acceptance of your abstract will be sent by February 11th, 2024.

2.    Full paper submission
Upon acceptance of your abstract, you are required to submit your full paper by June 30th, 2024 (approx. 8000 words, excluding references). To ensure the scientific quality of the volume, all submitted papers will undergo a thorough peer review process. Each manuscript will be reviewed by one of the volume editors and an external reviewer, likely chosen from the pool of contributing authors. The review will focus on assessing relevance, originality, clarity, adherence to the thematic scope, scientific rigor, contribution to the field, methodology, and overall scientific quality. Authors will be given the opportunity to revise their papers in response to the reviewers’ feedback.

We look forward to receiving your contributions, and in the meantime we wish you a happy and healthy pre-Christmas time,

Jianjing Kuang, University of Pennsylvania
Oliver Niebuhr, University of Southern Denmark
(Co-editors)
***************************************


7-4TISMIR Special Collection on Multi-Modal Music Information Retrieval


We have been delighted with the response to this collection. Due to numerous requests for
additional time, we are extending the deadline for all submissions to the 1st of September, to
allow teams a bit more time to polish their manuscripts and ensure high-quality submissions
for this collection.

Extended Deadline for Submissions

01.09.2024

Scope of the Special Collection
Data related to and associated with music can be retrieved from a variety of sources or modalities:
audio tracks; digital scores; lyrics; video clips and concert recordings; artist photos and album covers;
expert annotations and reviews; listener social tags from the Internet; and so on. Essentially, the ways
humans deal with music are very diverse: we listen to it, read reviews, ask friends for
recommendations, enjoy visual performances during concerts, dance and perform rituals, play
musical instruments, or rearrange scores.

As such, it is hardly surprising that multi-modal data have proved so effective in a range
of technical tasks that model human experience and expertise. Previous studies have already
confirmed that music classification scenarios may benefit significantly when several modalities are
taken into account. Other works have focused on cross-modal analysis, e.g., generating a missing
modality from existing ones or aligning information between different modalities.

The current upswing of disruptive artificial intelligence technologies, deep learning, and big data
analytics is quickly changing the world we live in, and inevitably impacts MIR research as well.
Enabling these powerful approaches to learn from very diverse data sources may not only bring
related applications to new levels of quality, robustness, and efficiency, but will also help to
demonstrate and enhance the breadth and interconnected nature of music science research and the
understanding of relationships between different kinds of musical data.

In this special collection, we invite papers on multi-modal systems in all their diversity. We particularly
encourage under-explored repertoire, new connections between fields, and novel research areas.
Contributions consisting of pure algorithmic improvements, empirical studies, theoretical discussions,
surveys, guidelines for future research, and introductions of new data sets are all welcome, as the
special collection will not only address multi-modal MIR, but also cover multi-perspective ideas,
developments, and opinions from diverse scientific communities.

Sample Possible Topics
● State-of-the-art music classification or regression systems which are based on several
modalities
● Deeper analysis of correlation between distinct modalities and features derived from them
● Presentation of new multi-modal data sets, including the possibility of formal analysis and
theoretical discussion of practices for constructing better data sets in future
● Cross-modal analysis, e.g., with the goal of predicting a modality from another one
● Creative and generative AI systems which produce multiple modalities
● Explicit analysis of individual drawbacks and advantages of modalities for specific MIR tasks
● Approaches for training set selection and augmentation techniques for multi-modal classifier
systems
● Applying transfer learning, large language models, and neural architecture search to
multi-modal contexts
● Multi-modal perception, cognition, or neuroscience research
● Multi-objective evaluation of multi-modal MIR systems, e.g., not only focusing on the quality,
but also on robustness, interpretability, or reduction of the environmental impact during the
training of deep neural networks

Guest Editors
● Igor Vatolkin (lead) - Akademischer Rat (Assistant Professor) at the Department of Computer
Science, RWTH Aachen University, Germany
● Mark Gotham - Assistant professor at the Department of Computer Science, Durham
University, UK
● Xiao Hu - Associate professor at the University of Hong Kong
● Cory McKay - Professor of music and humanities at Marianopolis College, Canada
● Rui Pedro Paiva - Professor at the Department of Informatics Engineering of the University of
Coimbra, Portugal

Submission Guidelines
Please submit through https://transactions.ismir.net, and note in your cover letter that your paper is
intended to be part of this Special Collection on Multi-Modal MIR.
Submissions should adhere to the formatting guidelines of the TISMIR journal:
https://transactions.ismir.net/about/submissions/. Specifically, articles must not exceed
8,000 words, including references, citations, and notes.

Please also note that if the paper extends or combines the authors' previously published research, it
is expected that there is a significant novel contribution in the submission (as a rule of thumb, we
would expect at least 50% of the underlying work - the ideas, concepts, methods, results, analysis and
discussion - to be new).

If you are considering submitting to this special collection, it would greatly help our planning if you
let us know by writing to igor.vatolkin@rwth-aachen.de.

Kind regards,
Igor Vatolkin
on behalf of the TISMIR editorial board and the guest editors


7-5ACM Transactions on Intelligent Systems and Technology: Special Issue on Transformers

CALL FOR PAPERS

ACM Transactions on Intelligent Systems and Technology
Special Issue on Transformers

https://dl.acm.org/pb-assets/static_journal_pages/tist/pdf/ACM-TIST-CFP-SI-Transformers-1719857985893.pdf

Editor-in-Chief: Huan Liu, Arizona State University, USA

Guest Editors:
• Feng Xia, RMIT University, Australia
• Tyler Derr, Vanderbilt University, USA
• Luu Anh Tuan, Nanyang Technological University, Singapore
• Richa Singh, IIT Jodhpur, India
• Aline Villavicencio, University of Exeter, United Kingdom

Transformer-based models have emerged as a cornerstone of modern artificial intelligence (AI), reshaping the landscape of machine learning and driving unprecedented progress in a myriad of tasks. Originating from the domain of natural language processing, transformers have transcended their initial applications to become ubiquitous across diverse fields including anomaly detection, computer vision, speech recognition, recommender systems, question answering, robotics, healthcare, education, and more. The impact of transformer models extends far beyond their technical intricacies. For instance, advanced transformers have been successfully applied to multimodal learning tasks, where they can seamlessly integrate information from different modalities such as text, images, audio, and video. This ability opens up new avenues for research in areas like visual question answering, image captioning, and video understanding.

Despite their remarkable success, however, several challenges remain. For example, training large transformer models often requires significant computational resources. Researchers are actively exploring efficient training methods, such as pre-training on massive datasets and knowledge distillation techniques, to address these limitations. Additionally, fostering explainability in transformer models is crucial for understanding their decision-making processes and building trust in real-world applications.

As transformers continue to evolve and permeate various sectors of AI, it becomes increasingly imperative to explore their advancements and applications comprehensively. This special issue seeks to provide a platform for researchers to showcase the latest developments, challenges, and opportunities in the field of transformers across diverse domains, fostering interdisciplinary dialogue and innovation.

Topics
This special issue invites contributions covering a wide range of topics related to advances in transformers. Topics of interest include, but are not limited to:
• Novel architectures and variations of transformer models
• Theoretical insights into transformers
• Efficient training and deployment of large-scale transformer models
• Fine-tuning strategies for pre-trained transformer models
• Interpretability and explainability of transformers
• Trustworthy, safe, and responsible transformers
• Transformers for diverse machine learning tasks
• Transformers for science
• Transformer-based approaches for multimodal learning
• Transformer foundation models and transformer-based generative AI
• Applications of transformers in various domains such as healthcare, education, robotics, etc.
• Ethical considerations and societal impacts of transformer technology

Important Dates
• Submissions deadline: December 1, 2024
• Tentative publication: September 2025

Submission Information
Submissions must be prepared according to the TIST submission guidelines (https://dl.acm.org/journal/tist/author-guidelines) and must be submitted via Manuscript Central (https://mc.manuscriptcentral.com/tist).
Early submissions are encouraged; we will start the review process as soon as we receive a submission.

For questions and further information, please contact Prof. Feng Xia (feng.xia@rmit.edu.au or f.xia@ieee.org).




