ISCA - International Speech
Communication Association



ISCApad #296

Tuesday, February 07, 2023 by Chris Wellekens

7 Journals
7-1 Special Issue at the IEEE Transactions on Multimedia: 'Pre-trained Models for Multi-Modality Understanding'

Dear Colleagues,

We are organizing a Special Issue at the IEEE Transactions on Multimedia:
'Pre-trained Models for Multi-Modality Understanding'

Submission deadline: January 15, 2023
First Review: April 1, 2023
Revisions due: June 1, 2023
Second Review: August 15, 2023
Final Manuscripts: September 15, 2023
Publication date: September 30, 2023

For more details please visit our CFP website at:
https://signalprocessingsociety.org/sites/default/files/uploads/special_issues_deadlines/TMM_SI_pre_trained.pdf

Best regards,
Zakia Hammal


7-2 CfP Special Issue 'Sensor-Based Approaches to Understanding Human Behavior'
 
Open-Access Journal 'Sensors' (ISSN 1424-8220). Impact factor: 3.847
 
 
Deadline for manuscript submissions: 10 June 2023
 
Guest editor:
Oliver Niebuhr
Associate Professor of Communication Technology
Head of the Acoustics LAB
Centre for Industrial Electronics
University of Southern Denmark, Sonderborg
 
 
 
Motivation of the special issue
-------------------------------
Research is currently at a point at which the development of cheap and powerful sensor technology coincides with increasingly complex multimodal analysis of human behavior. In other words, we are at a point where the application of existing sensor technology, as well as the development of new sensor technology, can significantly advance our understanding of human behavior. Our special issue takes up this momentum and is intended to bring together, for the first time, strongly cross-disciplinary research under one roof, so that the individual disciplines can inform, inspire, and stimulate each other. Accordingly, human behavior is meant in a very broad sense and is supposed to include, besides speech behavior, the manifestations of emotions and moods, body language, body movements (including sports), pain, stress, health issues related to behavior or behavior changes, behavior towards robots or machines in general, etc. Examples of relevant sensor technologies are EEG, EMG, SCR (skin-conductance response), heart-rate and breathing sensors, EGG (electroglottogram), tactile/pressure sensors, gyroscopes and/or accelerometers capturing body movements, VR/AR-related sensor technologies, pupillometry, etc. Besides empirical contributions addressing current questions, outlines of future trends, review articles, applications of AI to sensor technology, and presentations of new sensor technologies are very welcome too.
 
Submission information
-----------------------
Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on the MDPI website.
 
Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.
 
Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

7-3 Call for papers for issue 39 (2023) of the journal TIPA

Call for papers for issue 39 (2023) of the journal TIPA, entitled:

 

Discourse, literacy and digital literature: what creative and didactic challenges?

 

Call for papers: https://journals.openedition.org/tipa/6064

 

Submission deadline: 1 February 2023

 


7-4 Études créoles

We are very pleased to announce the publication of Études créoles on the OpenEdition online journals platform: https://journals.openedition.org/etudescreoles.

The journal Études créoles publishes linguistic analyses of creole languages, as well as work on the history, anthropology, literatures, and cultures of the creole worlds.

It was published in print at the Université de Provence from 1978 to 2010. Since 2015, it has been edited by the Laboratoire Parole et Langage (LPL), a joint research unit (UMR) of the CNRS and Aix-Marseille Université, in an open-access electronic version.

Following the acceptance of its application to OpenEdition and the award of a grant from the French National Fund for Open Science in 2021, we were able to complete the transition to OpenEdition. Thanks to its new hosting, the journal will now benefit from optimal indexing.

We wish you pleasant reading and look forward to new article submissions.

The editorial team of Études créoles


7-5 CfP IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP): Special Issue on Speech and Language Technologies for Low-resource Languages

Call for Papers
IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP)

TASLP Special Issue on

Speech and Language Technologies for Low-resource Languages

 
Speech and language processing is a multi-disciplinary research area that covers various aspects of natural language processing and computational linguistics. Speech and language technologies study methods and tools for developing innovative paradigms for processing human language (speech and writing) so that it can be recognized by machines, building on remarkable advances in machine learning and artificial intelligence techniques that effectively interpret speech and textual sources. In general, speech technologies comprise a series of artificial intelligence algorithms that enable a computer system to produce, analyze, modify, and respond to human speech and text. They establish a more natural interaction between humans and computers, support translation between human languages, and enable effective analysis of text and speech. These techniques have significant applications in computational linguistics, natural language processing, computer science, mathematics, speech processing, machine learning, and acoustics. Another important application is machine translation of text and voice.
 
A huge gap exists between speech and language processing for high-resource and low-resource languages, since the latter have far fewer computational resources. With access to the vast amount of material available from digital sources, we could resolve numerous language processing problems in real time with enhanced user experience and productivity. Speech and language processing technologies for low-resource languages are still in their infancy. Research in this area will increase the likelihood of these languages becoming an active part of our lives, as their importance is paramount. Furthermore, the societal shift towards digital media, together with spectacular advances in processing power, computational storage, and software capabilities, opens the prospect of turning low-resource language resources into efficient computing models.

This special issue aims to explore speech and language processing technologies and novel computational models for processing speech, text, and language. The solicited novel and innovative solutions focus on content production, knowledge management, and natural communication for low-resource languages. We welcome researchers and practitioners working in speech and language processing to present their novel and innovative research contributions to this special issue.
 

Topics of Interest


Topics of interest for this special issue include (but are not limited to):
  • Artificial intelligence-assisted speech and language technologies for low-resource languages
  • Pragmatics for low-resource languages
  • Emerging trends in knowledge representation for low-resource languages
  • Machine translation for low-resource language processing
  • Sentiment and statistical analysis for low-resource languages
  • Automatic speech recognition and speech technology for low-resource languages
  • Multimodal analysis for low-resource languages
  • Information retrieval and extraction for low-resource languages
  • Argument mining for low-resource language processing
  • Text summarization and speech synthesis
  • Sentence-level semantics for speech recognition

Submission Guidelines


Manuscripts should be submitted through the Manuscript Central system

 

Important Dates

  • Manuscript Submissions due:  29 December 2022
  • Authors notification: 10 February 2023
  • Revised version submission: 15 April 2023
  • Final decision notification: 20 June 2023

 


 

 

Guest Editors


7-6 CfP Special Issue of Advanced Robotics on Multimodal Processing and Robotics for Dialogue Systems

Call for Papers
Advanced Robotics Special Issue on
Multimodal Processing and Robotics for Dialogue Systems
Co-Editors:
Prof. David Traum (University of Southern California, USA)
Prof. Gabriel Skantze (KTH Royal Institute of Technology, Sweden)
Prof. Hiromitsu Nishizaki (University of Yamanashi, Japan)
Prof. Ryuichiro Higashinaka (Nagoya University, Japan)
Dr. Takashi Minato (RIKEN/ATR, Japan)
Prof. Takayuki Nagai (Osaka University, Japan)
               
Publication in Vol. 37, Issue 21 (Nov 2023)
SUBMISSION DEADLINE: 31 January 2023

In recent years, as seen in smart speakers such as Google Home and Amazon Alexa, there has been remarkable progress in spoken dialogue systems that converse with users in human-like utterances. In the future, such dialogue systems are expected to support our daily activities in various ways. However, dialogue in daily activities is more complex than that with smart speakers; even with current spoken dialogue technology, it is still difficult to maintain a successful dialogue in every situation. For example, in customer service through dialogue, operators must respond appropriately to the different ways of speaking and the requests of various customers. In such cases, we humans can adapt our speaking manner to the type of customer, and we carry out the dialogue successfully using not only our voice but also our gaze and facial expressions.
This type of human-like interaction is far from possible with existing spoken dialogue systems. Humanoid robots have the potential to realize such interaction, because they can recognize not only the user's voice but also facial expressions and gestures using various sensors, and can express themselves in many ways, such as gestures and facial expressions, using their bodies. These many means of expression have the potential to sustain dialogue in a manner different from conventional dialogue systems.
The combination of such robots and dialogue systems can greatly expand the possibilities of dialogue systems, while at the same time providing a variety of new challenges. Various research and development efforts are currently underway to address these new challenges, including the 'dialogue robot competition' at IROS 2022.
In this special issue, we invite a wide range of papers on multimodal dialogue systems and dialogue robots, their applications, and fundamental research. Prospective contributed papers are invited to cover, but are not limited to, the following topics on multimodal dialogue systems and robots:
 
*Spoken dialogue processing
*Multimodal processing
*Speech recognition
*Text-to-speech
*Emotion recognition
*Motion generation
*Facial expression generation
*System architecture
*Natural language processing
*Knowledge representation
*Benchmarking
*Evaluation method
*Ethics
*Dialogue systems and robots for competition
Submission:
The full-length manuscript (either PDF file or MS word file) should be sent by 31st Jan 2023 to the office of Advanced Robotics, the Robotics Society of Japan through the on-line submission system of the journal (https://www.rsj.or.jp/AR/submission). Sample manuscript templates and detailed instructions for authors are available at the website of the journal.
Note that word count includes references. Captions and author bios are not included.
For special issues, longer papers can be accepted if the editors approve.
Please contact the editors before the submission if your manuscript exceeds the word limit.


7-7 CfP ACM Transactions on Multimedia Computing, Communications, and Applications: Special Issue on Realistic Synthetic Data: Generation, Learning, Evaluation

ACM Transactions on Multimedia Computing, Communications, and Applications
Special Issue on Realistic Synthetic Data: Generation, Learning, Evaluation
Impact Factor 4.094
https://mc.manuscriptcentral.com/tomm
Submission deadline: 31 March 2023


*** CALL FOR PAPERS ***

[Guest Editors]
Bogdan Ionescu, Universitatea Politehnica din Bucuresti, România
Ioannis Patras, Queen Mary University of London, UK
Henning Muller, University of Applied Sciences Western Switzerland, Switzerland
Alberto Del Bimbo, Università degli Studi di Firenze, Italy

[Scope]
In the current context of Machine Learning (ML) and Deep Learning
(DL), data and especially high-quality data are central for ensuring
proper training of the networks. It is well known that DL models
require an important quantity of annotated data to be able to reach
their full potential. Annotating content for models is traditionally
done by human experts or at least by typical users, e.g., via
crowdsourcing. This is a tedious task that is time-consuming and
expensive -- massive resources are required, content has to be curated,
and so on. Moreover, there are specific domains where data
confidentiality makes this process even more challenging, e.g., in the
medical domain, where patient data cannot easily be made publicly
available.

With the advancement of neural generative models such as Generative
Adversarial Networks (GAN), or, recently diffusion models, a promising
way of solving or alleviating such problems that are associated with
the need for domain specific annotated data is to go toward realistic
synthetic data generation. These data are generated by learning
specific characteristics of different classes of target data. The
advantage is that these networks would allow for infinite variations
within those classes while producing realistic outcomes, typically
hard to distinguish from the real data. These data have no proprietary
or confidentiality restrictions and seem a viable solution to generate
new datasets or augment existing ones. Existing work shows very
promising results for signal generation, images, etc.

Nevertheless, there are some limitations that need to be overcome so
as to advance the field. For instance, how can one control/manipulate
the latent codes of GANs, or the diffusion process, so as to produce
in the output the desired classes and the desired variations like real
data? In many cases, results are not of high quality and selection
should be made by the user, which is like manual annotation. Bias may
intervene in the generation process due to the bias in the input
dataset. Are the networks trustworthy? Is the generated content
violating data privacy? In some cases one can predict based on a
generated image the actual data source used for training the network.
Would it be possible to train the networks to produce new classes and
learn causality of the data? How do we objectively assess the quality
of the generated data? These are just a few open research questions.

[Topics]
In this context, the special issue is seeking innovative algorithms
and approaches addressing the following topics (but is not limited
to):
- Synthetic data for various modalities, e.g., signals, images,
volumes, audio, etc.
- Controllable generation for learning from synthetic data.
- Transfer learning and generalization of models.
- Causality in data generation.
- Addressing bias, limitations, and trustworthiness in data generation.
- Evaluation measures/protocols and benchmarks to assess quality of
synthetic content.
- Open synthetic datasets and software tools.
- Ethical aspects of synthetic data.

[Important Dates]
- Submission deadline: 31 March 2023
- First-round review decisions: 30 June 2023
- Deadline for revised submissions: 31 July 2023
- Notification of final decisions: 30 September 2023
- Tentative publication: December 2023

[Submission Information]
Prospective authors are invited to submit their manuscripts
electronically through the ACM TOMM online submission system (see
https://mc.manuscriptcentral.com/tomm) while adhering strictly to the
journal guidelines (see https://tomm.acm.org/authors.cfm). For the
article type, please select the Special Issue denoted SI: Realistic
Synthetic Data: Generation, Learning, Evaluation.

Submitted manuscripts should not have been published previously, nor
be under consideration for publication elsewhere. If the submission is
an extended work of a previously published conference paper, please
include the original work and a cover letter describing the new
content and results that were added. According to ACM TOMM publication
policy, previously published conference papers can be eligible for
publication provided that at least 40% new material is included in the
journal version.

[Contact]
For questions and further information, please contact Bogdan Ionescu /
bogdan.ionescu@upb.ro.

[Acknowledgement]
The Special Issue is endorsed by the AI4Media 'A Centre of Excellence
delivering next generation AI Research and Training at the service of
Media, Society and Democracy' H2020 ICT-48-2020 project
https://www.ai4media.eu/.

On behalf of the Guest Editors,
Bogdan Ionescu
https://www.aimultimedialab.ro/
