ISCA - International Speech
Communication Association



ISCApad #235

Wednesday, January 10, 2018 by Chris Wellekens

7 Journals
7-1 CfP Special Issue of Speech Communication on *REALISM IN ROBUST SPEECH AND LANGUAGE PROCESSING*
Speech Communication
 
Special Issue on *REALISM IN ROBUST SPEECH AND LANGUAGE  PROCESSING*
 
*Deadline: May 31st, 2017*   (For further information see attached)
 
How can you be sure that your research has actual impact in real-world applications? This is one of the major challenges currently faced in many areas of speech processing as laboratory solutions migrate to real-world applications, which is what we address by the term 'realism'. Real application scenarios involve a range of acoustic, speaker, and language variabilities that challenge the robustness of systems. Because early evaluations in the targeted practical scenarios are rarely feasible, many developments rely on simulated data, which raises concerns about the viability of these solutions in real-world environments.
 
Information about the conditions a dataset must meet to be realistic, and experimental evidence about which conditions actually matter for evaluating a given task, is sparse in the literature. Motivated by the growing importance of robustness in commercial speech and language processing applications, this Special Issue aims to provide a venue for research advancements, recommendations for best practices, and tutorial-like papers about realism in robust speech and language processing.
 
Prospective authors are invited to submit original papers in areas related to the problem of realism in robust speech and language processing, including: speech enhancement, automatic speech, speaker and language recognition, language modeling, speech synthesis and perception, affective speech processing, paralinguistics, etc. Contributions may include, but are not limited to:
 
 -   Position papers from researchers or practitioners for best practice recommendations and advice regarding different kinds of real and simulated setups for a given task
 -   Objective experimental characterization of real scenarios in terms of acoustic conditions (reverberation, noise, sensor variability, source/sensor movement, environment change, etc.)
 -   Objective experimental characterization of real scenarios in terms of speech characteristics (spontaneous speech, number of speakers, vocal effort, effect of age, non-neutral speech, etc.)
 -   Objective experimental characterization of real scenarios in terms of language variability 
 -   Real data collection protocols
 -   Data simulation algorithms
 -   New datasets suitable for research on robust speech processing
 -   Performance comparison on real vs. simulated datasets for a given task and a range of methods
 -   Analysis of advantages vs. weaknesses of simulated and/or real data, and techniques for addressing these weaknesses
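Several of the topics above turn on the contrast between real recordings and simulated data. As a minimal sketch of the kind of simulation involved (not from any cited toolkit; the function and variable names are illustrative), the snippet below mixes a noise signal into clean speech at a chosen signal-to-noise ratio (SNR):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that adding it to `speech` yields the target SNR in dB."""
    p_speech = np.mean(speech ** 2)  # average power of the clean signal
    p_noise = np.mean(noise ** 2)    # average power of the noise
    # Choose `scale` so that p_speech / (scale**2 * p_noise) == 10**(snr_db / 10)
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

# Toy example: two white-noise signals mixed at 10 dB SNR
rng = np.random.default_rng(0)
speech = rng.standard_normal(16000)
noise = rng.standard_normal(16000)
noisy = mix_at_snr(speech, noise, 10.0)
```

Whether such additive mixing is realistic enough for a given task (versus recording in the target environment, or convolving with measured room impulse responses first) is precisely the kind of question this Special Issue invites.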
 
Papers written by practitioners and industry researchers are especially welcome. If there is any doubt about the suitability of your paper for this special issue, please contact us before submission.
 
 
*Submission instructions: *
Manuscript submissions shall be made through EVISE at https://www.evise.com/profile/#/SPECOM/login
Select article type 'SI:Realism Speech Processing'
 
 
*Important dates: *
March 1, 2017: Submission portal open 
May 31, 2017: Paper submission
September 30, 2017: First review
November 30, 2017: Revised submission
April 30, 2018: Completion of revision process
 
 
*Guest Editors: *
Dayana Ribas, CENATAV, Cuba
Emmanuel Vincent, Inria, France
John Hansen, UTDallas, USA

7-2 CfP IEEE Journal of Selected Topics in Signal Processing: Special Issue on End-to-End Speech and Language Processing

CALL FOR PAPERS

IEEE Journal of Selected Topics in Signal Processing

Special Issue on End-to-End Speech and Language Processing

End-to-end (E2E) systems have achieved competitive results compared to conventional hybrid Hidden Markov-deep neural network model-based automatic speech recognition (ASR) systems. Such E2E systems are attractive because they do not require initial alignments between input acoustic features and output graphemes or words. Very deep convolutional networks and recurrent neural networks have also been very successful in ASR systems due to their added expressive power and better generalization.

ASR is often not the end goal of real-world speech information processing systems. Instead, an important end goal is information retrieval, in particular keyword search (KWS), which involves retrieving speech documents containing a user-specified query from a large database. Conventional keyword search uses an ASR system as a front-end that converts the speech database into a finite-state transducer (FST) index containing a large number of likely word or sub-word sequences for each speech segment, along with associated confidence scores and time stamps. A user-specified text query is then composed with this FST index to find the putative locations of the keyword along with confidence scores. More recently, inspired by E2E approaches, ASR-free keyword search systems have been proposed with limited success. Machine learning methods have also been very successful in Question-Answering, parsing, language translation, analytics and deriving representations of morphological units, words or sentences.

Challenges such as the Zero Resource Speech Challenge aim to construct systems that learn an end-to-end Spoken Dialog (SD) system, in an unknown language, from scratch, using only information available to a language-learning infant (zero linguistic resources). The principal objective of the recently concluded IARPA Babel program was to develop a keyword search system that delivers high accuracy for any new language given very limited transcribed speech, noisy acoustic and channel conditions, and limited system build time of one to four weeks. This special issue will showcase the power of novel machine learning methods not only for ASR, but for keyword search and for the general processing of speech and language.

Topics of interest in the special issue include (but are not limited to):

  • Novel end-to-end speech and language processing
  • Query-by-example search
  • Deep learning based acoustic and word representations
  • Question answering systems
  • Multilingual dialogue systems
  • Multilingual representation learning
  • Low and zero resource speech processing
  • Deep learning based ASR-free keyword search
  • Deep learning based media retrieval
  • Kernel methods applied to speech and language processing
  • Acoustic unit discovery
  • Computational challenges for deep end-to-end systems
  • Adaptation strategies for end-to-end systems
  • Noise robustness for low resource speech recognition systems
  • Spoken language processing: speech to speech translation, speech retrieval, extraction, and summarization
  • Machine learning methods applied to morphological, syntactic, and pragmatic analysis
  • Computational semantics: document analysis, topic segmentation, categorization, and modeling
  • Named entity recognition, tagging, chunking, and parsing
  • Sentiment analysis, opinion mining, and social media analytics
  • Deep learning in human computer interaction

Dates:

- Manuscript submission: April 1, 2017
- First review completed: June 1, 2017
- Revised Manuscript Due: July 15, 2017
- Second Review Completed: August 15, 2017
- Final Manuscript Due: September 15, 2017
- Publication: December 2017

Guest Editors:

- Nancy F. Chen, Institute for Infocomm Research (I2R), A*STAR, Singapore
- Mary Harper, Army Research Laboratory, USA
- Brian Kingsbury, IBM Watson, IBM T.J. Watson Research Center, USA
- Kate Knill, Cambridge University, U.K.
- Bhuvana Ramabhadran, IBM Watson, IBM T.J. Watson Research Center, USA


7-3 Travaux Interdisciplinaires sur la Parole et le Langage, TIPA

The TIPA editorial team is pleased to announce the publication of the latest issue of the journal on Revues.org:

Travaux Interdisciplinaires sur la Parole et le Langage, TIPA n° 32 | 2016:
Conflit en discours et discours en conflit : approches linguistiques et communicatives ('Conflict in discourse and discourse in conflict: linguistic and communicative approaches')

edited by Tsuyoshi Kida and Laura-Anca Parepa
http://tipa.revues.org

This issue will be complemented by n° 33 | 2017, on the same theme:
Conflit en discours et discours en conflit : approches interdisciplinaires ('Conflict in discourse and discourse in conflict: interdisciplinary approaches')

edited by Laura-Anca Parepa and Tsuyoshi Kida


7-4 CfP IEEE Journal of Selected Topics in Signal Processing: Special Issue on End-to-End Speech and Language Processing

Call for Papers

IEEE Journal of Selected Topics in Signal Processing
Special Issue on End-to-End Speech and Language Processing

End-to-end (E2E) systems have achieved competitive results compared to conventional hybrid Hidden Markov-deep neural network model-based automatic speech recognition (ASR) systems. Such E2E systems are attractive because they do not require initial alignments between input acoustic features and output graphemes or words. Very deep convolutional networks and recurrent neural networks have also been very successful in ASR systems due to their added expressive power and better generalization. ASR is often not the end goal of real-world speech information processing systems. Instead, an important end goal is information retrieval, in particular keyword search (KWS), which involves retrieving speech documents containing a user-specified query from a large database. Conventional keyword search uses an ASR system as a front-end that converts the speech database into a finite-state transducer (FST) index containing a large number of likely word or sub-word sequences for each speech segment, along with associated confidence scores and time stamps. A user-specified text query is then composed with this FST index to find the putative locations of the keyword along with confidence scores. More recently, inspired by E2E approaches, ASR-free keyword search systems have been proposed with limited success. Machine learning methods have also been very successful in Question-Answering, parsing, language translation, analytics and deriving representations of morphological units, words or sentences. Challenges such as the Zero Resource Speech Challenge aim to construct systems that learn an end-to-end Spoken Dialog (SD) system, in an unknown language, from scratch, using only information available to a language learning infant (zero linguistic resources). 
The principal objective of the recently concluded IARPA Babel program was to develop a keyword search system that delivers high accuracy for any new language given very limited transcribed speech, noisy acoustic and channel conditions, and limited system build time of one to four weeks. This special issue will showcase the power of novel machine learning methods not only for ASR, but for keyword search and for the general processing of speech and language.
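Production keyword-search systems compile ASR lattices into weighted FST indexes (for example with OpenFst-style factor transducers) and compose the query transducer with that index. As a much-simplified, hedged sketch of the same idea, the snippet below builds a plain inverted index from hypothesized word sequences carrying confidence scores and time stamps, then looks up a single-word text query; all names are illustrative, not an actual KWS API:

```python
from collections import defaultdict

def build_index(hypotheses):
    """hypotheses: {utterance_id: [(word, start_sec, end_sec, confidence), ...]}
    Returns an inverted index: word -> [(utterance_id, start, end, confidence), ...]."""
    index = defaultdict(list)
    for utt_id, words in hypotheses.items():
        for word, start, end, conf in words:
            index[word].append((utt_id, start, end, conf))
    return index

def search(index, query):
    """Return putative hit locations for a text query, best-scoring first."""
    return sorted(index.get(query, []), key=lambda hit: -hit[3])

# Toy hypotheses standing in for decoded lattices
hyps = {
    "utt1": [("hello", 0.0, 0.4, 0.9), ("world", 0.4, 0.8, 0.7)],
    "utt2": [("hello", 1.2, 1.6, 0.5)],
}
index = build_index(hyps)
hits = search(index, "hello")  # [("utt1", 0.0, 0.4, 0.9), ("utt2", 1.2, 1.6, 0.5)]
```

Unlike this sketch, a real FST index handles multi-word and sub-word queries by path composition and sums posterior scores over competing lattice paths; the sketch only conveys the index-then-compose-query structure described above.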

Topics of interest in the special issue include (but are not limited to):

  • Novel end-to-end speech and language processing
  • Deep learning based acoustic and word representations
  • Query-by-example search
  • Question answering systems
  • Multilingual dialogue systems
  • Multilingual representation learning
  • Low and zero resource speech processing
  • Deep learning based ASR-free keyword search
  • Deep learning based media retrieval
  • Kernel methods applied to speech and language processing
  • Acoustic unit discovery
  • Computational challenges for deep end-to-end systems
  • Adaptation strategies for end to end systems
  • Noise robustness for low resource speech recognition systems
  • Spoken language processing: speech retrieval, speech to speech translation, extraction, and summarization
  • Machine learning methods applied to morphological, syntactic, and pragmatic analysis
  • Computational semantics: document analysis, topic segmentation, categorization, and modeling
  • Named entity recognition, tagging, chunking, and parsing
  • Sentiment analysis, opinion mining, and social media analytics
  • Deep learning in human computer interaction

Prospective authors should follow the instructions given on the IEEE JSTSP webpages: https://signalprocessingsociety.org/publications-resources/ieee-journal-selected-topics-signal-processing, and submit their manuscript with the web submission system at: https://mc.manuscriptcentral.com/jstsp-ieee.

Important Dates:
- Manuscript submission: April 1, 2017
- First review completed: June 1, 2017
- Revised Manuscript Due: July 15, 2017
- Second Review Completed: August 15, 2017
- Final Manuscript Due: September 15, 2017
- Publication: December 2017

Guest Editors:
- Nancy F. Chen, Institute for Infocomm Research (I2R), A*STAR, Singapore
- Mary Harper, Army Research Laboratory, USA
- Brian Kingsbury, IBM Watson, IBM T.J. Watson Research Center, USA
- Kate Knill, Cambridge University, U.K.
- Bhuvana Ramabhadran, IBM Watson, IBM T.J. Watson Research Center, USA


7-5 La langue des signes, c'est comme ça. Revue TIPA

Revue TIPA n°34, 2018
 
Travaux interdisciplinaires sur la parole et le langage

 http://tipa.revues.org/



LA LANGUE DES SIGNES, C'EST COMME ÇA ('Sign language, that's how it is')

Sign language: state of the art, description, formalization, uses

 

Guest editor:
Mélanie Hamm, Laboratoire Parole et Langage, Aix-Marseille Université


 

'La langue des signes, c'est comme ça' refers to Yves Delaporte's book 'Les sourds, c'est comme ça' (2002), which describes the world of the deaf, French Sign Language, and its specific features. One particularity of French Sign Language is the specific gesture meaning COMME ÇA ('like that')[1], a frequent expression among the deaf that conveys a certain respectful, non-judgmental distance towards what surrounds us. It is with this same outlook, close to plain and precise scientific probity, that we will attempt to approach signed languages.

 

Even though there have been advances in the linguistics of signed languages in general, and of French Sign Language in particular, notably since the work of Christian Cuxac (1983), Harlan Lane (1991) and Susan D. Fischer (2008), sign language linguistics remains an underdeveloped field. Moreover, French Sign Language is an endangered language, threatened with extinction (Moseley, 2010; Unesco, 2011). But what is this language? How should it be defined? What are its 'mechanisms'? What is its structure? How should it be 'considered', from what angle, using which approaches? This silent language challenges a number of linguistic postulates, such as the universality of the phoneme, and raises many questions for which there are as yet no satisfactory answers. In what ways is it similar to and different from oral languages? Does it belong only to deaf speakers? Should it be studied, shared, preserved, and documented like any language belonging to the intangible heritage of humanity (Unesco, 2003)? How should it be taught, and with what means? What does history tell us about this? What is the future of signed languages? What do those most directly concerned say? A set of open and very contemporary questions.

 

Issue 34 of the journal Travaux Interdisciplinaires sur la Parole et le Langage aims to take stock of the state of research and of the various lines of work on this singular language, while avoiding 'locking' it into a single discipline. We are looking for original articles on sign languages, and on French Sign Language in particular. They may offer descriptions, formalizations, or overviews of the uses of signed languages. Comparative approaches across different sign languages, reflections on variants and variation, sociolinguistic, semantic and structural considerations, and analyses of the etymology of signs may also be the subject of articles. In addition, space will be reserved for possible testimonies from deaf signers.

 

Articles submitted to TIPA are read and evaluated by the journal's review committee. They may be written in French or English and may include images, photos, and videos (see 'instructions to authors' at https://tipa.revues.org/222). A length of 10 to 20 pages is desired for each article, i.e. roughly 35,000 to 80,000 characters or 6,000 to 12,000 words; the recommended average length for each contribution is about 15 pages. Authors are asked to provide an abstract in the language of the article (French or English; 120 to 200 words), a long abstract of about two pages in the other language (French if the article is in English, and vice versa), and 5 keywords in both languages (French and English). Articles must be in .doc (Word) format and sent to the journal electronically at the following addresses: tipa@lpl-aix.fr and melanie.hamm@lpl-aix.fr.

                                                                       

 

References

COMPANYS, Monica (2007). Prêt à signer. Guide de conversation en LSF. Angers : Éditions Monica Companys.

CUXAC, Christian (1983). Le langage des sourds. Paris : Payot.

DELAPORTE, Yves (2002). Les sourds, c'est comme ça. Paris : Maison des sciences de l'homme.

FISCHER, Susan D. (2008). Sign Languages East and West. In : Piet Van Sterkenburg, Unity and Diversity of Languages. Philadelphia/Amsterdam : John Benjamins Publishing Company.

LANE, Harlan (1991). Quand l'esprit entend. Histoire des sourds-muets. Traduction de l'américain par Jacqueline Henry. Paris : Odile Jacob.

MOSELEY, Christopher (2010). Atlas des langues en danger dans le monde. Paris : Unesco.

UNESCO (2011). Nouvelles initiatives de l'UNESCO en matière de diversité linguistique : http://fr.unesco.org/news/nouvelles-initiatives-unesco-matiere-diversite-linguistique.

UNESCO (2003). Convention de 2003 pour la sauvegarde du patrimoine culturel immatériel : http://www.unesco.org/culture/ich/doc/src/18440-FR.pdf.

           

 

Schedule

April 2017: call for papers
September 2017: submission of the article (version 1)
October-November 2017: committee feedback: acceptance, requested revisions (to version 1), or rejection
End of January 2018: submission of the revised version (version 2)
February 2018: committee feedback (on version 2)
March-June 2018: submission of the final version
May-June 2018: publication

 

Instructions to authors

Please send 3 files electronically to tipa@lpl-aix.fr and melanie.hamm@lpl-aix.fr:
- one .doc file containing the title, name and affiliation of the author(s)
- two anonymous files, one in .doc format and the other in .pdf

For further details, authors may follow this link: http://tipa.revues.org/222

 

 

 


[1] See, for example, image 421, page 334, in Companys (2007), or the photo above.

 

                  


7-6 Speech and Language Processing for Behavioral and Mental Health Research and Applications, Computer Speech and Language (CSL)

                                                                         Call for Papers

                     Special Issue of COMPUTER SPEECH AND LANGUAGE

Speech and Language Processing for Behavioral and Mental Health Research and Applications

The promise of speech and language processing for behavioral and mental health research and clinical applications is profound. Advances in all aspects of speech and language processing, and their integration—ranging from speech activity detection, speaker diarization, and speech recognition to various aspects of spoken language understanding and multimodal paralinguistics—offer novel tools for both scientific discovery and creating innovative ways for clinical screening, diagnostics, and intervention support. Owing to the potential for widespread impact, research sites across all continents are actively engaged in this societally important research area, tackling a rich set of challenges including the inherent multilingual and multicultural underpinnings of behavioral manifestations. The objective of this Special Issue on Speech and Language Processing for Behavioral and Mental Health Applications is to bring together and share these advances in order to shape the future of the field. It will focus on technical issues and applications of speech and language processing in behavioral and mental health settings. Original, previously unpublished submissions are encouraged within (but not limited to) the following scope:

  • Analysis of mental and behavioral states in spoken and written language 
  • Technological support for ecologically- and clinically-valid data collection and pre-processing
  • Robust automatic recognition of behavioral attributes and mental states 
  • Cross-cultural, cross-linguistic, cross-domain mathematical approaches and applications 
  • Subjectivity modeling (mental states perception and behavioral annotation) 
  • Multimodal paralinguistics (e.g., voice, face, gesture) 
  • Neural mechanisms, physiological response, and interplay with expressed behaviors 
  • Databases and resources to support study of speech and language processing for mental health 
  • Applications: scientific mechanisms, clinical screening, diagnostics, & therapy/treatment support 
  • Example Domains: Autism spectrum disorders, addiction, family and relationship studies, major depressive disorders, suicidality, Alzheimer’s disease

Important Dates

  • Manuscript Due October 31, 2017
  • First Round of Reviews January 15, 2018
  • Second Round of Reviews     April 15, 2018
  • Publication Date June 30, 2018


Guest Editors

  • Chi-Chun Lee, National Tsing Hua University, Taiwan, cclee@ee.nthu.edu.tw
  • Julien Epps, University of New South Wales, Australia, j.epps@unsw.edu.au
  • Daniel Bone, University of Southern California, USA, dbone@usc.edu
  • Ming Li, Sun Yat-sen University, China, liming46@mail.sysu.edu.cn
  • Shrikanth Narayanan, University of Southern California, USA, shri@sipi.usc.edu

 

Submission Procedure

Authors should follow the Elsevier Computer Speech and Language manuscript format described at the journal site https://www.elsevier.com/journals/computer-speech-and-language/0885-2308/guide-for-authors#20000. Prospective authors should submit an electronic copy of their complete manuscript through the journal Manuscript Tracking System at http://www.evise.com/evise/jrnl/CSL. When submitting your papers, authors must select VSI:SLP-Behavior-mHealth as the article type.

 


7-7 IEEE CIS Newsletter on Cognitive and Developmental Systems (open access)

Release of the latest issue of the IEEE CIS Newsletter on Cognitive and Developmental Systems (open access).
This is a biannual newsletter addressing the sciences of developmental and cognitive processes in natural and artificial organisms, from humans to robots, at the crossroads of cognitive science, developmental psychology, artificial intelligence and neuroscience. 

It is available at: http://goo.gl/dyrg6s

Featuring dialog:
=== 'Exploring Robotic Minds by Predictive Coding Principle'

== Dialog initiated by Jun Tani
with responses from: Andy Clark, Doug Blank, James Marshall, Lisa Meeden, Stephane Doncieux, Giovanni Pezzulo, Martin Butz, Ezgi Kayhan, Johan Kwisthout and Karl Friston
== Topic: The idea that the brain is pro-actively making predictions of the future at multiple levels of a hierarchy has become a central topic in explaining human intelligence and in designing general artificial intelligence systems.
This dialog discusses whether hierarchical predictive coding enables a paradigm shift in developmental robotics and AI. In particular, the dialog reviews the importance of various mechanisms complementary to predictive coding, which are right now very actively researched in artificial intelligence: intrinsic motivation and curiosity, multi-goal learning, developmental stages (also called curriculum learning in machine learning), and the role of self-organization. They also underline several major challenges that need to be addressed for general artificial intelligence in autonomous robots, and that current research in deep learning fails to address: 1) the problem of the poverty of the stimulus: autonomous robots, like humans, have access to only a little data, as they have to collect it themselves under severe time and space constraints; 2) the problem of information sampling: which experiments/observations to make in order to improve one's world model. Finally, they also discuss how these mechanisms arise in infants and contribute to their development.

Call for new dialog:
=== 'One Developmental Cognitive Architecture to Rule Them All?'
== Dialog initiated by Matthias Rolf, Lorijn Zaadnoordijk, Johan Kwisthout
==  This new dialog asks whether and how it would be useful, both epistemologically and in practice, to aim towards the development of a 'standard integrated cognitive architecture', akin to the 'standard models' of physics. In particular, they ask this question in the context of understanding development in infants and of building developmental architectures, thus addressing the issue of architectures that not only learn, but are adaptive themselves. Those of you interested in reacting to this dialog initiation are welcome to submit a response by November 30th, 2017. The length of each response must be between 600 and 800 words including references (contact pierre-yves.oudeyer@inria.fr).

All issues of the newsletter are open access and available at: http://icdl-epirob.org/cdsnl

 
 
 

7-8 Special issue on Biosignal-Based Spoken Communication in IEEE/ACM Transactions on Audio, Speech, and Language Processing
As guest editors of the special issue on Biosignal-Based Spoken Communication in IEEE/ACM Transactions on Audio, Speech, and Language Processing, we are happy to announce that 13 papers along with a survey article have just been published on IEEE Xplore http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=6570655 (Issue 12, Dec. 2017).
 
Best regards 
 
Tanja Schultz, Cognitive Systems Lab, Faculty of Computer Science and Mathematics, University of Bremen, Bremen, Germany
Thomas Hueber, GIPSA-lab, CNRS/Grenoble Alpes University, Grenoble, France
Dean J. Krusienski, ASPEN Lab, Biomedical Engineering Institute, Old Dominion University, Norfolk, VA, USA
Jonathan S. Brumberg, Speech and Applied Neuroscience Lab, Speech-Language-Hearing Department, University of Kansas, Lawrence, KS, USA

 



