[Special Issue] Deep Learning for Speech and Language Processing Applications, EURASIP

Manuscript due: Dec. 15, 2014

Journal: EURASIP Journal on Audio, Speech, and Music Processing

Description:

Deep learning techniques have enjoyed enormous success in the speech and
language processing community over the past few years, beating previous
state-of-the-art approaches to acoustic modeling, language modeling, and
natural language processing. A common theme across different tasks is
that the depth of the network allows useful representations to be
learned. For example, in acoustic modeling, the ability of deep
architectures to disentangle multiple factors of variation in the input,
such as various speaker-dependent effects on speech acoustics, has led
to excellent improvements in speech recognition performance on a wide
variety of tasks. In addition, in natural language processing and
language modeling tasks, integrating learned vector space models of
words, which perform smoothing and clustering based on semantic and
syntactic information contained in word contexts, with recurrent or
recursive architectures has led to significant advances.
 
We as a community should continue to understand what makes deep learning
successful for speech and language, and how further improvements can be
achieved. For example, just as deep networks made us re-think the input
feature representation pipeline used for speech recognition, we should
continue to push deep learning into other areas of the speech
recognition pipeline.
 
In addition, new architectures, such as convolutional neural networks
and recurrent networks using long short-term memory cells, have improved
performance, and we believe alternative architectures can improve
performance further. Furthermore, optimizing large neural network
models remains a major challenge, both because of the computational
cost and because of the amount of training data, which may be unlabeled.


Topics of interest include:

* New deep-learning architectures and algorithms
* Optimization strategies for deep learning
* Improved adaptation methods for deep learning
* Unsupervised and semi-supervised training for deep learning
* Novel applications of deep learning for speech and language tasks
* Theoretical and empirical understanding of deep learning for speech and language
* Deep-learning toolkits and/or platforms for big data

Lead Guest Editor:

* Tara Sainath, Google Inc., USA

Guest Editors:

* Michiel Bacchiani, Google Inc., USA
* Hui Jiang, York University, Canada
* Brian Kingsbury, IBM Thomas J. Watson Research Center, USA
* Hermann Ney, RWTH Aachen, Germany
* Frank Seide, Microsoft Research Asia, China
* Andrew Senior, Google Inc., USA

For more information about this special issue, please visit:

http://si.eurasip.org/issues/38/deep-learning-for-speech-and-language-processing/