ISCA - International Speech Communication Association



ISCApad #167

Sunday, May 13, 2012 by Chris Wellekens

7-12 CfP IEEE Journal of Selected Topics in Signal Processing (JSTSP) Special Issue on 'Advances in Spoken Dialogue Systems and Mobile Interfaces: Theory and Applications'
  
Call For Papers: IEEE Journal of Selected Topics in Signal Processing (JSTSP) Special Issue on 'Advances in Spoken Dialogue Systems and Mobile Interfaces: Theory and Applications'

http://www.signalprocessingsociety.org/uploads/special_issues_deadlines/spoken_dialogue.pdf

Recently, there has been an array of advances in both the theory and practice of spoken dialogue systems, especially on mobile devices. On the theory side, foundational models and algorithms (e.g., Partially Observable Markov Decision Processes (POMDPs), reinforcement learning, and Gaussian process models) have advanced the state of the art on a number of fronts. For example, techniques have been presented which improve system robustness, enabling systems to achieve high performance even when faced with recognition inaccuracies. Other methods have been proposed for learning from interactions, to improve performance automatically. Still other methods have shown how systems can make use of speech input and output incrementally and in real time, raising levels of naturalness and responsiveness.
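The learning-from-interaction methods mentioned above are often framed as reinforcement learning over dialogue states. As a hedged illustration only (the states, actions, rewards, and dynamics below are invented for this sketch and are not taken from the call), a minimal tabular Q-learning loop for a toy two-state dialogue might look like:

```python
import random

# Hypothetical sketch: tabular Q-learning for a toy two-state dialogue.
# All state names, actions, rewards, and dynamics are invented for
# illustration; they do not come from the call for papers.
random.seed(0)

STATES = ["uncertain", "confident"]
ACTIONS = ["ask", "confirm"]
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2
Q = {(s, a): 0.0 for s in STATES for a in ACTIONS}

def step(state, action):
    """Toy dynamics: asking a clarifying question makes the system confident;
    confirming ends the dialogue, succeeding only from the confident state."""
    if action == "ask":
        return "confident", -1.0, False   # small cost for an extra turn
    reward = 10.0 if state == "confident" else -5.0
    return None, reward, True             # dialogue ends after a confirm

for _ in range(500):                      # learn from simulated dialogues
    state, done = "uncertain", False
    while not done:
        if random.random() < EPSILON:     # epsilon-greedy exploration
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        nxt, reward, done = step(state, action)
        target = reward if done else reward + GAMMA * max(
            Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (target - Q[(state, action)])
        state = nxt

# Greedy policy read off the learned Q-values.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in STATES}
```

In this toy setup, the learned policy asks a clarifying question while uncertain and confirms once confident; real systems replace the hand-written `step` with interactions with users or a user simulator.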

On the applications side, interesting new results on spoken dialogue systems are becoming available from both research settings and deployments 'in the wild', for example on deployed services such as Apple's Siri, Google's voice actions, Bing voice search, Nuance's Dragon Go!, and Vlingo. Speech input is now commonplace on smartphones and is well established as a convenient alternative to keyboard input for tasks such as control of phone functionality, dictation of messages, and web search. Recently, intelligent personal assistants have begun to appear, both as applications and as features of the operating system. Many of these new assistants are much more than a straightforward keyboard replacement: they are first-class multi-modal dialogue systems that support sustained interactions, using spoken language, over multiple turns. New system architectures and engineering algorithms have also been investigated in research labs, leading to more forward-looking spoken dialogue systems.

This special issue seeks to draw together advances in spoken dialogue systems from both research and industry. Submissions covering any aspect of spoken dialogue systems are welcome. Specific (but not exhaustive) topics of interest include the following, in relation to spoken dialogue systems and mobile interfaces:

- theoretical foundations of spoken dialogue system design, learning, evaluation, and simulation
- dialogue tracking, including explicit representations of uncertainty in dialogue systems, such as Bayesian networks; domain representation and detection
- dialogue control, including reinforcement learning, (PO)MDPs, decision theory, utility functions, and personalization for dialogue systems
- foundational technologies for dialogue systems, including acoustic models, language models, language understanding, text-to-speech, and language generation; incremental approaches to input and output; usage of affect
- applications, settings, and practical evaluations, such as voice search, text message dictation, multi-modal interfaces, and usage while driving
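To make the "explicit representations of uncertainty" topic above concrete: dialogue-state trackers often maintain a probability distribution over candidate user goals and update it with each noisy recognition result. A minimal Bayesian belief-update sketch (the goal names and likelihoods below are hypothetical, chosen only for illustration) could look like:

```python
# Hypothetical sketch: a Bayesian belief update over candidate user goals,
# one way to represent dialogue-state uncertainty explicitly.  The goal
# names and likelihood values below are invented for illustration.

def update_belief(belief, obs_likelihood):
    """Return the posterior P(goal | obs) given the prior P(goal) and
    ASR-derived likelihoods P(obs | goal)."""
    posterior = {g: p * obs_likelihood.get(g, 1e-9) for g, p in belief.items()}
    z = sum(posterior.values())           # normalising constant
    return {g: p / z for g, p in posterior.items()}

# Uniform prior over three candidate user goals.
belief = {"restaurant": 1 / 3, "hotel": 1 / 3, "taxi": 1 / 3}

# A noisy recognition result that weakly favours "restaurant".
belief = update_belief(belief, {"restaurant": 0.7, "hotel": 0.2, "taxi": 0.1})
best_goal = max(belief, key=belief.get)
```

Because the full distribution is kept rather than a single hypothesis, a downstream dialogue policy can choose to confirm, ask again, or act, depending on how peaked the belief is.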

Papers must be submitted online. The bulk of the issue will be original research papers, and priority will be given to papers with high novelty and originality. Tutorial/overview/survey papers are also welcome, although space is available for only a limited number of papers of this type; they will be evaluated on the basis of overall impact.

- Manuscript submission: http://mc.manuscriptcentral.com/jstsp-ieee
- Information for authors: http://www.signalprocessingsociety.org/publications/periodicals/jstsp/jstsp-author-info/

Dates
- Submission of papers: July 15, 2012
- First review: September 15, 2012
- Revised submission: October 10, 2012
- Second review: November 10, 2012
- Submission of final material: November 20, 2012

Guest editors:
- Kai Yu, co-lead guest editor (Shanghai Jiao Tong University)
- Jason Williams, co-lead guest editor (Microsoft Research)
- Brahim Chaib-draa (Laval University)
- Oliver Lemon (Heriot-Watt University)
- Roberto Pieraccini (ICSI)
- Olivier Pietquin (SUPELEC)
- Pascal Poupart (University of Waterloo)
- Steve Young (University of Cambridge)



