CHI 2016 Workshop on Designing Speech and Multimodal Interactions for Mobile, Wearable, & Pervasive Applications (Final CFP and deadline extension)
Important Dates:
* January 18th, 2016: Submission of position papers
* January 25th, 2016: Notification of acceptance
* February 10th, 2016: Submission of camera-ready accepted papers
* May 7-8, 2016: Workshop

http://www.dgp.toronto.edu/dsli2016
CHI 2016, San Jose, CA, USA
This workshop aims to develop speech and multimodal interaction as a more established area of study within HCI, leveraging current engineering advances in ASR, NLP, TTS, multimodal/gesture recognition, and brain-computer interfaces. Advances in HCI can, in turn, inform the design of NLP and ASR algorithms so that they better address the usability challenges of speech and multimodal interfaces. We also aim to increase the cohesion between research currently dispersed across multiple areas (e.g., HCI, wearable design, ASR, NLP, BCI, speech, EMG interaction, and eye-gaze input). Our hope is that by focusing and challenging the research community on multi-input modalities for wearables, we will energize the CHI and engineering communities to push the boundaries of what is possible with wearable, mobile, and pervasive computing, and to make advances in each of these respective communities.
Through interdisciplinary dialogue, our goal is to create momentum in:
* Formally framing the challenges to the widespread adoption of speech and natural language interaction,
* Taking concrete steps toward developing a framework of user-centric design guidelines for speech- and language-based interactive systems, grounded in good usability practices,
* Establishing directions towards identifying further research opportunities in designing natural interactions that make use of speech and natural language, and
* Identifying key challenges and opportunities for enabling and designing multi-input modalities for a wide range of wearable device classes.
We invite the submission of position papers demonstrating research, design, practice, or interest in areas related to speech, language, and multimodal interaction that address one or more of the workshop goals, with an emphasis on, but not limited to, applications such as mobile, wearable, or pervasive computing.
Position papers should be 4 to 6 pages long, in the ACM SIGCHI extended abstract format, and include a brief statement justifying the fit with the workshop's topic. Summaries of previous research are welcome if they contribute to the workshop's multidisciplinary goals (e.g., speech processing research in clear need of HCI expertise). Submissions will be reviewed according to:
* Fit with the workshop topic
* Potential to contribute to the workshop goals
* A demonstrated track record of research in the topic of the workshop (HCI or speech/multimodal processing, with an interest in both areas)
To submit a paper, please visit: http://www.dgp.toronto.edu/dsli2016/submissions
 
 