ISCA - International Speech
Communication Association



ISCApad #210

Sunday, December 13, 2015 by Chris Wellekens

3-3-22 (2016-05-07) CfP: Designing Speech and Multimodal Interactions for Mobile, Wearable, & Pervasive Applications, CHI 2016 Workshop, San Jose, CA

Call for Papers

Designing Speech and Multimodal Interactions for Mobile, Wearable, &
Pervasive Applications
CHI 2016 Workshop, San Jose, CA
http://www.dgp.toronto.edu/dsli2016

This workshop aims to develop speech and multimodal interaction as a more
established area of study within HCI, leveraging current engineering
advances in ASR, NLP, TTS, multimodal/gesture recognition, and
brain-computer interfaces. Advances in HCI can, in turn, inform the design
of NLP and ASR algorithms so that they better address the usability
challenges of speech and multimodal interfaces. We also aim to increase
the cohesion between research currently dispersed across multiple areas
(e.g., HCI, wearable design, ASR, NLP, BCI, speech, EMG interaction and
eye-gaze input). Our hope is that by focusing and challenging the research
community on multi-input modalities for wearables, we will energize the
CHI and engineering communities to push the boundaries of what is possible
with wearable, mobile, and pervasive computing, and also make advances in
each of these respective communities.

Through interdisciplinary dialogue, our goal is to create momentum in:
* Formally framing the challenges to the widespread adoption of speech and
natural language interaction,
* Taking concrete steps toward developing a framework of user-centric
design guidelines for speech- and language-based interactive systems,
grounded in good usability practices,
* Establishing directions towards identifying further research
opportunities in designing natural interactions that make use of speech
and natural language, and
* Identifying key challenges and opportunities for enabling and designing
multi-input modalities for a wide range of wearable device classes.

We invite the submission of position papers demonstrating research,
design, practice, or interest in areas related to speech, language, and
multimodal interaction that address one or more of the workshop goals,
with an emphasis on, but not limited to, applications such as mobile,
wearable, or pervasive computing.

Position papers should be 4 to 6 pages long, in the ACM SIGCHI extended
abstract format, and should include a brief statement justifying their fit
with the workshop's topic. Summaries of previous research are welcome if
they contribute to the workshop's multidisciplinary goals (e.g., speech
processing research in clear need of HCI expertise). Submissions will be
reviewed according to:
* Fit with the workshop topic
* Potential to contribute to the workshop goals
* A demonstrated track record of research in the workshop's topic area
(HCI or speech/multimodal processing, with an interest in both areas).

To submit a paper, please visit:
http://www.dgp.toronto.edu/dsli2016/submissions

Important Dates:
* January 13th, 2016: Submission of position papers
* January 20th, 2016: Notification of acceptance
* February 10th, 2016: Submission of camera-ready accepted papers


