Multimodal Interaction in Automotive Applications
=================================================

Special issue of the Springer Journal on Multimodal User Interfaces

 

With the smartphone becoming ubiquitous, pervasive distributed computing is becoming a reality, and aspects of the Internet of Things are finding their way into many areas of our daily lives. Users interact multimodally with their smartphones, and their expectations of natural interaction have risen dramatically in recent years. Moreover, users have started to project these expectations onto all kinds of interfaces they encounter in their daily lives. Car manufacturers do not yet fully meet these expectations, since automotive development cycles are still much longer than those of the software industry. The clear trend, however, is that manufacturers add technology to cars to deliver on their vision and promise of a safer drive. Multiple modalities are already available in today's dashboards, including haptic controllers, touch screens, 3D gestures, voice, secondary displays, and gaze.

In fact, car manufacturers are aiming for a personal assistant with a deep understanding of the car and the ability to meet both driving-related demands and non-driving-related needs. For instance, such an assistant can naturally answer any question about the car and help schedule service when needed. It can find the preferred gas station along the route or, even better, plan a stop while ensuring that the driver still arrives in time for a meeting. It understands that a perfect business meal involves more than finding a sponsored restaurant: it includes unbiased reviews, availability, budget, and trouble-free parking, and it notifies all invitees of the meeting time and location. Moreover, multimodality can serve as a source for fatigue detection. The main goal of multimodal interaction and driver assistance systems is to ensure that the driver can focus on the primary task of driving safely.

 

This is why the biggest innovations in today's cars have happened in the way we interact with integrated devices such as the infotainment system. For instance, voice-based interaction has been shown to be less distracting than interaction with a visual-haptic interface, but it is only one piece of how we interact multimodally in today's cars, which are shifting away from the GUI as the only means of interaction. This shift also requires additional effort to establish a mental model for the user: with a plethora of available modalities, each requiring its own mental map, learnability has decreased considerably. Here, too, multimodality may help decrease distraction. In this special issue we will present the challenges and opportunities of multimodal interaction for reducing cognitive load and increasing learnability, as well as current research that has the potential to be employed in tomorrow's cars.
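To make the combination of modalities concrete, the sketch below shows one common pattern, late fusion, in which independent recognizer hypotheses (e.g., from voice and gesture) are merged into a single user intent. This is a minimal illustration only, not part of the call; the Hypothesis type, the fuse function, and the confidence values are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Hypothesis:
        intent: str        # e.g. "navigate_home" (hypothetical intent labels)
        confidence: float  # recognizer score in [0, 1]
        modality: str      # "voice", "gesture", "gaze", ...

    def fuse(hypotheses: list[Hypothesis]) -> Hypothesis:
        # Late fusion: accumulate confidence per intent across modalities
        # and return the best-supported intent.
        scores: dict[str, float] = {}
        for h in hypotheses:
            scores[h.intent] = scores.get(h.intent, 0.0) + h.confidence
        best = max(scores, key=scores.get)
        return Hypothesis(best, scores[best], "fused")

    # A voice command and a pointing gesture that agree reinforce each other:
    print(fuse([
        Hypothesis("navigate_home", 0.6, "voice"),
        Hypothesis("navigate_home", 0.3, "gesture"),
        Hypothesis("accept_call", 0.5, "voice"),
    ]))  # -> Hypothesis(intent='navigate_home', confidence=0.9, modality='fused')

Agreeing modalities raise the combined score, which is one simple way such systems can disambiguate input and estimate user intentions from multiple modalities.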

For this special issue, we invite researchers, scientists, and developers to submit original, unpublished contributions that have not been submitted to any other journal, magazine, or conference. We expect at least 30% novel content. We solicit original research related to multimodal smart and interactive media technologies in areas including, but not limited to, the following:

* In-vehicle multimodal interaction concepts

* Multimodal Head-Up Displays (HUDs) and Augmented Reality (AR) concepts

* Reducing driver distraction, cognitive load, and demand with multimodal interaction

* (Pro-active) in-car personal assistant systems

* Driver assistance systems

* Information access (search, browsing, etc.) in the car

* Interfaces for navigation

* Text input and output while driving

* Biometrics and physiological sensors as a user interface component

* Multimodal affective intelligent interfaces

* Multimodal automotive user-interface frameworks and toolkits

* Naturalistic/field studies of multimodal automotive user interfaces

* Multimodal automotive user-interface standards

* Detecting and estimating user intentions employing multiple modalities

 

Guest Editors

=============

Dirk Schnelle-Walka, Harman International, Connected Car Division, Germany

Phil Cohen, Voicebox, USA

Bastian Pfleging, Ludwig-Maximilians-Universität München, Germany

 

Submission Instructions

=======================

 

1-page abstract submission: Feb 5, 2018

Invitation for full submission: March 15, 2018

Full submission: April 28, 2018

Notification of acceptance: June 15, 2018

Final article submission: July 15, 2018

Tentative publication: ~ Sept 2018

 

Companion website: https://sites.google.com/view/multimodalautomotive/

 

Authors are requested to follow the instructions for manuscript submission to the Journal on Multimodal User Interfaces (http://www.springer.com/computer/hci/journal/12193) and to submit manuscripts at the following link: https://easychair.org/conferences/?conf=mmautomotive2018.

