CfP: ACM TiiS Special Issue on Machine Learning for Multiple Modalities in Interactive Systems and Robots
Wednesday, April 10, 2013, by Chris Wellekens
Call for Papers

Special Issue of the ACM Transactions on Interactive Intelligent Systems on
MACHINE LEARNING FOR MULTIPLE MODALITIES IN INTERACTIVE SYSTEMS AND ROBOTS

Main submission deadline: February 28th, 2013
http://tiis.acm.org/special-issues.html

AIMS AND SCOPE

This special issue will highlight research that applies machine learning to robots and other systems that interact with users through more than one modality, such as speech, touch, gestures, and vision. Interactive systems such as multimodal interfaces, robots, and virtual agents often use some combination of these modalities to communicate meaningfully. For example, a robot may coordinate its speech with its actions, taking into account visual feedback during their execution. Alternatively, a multimodal system can adapt its input and output modalities to the user's goals, workload, and surroundings.

Machine learning provides interactive systems with opportunities to improve performance not only of individual components but also of the system as a whole. However, machine learning methods that encompass multiple modalities of an interactive system are still relatively rare. This special issue aims to help fill this gap.

The dimensions listed below indicate the range of work that is relevant to the special issue. Each article will normally represent one or more points on each of these dimensions. In case of doubt about the relevance of your topic, please contact the special issue associate editors.

TOPIC DIMENSIONS

System Types
- Interactive robots
- Embodied virtual characters
- Avatars
- Multimodal systems

Machine Learning Paradigms
- Reinforcement learning
- Active learning
- Supervised learning
- Unsupervised learning
- Any other learning paradigm

Functions to Which Machine Learning Is Applied
- Multimodal recognition and understanding in dialog with users
- Multimodal generation to present information through several channels
- Alignment of gestures with verbal output during interaction
- Adaptation of system skills through interaction with human users
- Any other functions, especially those combining two or all of speech, touch, gestures, and vision

SPECIAL ISSUE ASSOCIATE EDITORS
- Heriberto Cuayahuitl, Heriot-Watt University, UK (contact: h.cuayahuitl[at]gmail[dot]com)
- Lutz Frommberger, University of Bremen, Germany
- Nina Dethlefs, Heriot-Watt University, UK
- Antoine Raux, Honda Research Institute, USA
- Matthew Marge, Carnegie Mellon University, USA
- Hendrik Zender, Nuance Communications, Germany

IMPORTANT DATES
- By February 28th, 2013: Submission of manuscripts
- By June 12th, 2013: Notification about decisions on initial submissions
- By September 10th, 2013: Submission of revised manuscripts
- By November 9th, 2013: Notification about decisions on revised manuscripts
- By December 9th, 2013: Submission of manuscripts with final minor changes
- Starting January 2014: Publication of the special issue on the TiiS website, in the ACM Digital Library, and subsequently as a printed issue

HOW TO SUBMIT

Please see the instructions for authors on the TiiS website (tiis.acm.org).

ABOUT ACM TiiS

TiiS (pronounced 'T double-eye S'), launched in 2010, is an ACM journal for research about intelligent systems that people interact with.