7-7 IEEE Trans. on Affective Computing, Special Issue on Laughter Computing: towards machines able to deal with laughter

IEEE Transactions on Affective Computing
Special Issue on Laughter Computing: towards machines able to deal with laughter


TOPIC SUMMARY:
Laughter is a significant feature of human-human communication. It
conveys various meanings and accompanies different emotions, such as
amusement, relief, irony, or embarrassment. It has strong social
dimensions: for example, it can reduce the sense of threat in a group and
facilitate sociability and cooperation. It may also have positive
effects on learning, creativity, health, and well-being. Because of its
relevance to human-human communication, research on laughter deserves
close attention from the Affective Computing community. Several
recent initiatives, such as the Special Session on Laughter at the 6th
International Conference on Affective Computing and Intelligent
Interaction (ACII2015) and the series of Interdisciplinary Workshops on
Laughter and other Non-Verbal Vocalizations in Speech, attest to the
importance of the topic. Recent research projects have focused on laughter
by investigating automatic laughter processing and by developing proofs of
concept, experiments, and prototypes that exploit laughter to enhance
human-computer interaction.
Most research questions, however, remain unanswered. These concern,
for example, theoretical issues (e.g., how can laughter be modelled and
analysed as a multimodal phenomenon, including non-verbal full-body
expression? What is the relation between different expressions of
laughter, their perceived meanings, and their social functions?),
analysis (e.g., to what extent is multimodal analysis of laughter in
complex social scenarios feasible and effective?), and synthesis
techniques (e.g., can speech laughter be synthesized effectively?).
Overcoming the lack of HCI/HRI/HHI applications that exploit the
positive effects of laughter, together with a critical analysis of its
negative effects, is also of high interest. The acceptability of laughing
machines, whether virtual agents or robots, needs to be addressed as well.
The goal of this special issue is to gather recent achievements in
laughter computing in order to trigger new research directions in this
field. The focus is on computational models that deal with laughter
in human-computer and human-human interaction. Laughter is characterized
by complex expressive behaviour spanning several
modalities: auditory, facial expressions, body movements and postural
attitudes, and physiological signals. This special issue aims to take
into account the multimodal nature of laughter and its variety of
contexts and meanings, and to provide an interdisciplinary perspective on
ongoing scientific research and ICT developments.

Topics of interest include but are not limited to:
- Multimodal laughter detection and synthesis
- Computational models of laughter mimicry and contagion
- Multimodal datasets of different laughter types in both controlled and ecological contexts
- Laughter analysis in human-human communication
- Individual differences in the expression of laughter
- Modelling of different communicative meanings of laughter
- Laughter-based applications in HCI/HRI/HHI and future user-centric media
- Acceptability of laughter in HCI/HRI applications
- Laughter elicitation mechanisms (e.g., 'computational humour', KANSEI)
- Laughter as an expression of different emotions (e.g., amusement, embarrassment, relief)

IMPORTANT DATES:
Deadline for submissions: June 24, 2016
Review results: September 16, 2016
Deadline for submission of revised manuscripts: October 14, 2016
Final reviews: November 11, 2016

GUEST EDITORS:
- M. Mancini, DIBRIS, University of Genoa (Italy), maurizio.mancini@unige.it
- R. Niewiadomski, DIBRIS, University of Genoa (Italy), radoslaw.niewiadomski@dibris.unige.it
- S. Hashimoto, SHALAB, Dept. of Applied Physics, Waseda University (Japan), shuji@waseda.jp
- M.E. Foster, School of Computing Science, University of Glasgow (Scotland, UK), maryellen.foster@glasgow.ac.uk
- S. Scherer, Institute for Creative Technologies, University of Southern California (USA), scherer@ict.usc.edu
- G. Volpe, DIBRIS, University of Genoa (Italy), gualtiero.volpe@unige.it

SUBMISSION GUIDELINES:
Prospective authors are invited to submit their manuscripts
electronically after the 'open for submissions' date, adhering to the
IEEE Transactions on Affective Computing guidelines
(http://www.computer.org/web/tac/author). Please submit your papers
through the online system (https://mc.manuscriptcentral.com/taffc-cs)
and be sure to select the special issue or special section name.
Manuscripts should not be published or currently submitted for
publication elsewhere. Please submit only full papers intended for
review, not abstracts, to the ScholarOne portal. If requested, abstracts
should be sent by e-mail to the Guest Editors directly.

