
ISCApad #246

Thursday, December 13, 2018 by Chris Wellekens

7 Journals
7-1 Special issue of JSTSP on Far-Field Speech Processing in the Era of Deep Learning: Speech Enhancement, Separation and Recognition

Special Issue on

Far-Field Speech Processing in the Era of Deep Learning
Speech Enhancement, Separation and Recognition

Far-field speech processing has become an active field of research due to recent scientific advances and its widespread use in commercial products. This field of research deals with speech enhancement and recognition using one or more microphones placed at a distance from one or more speakers. Although the topic has been studied for a long time, recent successful applications (starting with Amazon Echo) and challenge activities (CHiME and REVERB) have greatly accelerated progress in this field. Concurrently, deep learning has created a new paradigm that has led to major breakthroughs both in front-end signal enhancement, extraction, and separation, and in back-end speech recognition. Furthermore, deep learning provides a means of jointly optimizing all components of far-field speech processing in an end-to-end fashion. This special issue is a forum to gather the latest findings in this very active field of research, which is of high relevance to the audio and acoustics, speech and language, and machine learning for signal processing communities. This issue is an official post-activity of the ICASSP 2018 special session 'Multi-Microphone Speech Recognition' and the 5th CHiME Speech Separation and Recognition Challenge (CHiME-5).

Topics of interest in this special issue include (but are not limited to): 

  • Multi-/single-channel speech enhancement (e.g., dereverberation, noise reduction, separation)
  • Multi-/single-channel noise robust speech recognition
  • Far-field speech processing systems
  • End-to-end integration of speech enhancement, separation, and/or recognition
Prospective authors should follow the instructions given on the IEEE JSTSP webpage, and submit their manuscript through the web submission system.

Guest Editors

  • Shinji Watanabe, Johns Hopkins University, USA
  • Shoko Araki, NTT, Japan
  • Michiel Bacchiani, Google, USA
  • Reinhold Haeb-Umbach, Paderborn University, Germany
  • Michael L. Seltzer, Facebook, USA

Important Dates

  • Submission deadline November 10, 2018
  • 1st review completed: January 15, 2019
  • Revised manuscript due: March 15, 2019
  • 2nd review completed: May 15, 2019
  • Final manuscript due: June 15, 2019
  • Publication: August 2019



 
 

7-2 IEEE J-STSP Special Issue on Acoustic Source Localization and Tracking in Dynamic Real-life Scenes

IEEE J-STSP Special Issue on

Acoustic Source Localization and Tracking
in Dynamic Real-life Scenes

 

Summary: Acoustic source localization is a well-studied topic in signal processing, but most traditional methods incorporate simplifying assumptions such as a point source, free-field propagation of the sound wave, static acoustic sources, time-invariant sensor constellations, and simple noise fields. However, these assumptions may be seriously violated in a range of emerging applications, such as audio recording with mobile devices (e.g. cell phones, extreme cameras, and robots), video conferencing on the go, and recording for 3D reproduction and virtual reality. In these applications, the environment is extremely challenging, with spatially distributed sources, reverberation, complex noise fields, multiple concurrent speakers, interferences, and time-varying source and sensor positions.

This special issue aims to present recent advances in the development of signal processing methods for the localization and tracking of acoustic sources, together with the associated theory and applications. To address the challenges raised by real-life environments, novel methods that use modern array processing, speech processing, and data inference tools have become a necessity.

As these challenges involve both audio processing and sensor arrays, this special issue is timely and relevant to researchers from both the acoustic signal processing and the array processing communities. The guest editors therefore come from both communities and include current and past chairs of the respective technical committees (Audio and Acoustic Signal Processing (AASP) and Sensor Array and Multichannel (SAM) TCs). This special issue follows successful special sessions at major conferences: 'Learning-based Sound Source Localization and Spatial Information Retrieval' (ICASSP2016), 'Speaker localization in dynamic real-life environments' (ICASSP2017), 'Acoustic Scene Analysis and Signal Enhancement using Microphone Array' (EUSIPCO2017), and 'Acoustical Signal Processing for Hearables' (EUSIPCO2017).

Prospective authors should follow the instructions given on the IEEE JSTSP webpage, and submit their manuscript through the web submission system.

Topics of interest include (but are not limited to):
  • Localization in multipath and reverberant environments
  • Tracking moving sources
  • Performance bounds on localization and tracking
  • Dynamic platforms for localization and tracking (e.g. robots, cell phones, hearables, hearing aids)
  • Binaural localization using head-related transfer functions (HRTFs)
  • Localization 'behind the walls'
  • Distributed algorithms for wireless acoustic sensor networks (WASNs) for localization
  • Simultaneous localization and mapping (SLAM)
  • Acoustic scene analysis
  • Detection and localization of acoustic events (e.g. falls, footsteps)
  • Learning-based localization (e.g. dimensionality-reduction, regression, clustering, classification, sparse representations, dictionary learning)
  • Bayesian localization and tracking algorithms (e.g. sequential Monte-Carlo, particle filters, Probability Hypothesis Density (PHD) methods)
  • Localization using ray tracing
 
List of Guest Editors:
  • Sharon Gannot
    Bar-Ilan University, Israel
  •  Martin Haardt
    Technische Universität Ilmenau, Germany
  • Walter Kellermann
    Friedrich-Alexander Universität Erlangen-Nürnberg, Germany
  • Peter Willett
    University of Connecticut, USA
Important Dates (suggested):
  • Manuscript submission: July 1, 2018 (EXTENDED)*
  • 1st review completed: September 1, 2018
  • Revised manuscript due: November 1, 2018
  • 2nd review completed: December 15, 2018
  • Final manuscript due: January 15, 2019
  • Publication: March 2019
*The manuscript submission due date to this special issue has been extended by two months, to 1 Jul 18, to allow interested authors to use the dataset from the LOCATA challenge organized by the AASP TC. The use of this dataset is not mandatory.

7-3 Topical Issue on Intelligent Methods for Textual Information Retrieval

Topical Issue on Intelligent Methods for Textual Information Retrieval
============================
Call for Papers: www.degruyter.com/page/1831
Submission deadline: 20th February 2019
============================
EDITED BY
Guest Editors:
Adrian-Gabriel Chifu, Aix-Marseille Université, France
Sébastien Fournier, Aix-Marseille Université, France
Advisory Editor:
Patrice Bellot, Aix-Marseille Université / CNRS, France
============================
DESCRIPTION
Machine learning approaches for intelligent text mining and retrieval are actively
studied by researchers in natural language processing, information retrieval and other
related fields. While supervised methods usually attain much better performance than
unsupervised methods, they also require annotated data which is not always available or
easy to obtain. Hence, we encourage the submission of supervised, unsupervised or hybrid
methods for intelligent text retrieval tasks. Methods based on alternative learning
paradigms, e.g. semi-supervised learning, weakly supervised learning, zero-shot learning,
or transfer learning, are also very welcome.
This thematic special issue covers three research areas: natural language processing,
computational linguistics and information retrieval. The submissions may address, but are
not limited to, the following topics:
  • information retrieval
  • information extraction
  • query processing
  • word sense disambiguation/discrimination
  • machine learning in NLP
  • sentiment analysis and opinion mining
  • contradiction and controversy detection
  • social media
  • summarization
  • text mining
  • text categorization and clustering
The submitted papers will undergo a peer review process before they can be accepted.
Notification of acceptance will be communicated as we progress with the review process.
============================
HOW TO SUBMIT
Before submission, authors should carefully read the Instruction for Authors:
www.degruyter.com/view/supplement/s22991093_Instruction_for_Authors.pdf
Manuscripts can be written in TeX or LaTeX (strongly recommended), using the journal's LaTeX
template. Please note that we do not accept papers in plain TeX format. Text files can also be
submitted as a standard document (.DOC), which is acceptable if submission in LaTeX is not
possible. For an initial submission, authors are strongly advised to upload their entire
manuscript, including tables and figures, as a single PDF file.

All submissions to the Topical Issue must be made electronically via the online submission
system Editorial Manager:
www.editorialmanager.com/opencs/
All manuscripts will undergo the standard peer-review procedure (single blind, at least
two independent reviews). When entering your submission via the online submission system,
please choose the article type 'TI on Information Retrieval'.

The deadline for submission is 20th February 2019, but individual papers will be reviewed
and published online on an ongoing basis.

Contributors to the Topical Issue will benefit from:
  • NO submission and publication FEES
  • indexation by Clarivate Analytics - Web of Science (ESCI) and SCOPUS
  • fair and constructive peer review provided by experts in the field
  • no space constraints
  • convenient, web-based paper submission and tracking system (Editorial Manager)
  • free language assistance for authors from non-English speaking regions
  • fast online publication upon completing the publishing process
  • better visibility due to Open Access
  • long-term preservation of the content (articles archived in Portico)
  • extensive post-publication promotion for selected papers
In case of any questions, please contact Dr. Justyna Żuk, Managing Editor, at
Justyna.Zuk@degruyteropen.com. We are looking forward to your submission.


7-4 IEEE Journal of Selected Topics in Signal Processing: special issue on Machine Learning for Audio Signal Processing

Special Issue on Data Science:

Machine Learning for Audio Signal Processing

Audio signal processing is currently undergoing a paradigm change, where data-driven machine learning is replacing hand-crafted feature design. This has led some to ask whether audio signal processing is still useful in the 'era of machine learning.' There are many challenges, new and old, including the interpretation of learned models in high dimensional spaces, problems associated with data-poor domains, adversarial examples, high computational requirements, and research driven by companies using large in-house datasets that is ultimately not reproducible.

This special issue aims to promote progress, systematization, understanding, and convergence of applying machine learning in the area of audio signal processing. Specifically, we are interested in work that demonstrates novel applications of machine learning techniques in the area of sound and music signal processing, as well as methodological considerations of merging machine learning with audio signal processing. We seek contributions in, but not limited to, the following topics:

  • audio information retrieval using machine learning;
  • audio synthesis with given contextual or musical constraints using machine learning;
  • audio source separation using machine learning;
  • audio transformations (e.g., sound morphing, style transfer) using machine learning;
  • unsupervised learning, online learning, one-shot learning, reinforcement learning, and incremental learning for audio;
  • applications/optimization of generative adversarial networks for audio;
  • cognitively inspired machine learning models of sound cognition;
  • mathematical foundations of machine learning for audio signal processing.

This call addresses audio signal processing for speech, acoustic scenes, and music.

Prospective authors should follow the instructions given on the IEEE JSTSP webpages and submit their manuscript through the web submission system.

Important Dates

  • Submission deadline: October 1, 2018
  • 1st review completed: December 1, 2018
  • Revised manuscript due: February 1, 2019
  • 2nd review completed: March 1, 2019
  • Final manuscript due: April 1, 2019
  • Publication: May 2019

 

Guest Editors


7-5 Revue TIPA: Emo-langages : Vers une approche transversale des langages dans leurs dynamiques émotionnelles et créatives

CALL FOR PAPERS

 

Revue TIPA n°35, 2019
Travaux interdisciplinaires sur la parole et le langage

https://journals.openedition.org/tipa/

 

Emo-langages : Vers une approche transversale des langages dans leurs dynamiques émotionnelles et créatives (Emo-languages: towards a transversal approach to languages in their emotional and creative dynamics)

 

Guest Editor

 

Françoise Berdal-Masuy
ECLE (Emotion and Creativity in Language Education)

 

For the past twenty years or so, emotions have been attracting the attention of researchers in many disciplines, to the point of giving rise to a new field of research, that of the 'affective sciences' (Sander, 2015). The proposals of the neuroscientist and philosopher Antonio Damasio (1995) on the links between reason and emotions have contributed to an awareness of their synergy in communication, including learning and teaching. In 2011, the linguist Christian Plantin published 'Les bonnes raisons des émotions', which stimulated new approaches in the language sciences, now attentive to highlighting the 'reasonable or reasoned' role played by emotions in discourse. The many scientific events devoted to this subject during the 2017-2018 academic year alone attest to this interest: 'Emotissage : Affects dans l'enseignement-apprentissage des langues' (4-7 July 2017, Louvain-la-Neuve, Belgium), 'Affect in language' (2 February 2018, Helsinki, Finland), 'Affects, émotions et expressivité en discours spécialisés' (2 March 2018, Lyon, France), 'Langage et émotions' (25-26 June 2018, Montpellier, France) and 'Language, Education and Emotions' (26-28 November 2018, Antwerp, Belgium), among others. The theme is bound to keep evolving.

Rehabilitated by the neurosciences, affects and their cultural dimension are thus regarded as an integral part of the construction of our representation of the world. This surge of research on emotions and affects goes well beyond methodological and theoretical concerns; it goes hand in hand with the transformations of society and the inability of existing models to respond to them. A second dimension is therefore called upon, made necessary by a complex and changing world: creativity, so that, combined with the affective dimension, it makes possible the innovation (Lison, 2018) that gives life to the future.

It is therefore the triad of 'emotions', 'creativity' and 'languages', and the study of their dynamic interaction, that is the focus of this issue.

In psychology, emotions can be described according to two main approaches: either categorically, by distinguishing primary emotions (joy, sadness, surprise, etc.) from secondary emotions linked to social relations (shame, jealousy, happiness, etc.), or dimensionally along two axes: valence, which can be positive or negative, and arousal, which can correspond to weak or strong physiological activation (Botella, 2015).

As for creativity, one must turn to Lubart's (2015) multivariate approach in differential psychology to find that, as the product of several factors (cognitive, conative, environmental and emotional), it is defined as the capacity to produce a product or a process that is both 'new' and 'adapted' to the context, with the emotional factor playing an important role in the creative process (Capron Puozzo, 2015).

Finally, the term 'language' is understood here to include both verbal and bodily language. 'And..., provided reductionism does not sweep the neurosciences along, they could well help to further clarify how the body comes to be bound up with language, or vice versa' (Crommelinck & Lebrun, 2017, 173). As for Antonio Damasio, he states: 'Our organism contains a body proper, a nervous system and a mind arising from these two elements' (Damasio, 2017, 100). It is also worth taking into account the reflections of performing artists (actors and dancers), moving beyond the divide between the sciences and the arts through bodies that convey the sharing of emotions in an extraordinary way (Pairon, 2018).

Several studies have highlighted the links between emotions and languages (Dewaele, 2010; Pavlenko, 2014; Berdal-Masuy & Pairon, 2015; Baider, Cislaru & Coffey, 2015). With this issue, we invite members of different disciplines (psychology, neuroscience, sociology, pedagogy, language sciences, the field of rehabilitation and the performing arts) to mobilize plural methodologies and to share their contributions and experience in the form of text and/or video, in order to stimulate original transversal connections between emotions, creativity and languages, for a stimulating approach to the dynamics at work between, and at the heart of, these three concepts.

Articles submitted to the TIPA journal are read and evaluated by the journal's review board. They may be written in French or in English and may include images, photos and videos (see the 'instructions to authors'). A length of 10 to 12 pages is expected for each article, i.e. approximately 35,000-48,000 characters or 6,000-8,000 words (bibliography, tables and appendices included). Authors are asked to provide an abstract in the language of the article (French or English; 700-1,300 characters), an extended abstract of about two pages (approximately 8,000-9,000 characters, in the other language: French if the article is in English and vice versa), and 5 keywords in both languages (French and English).

 

Schedule

  • 2 July 2018: first call for papers
  • 15 September 2018: second call for papers
  • 15 December 2018: submission of the article (version 1)
  • 15 February 2019: feedback from the scientific committee: acceptance, requested revisions (to version 1), or rejection
  • 15 March 2019: submission of the revised version (version 2)
  • 15 May 2019: feedback from the committee (on the final version)
  • 15 June 2019: publication

Instructions for authors

Please send 3 files in electronic form to tipa@lpl-aix.fr and francoise.masuy@uclouvain :

- one .doc file containing the title, the name(s) and affiliation(s) of the author(s)
- two anonymous files, one in .doc format and the other in .pdf.

For the transcription of utterances translated into sign language, we suggest the following convention.
For further details, please consult the 'instructions to authors' page.


References

Baider, F., Cislaru, G., & Coffey, S. 2015. Apprentissage, enseignement et affects. Le langage et l’homme. L1.

Berdal-Masuy, F., & Pairon, J. 2015. Affects et acquisition des langues. Le langage et l’homme. L2.

Botella, M. 2015. « Les émotions en psychologie : définitions et descriptions ». Dans Le langage et l’homme. L(2), 9-22.

Capron Puozzo, I. 2015. « Emotions et apprentissage des langues dans une pédagogie de la créativité ». Affects et acquisition des langues. Le langage et l’homme. L(2), 95-114.

Crommelinck, M. et Lebrun, J.-P. 2017. Un cerveau pensant : entre plasticité et stabilité. Toulouse : Eres

Damasio, A. 1995. L’erreur de Descartes – La raison des émotions. Paris : Odile Jacob.

Damasio, A. 2017. L’ordre étrange des choses. Paris : Odile Jacob.

Dewaele, J.-M. 2010. Emotions in Multiple Languages. Basingstoke : Palgrave Macmillan.

Lison, C. 2018. « Quand j’innove, qu’est-ce-que cela donne réellement ? », Conférence plénière au colloque de Lausanne Innovation et pédagogie 15-16 février 2018.

Lubart, T. 2015. Psychologie de la créativité (2 ed.). Paris : Colin.

Pairon, J. 2018. « Perception des émotions et élaboration des cultures ». Communication présentée à la 8e édition de la conférence internationale Intercultural pragmatics & Communication, Chypre, 8-10 juin 2018.

Pavlenko, A. 2014. The Bilingual Mind And What It Tells Us about Language and Thought. Cambridge : Cambridge University Press.

Plantin, C. 2011. Les bonnes raisons des émotions. Principes et méthode pour l’étude du discours émotionné. Berne : Peter Lang.

Sander, D. 2015. Le monde des émotions. Paris : Belin.

 

Scientific committee

The members of the ECLE group (Emotion and Creativity in Language Education) of AILA (International Association of Applied Linguistics)

Fabienne Baider, Université de Chypre

Françoise Berdal-Masuy, Université de Louvain

Marion Botella, Université Paris Descartes

Isabelle Capron Puozzo, Haute école pédagogique du canton de Vaud

Cristelle Cavalla, Université Sorbonne Nouvelle

Simon Coffey, King's College London

Laurence Mettewie, Université de Namur

Jacqueline Pairon, Université de Louvain

Christian Plantin, Université de Lyon


7-6 IEEE CIS Newsletter on Cognitive and Developmental Systems (open access)
Dear colleagues,

We are happy to announce the release of the latest issue of the IEEE CIS Newsletter on Cognitive and Developmental Systems (open access).
This is a biannual newsletter addressing the sciences of developmental and cognitive processes in natural and artificial organisms, from humans to robots, at the crossroads of cognitive science, developmental psychology, artificial intelligence, machine learning and neuroscience. 

It is available at: https://goo.gl/NAwBfD

Featuring dialog:
=== 'Curiosity as Driver of Extreme Specialization in Humans'
== Dialog initiated by Celeste Kidd
with responses from: Elizabeth Bonawitz, Maya Zhe Wang, Brian Sweis, Benjamin Hayden, Susan Engel, Abigail Hsiung, Shabnam Hakimi, Alison Adcock, Moritz Daum, Arjun Shankar, Tobias Hauser, Goren Gordon and Perry Zurn
== Topic: Curiosity-driven learning is probably one of the most fundamental mechanisms in human learning, and yet it is also probably one of the least understood. Broadly construed as spontaneous exploration and engagement with activities or material without any extrinsic goal (as opposed to searching for information useful for an extrinsic goal), many mysteries remain to be uncovered. What are the causal links between curiosity and learning? How does prior knowledge about a topic or an activity relate to curiosity about that topic? What is the role of curiosity in life-span development? Can human curiosity explain the apparently unique tendency of humans for extreme specialization? Conversely, how do different forms of curiosity (diversive or specific) evolve as children grow up and become adults? While early computational models of curiosity propose theoretical approaches to understanding their cognitive mechanisms, how can we understand the affective/emotional dimensions of curiosity? And how has the linguistic concept of 'curiosity' evolved in occidental culture?

Call for new dialog:
=== 'Leveraging Adaptive Games to Learn How to Help Children Learn Effectively'
== Dialog initiated by George Kachergis
== Topic: How can one efficiently achieve 'translational educational sciences' and get these principles used in real-world, large-scale educational technologies? In this dialog, George Kachergis highlights challenges related to collaborations between cognitive scientists and game developers, how to deploy real-world experiments, and how to enable scientific understanding when many variables cannot easily be controlled. Those of you interested in reacting to this dialog initiation are welcome to submit a response by December 15th, 2018. The length of each response must be between 600 and 800 words including references (contact pierre-yves.oudeyer@inria.fr).
 
Let us remind you that all issues of the newsletter are open access and available at: https://goo.gl/ZjjZNz

I wish you a stimulating read!

Best regards,

Pierre-Yves Oudeyer,
Editor of the IEEE CIS Newsletter on Cognitive and Developmental Systems
Research director, Inria
Head of Flowers project-team
Inria and Ensta ParisTech, France
http://www.pyoudeyer.com
 
and 
 
Fabien Benureau, Editorial Assistant
Cognitive NeuroRobotics Unit, 
Okinawa Institute of Science and Technology
1919-1 Tancha, Onna, Okinawa
Japan

7-7 Auditory Displays and Auditory User Interfaces

Auditory Displays and Auditory User Interfaces

Art-Design-Science-Research

 

Guest Editors:

Myounghoon Jeon, Virginia Tech, USA (myounghoonjeon@vt.edu)

Areti Andreopoulou, University of Athens, Greece (a.andreopoulou@music.uoa.gr)

Brian FG Katz, Sorbonne University, France (brian.katz@sorbonne-universite.fr)

 

Deadline for paper submission: May 1, 2019

 

This special issue concerning Auditory Displays and Auditory User Interfaces: Art-Design-Science-Research (ADSR) is motivated by the theme of the 2018 Conference of ICAD (International Community for Auditory Display), a wordplay on the term 'ADSR' (Attack-Decay-Sustain-Release), commonly used in sound-related domains. This is an open call to authors for contributions under this broad theme. We welcome technical, theoretical, and empirical papers that can contribute to any aspect (art, design, science, and research) of auditory displays and auditory user interfaces.

 

Designers and researchers have tried to make auditory displays and auditory user interfaces more useful in numerous areas, extending typical visuo-centric interactions to multimodal and multisensorial systems. Application areas include education, assistive technologies, auditory wayfinding, auditory graphs, speech interfaces, virtual and augmented reality environments, and associated perceptual, cognitive, technical, and technological research and development. Research through design, or embedded design research, has recently become more pervasive for auditory display designers. Hence, we welcome all types of 'design' activities as a necessary process in auditory display. In addition, methodical evaluation and analysis have become more prominent, leading to more robust science. In this iterative process, auditory displays can achieve improved reliability through robust and repeatable research. In some areas, we have already arrived at the science stage, while in other areas we are still exploring the possibilities.

 

Pursuing novelty encourages artists to seek the integration of different genres and transform modalities. By definition, auditory displays and sonification transform data into sound. Thanks to the characteristics of this transformation, there have been active interactions between auditory displays and various forms of art. Hence, we would also like to invite contributions addressing artistic approaches to auditory displays and auditory user interfaces.

Rather than insisting on a specific approach, we encourage a broad spectrum of diverse strategies. In other words, all of these approaches (art, design, science, and research) should be balanced and utilized flexibly depending on the circumstances.

 

Topics of interest for this special issue include, but are not limited to, the following:

  • Multimodal user interfaces (auditory + visual/haptic/tactile/olfactory/gustatory, etc.)
  • Auditory displays inspired by music or other forms of art
  • Culture-specific auditory displays
  • Speculative, aspirational prototype designs, case studies, and real-world applications
  • Aesthetics of auditory displays and auditory user interfaces
  • Auditory display design paradigm, theory, and taxonomy
  • Design methods, processes, tools, and techniques
  • Users, experiences, and contexts of auditory displays and auditory user interfaces
  • Development of new sensors, devices, or platforms for auditory displays and auditory user interfaces
  • Accessibility, inclusive design, and assistive technologies
  • Computational/algorithmic approaches
  • Human Factors, Ergonomics and Usability
  • Auditory displays with a focus on spatial/3D sound
  • Sonification in Health and Environmental Data (soniHED)
  • Sonification in the Internet of Things, Big Data, or Cybersecurity
  • Sonification in vehicles

 

Schedule:

Submission deadline:  May 1, 2019

1st round review:         August 1, 2019

2nd round review:        October 1, 2019

Publication:                 Spring 2020

 

Authors Instructions:

Submissions should be 8-12 pages long, presenting original unpublished work. Previously presented conference and workshop papers should include a minimum of 30% new content. Authors are required to follow the Author's Guide for manuscript submission to the Journal of Multimodal User Interfaces, published by Springer.

(http://www.springer.com/computer/hci/journal/12193)

 

During the submission process, please select 'S.I.: Auditory Display 2018' as the article type.

 

************************************************
Myounghoon 'Philart' Jeon, Ph.D.
Associate Professor

Grado Department of Industrial and Systems Engineering 

Virginia Tech

519D Whittemore Hall  

1185 Perry St. Blacksburg, VA 24061 

myounghoonjeon@vt.edu

 


7-8 Revue Langages

Langages n° 211 (3/2018), Fabrice Hirsch & Christelle Dodane (éds)

Organisation spatiale et temporelle des pauses en parole et en discours

http://www.revues.armand-colin.com/lettres-langues/langages/langages-ndeg-211-32018

 

Table of contents

Pages : 5-12

L'organisation spatiale et temporelle de la pause en parole et en discours

Christelle Dodane
Fabrice Hirsch

 

Pages : 13-40

Variation de la durée des pauses silencieuses : Impact de la syntaxe, du style de parole et des disfluences

Iulia Grosman
Anne Catherine Simon
Liesbeth Degand

 

Pages : 41-59

Hésitations et faux-départs dans le langage adulte et enfantin : Le rôle de la prosodie

Ester Scarpa
Christelle Dodane
Angelina Nunes de Vasconcelos

 

Pages : 61-80

Le rôle de la pause dans l'acquisition de la première syntaxe en français

Christelle Dodane
Karine Martel
Angelina Nunes de Vasconcelos

 

Pages : 81-95

Détails phonétiques dans la réalisation des pauses en Français : Etude de parole lue en langue maternelle vs en langue étrangère

Camille Fauth
Jürgen Trouvain

 

Pages : 97-109

Que révèle la pause silencieuse sur l'accessibilité cognitive d'un référent et le vieillissement langagier ?

Lucie Rousier-Vercruyssen
Anne Lacheret-Dujour
Marion Fossard

 

Pages : 111-125

Que cachent les pauses silencieuses en parole ? Une étude de cas

Fabrice Hirsch
Ivana Didirková
Camille Fauth
et al.

 

Pages : 127-141

Quand la pause devient-elle un symptôme du bégaiement ? Une étude acoustique et articulatoire

Ivana Didirková
Sébastien Le Maguer
Fabrice Hirsch



