ISCA - International Speech
Communication Association



ISCApad #244

Friday, October 12, 2018 by Chris Wellekens

7 Journals
7-1 Special issue on Multimodal Interaction in Automotive Applications, Springer Journal on Multimodal User Interfaces

Multimodal Interaction in Automotive Applications

=================================================

 

With the smartphone becoming ubiquitous, pervasive distributed computing is becoming a reality, and the internet of things increasingly finds its way into our daily lives. Users interact multimodally with their smartphones, and expectations of natural interaction have risen dramatically in recent years. Users have even started to project these expectations onto every kind of interface they encounter in their daily lives. Car manufacturers do not yet fully meet these expectations, since automotive development cycles remain much longer than those of the software industry. The clear trend, however, is that manufacturers add technology to cars to deliver on their vision and promise of a safer drive. Multiple modalities are already available in today's dashboards, including haptic controllers, touch screens, 3D gestures, voice, secondary displays, and gaze.

In fact, car manufacturers are aiming for a personal assistant with a deep understanding of the car and an ability to meet both driving-related demands and non-driving-related needs. Such an assistant can naturally answer any question about the car and help schedule service when needed. It can find the preferred gas station along the route, or better still, plan a stop and ensure the driver arrives in time for a meeting. It understands that a perfect business meal involves more than finding a sponsored restaurant: it includes unbiased reviews, availability, budget, and trouble-free parking, and it notifies all invitees of the meeting time and location. Moreover, multimodality can be a source for fatigue detection. The main goal of multimodal interaction and driver assistance systems is to ensure that the driver can focus on the primary task of driving safely.

 

This is why the biggest innovations in today's cars have happened in the way we interact with integrated devices such as the infotainment system. For instance, voice-based interaction has been shown to be less distracting than interaction with a visual-haptic interface, but it is only one piece of how we interact multimodally in today's cars, which are shifting away from the GUI as the only means of interaction. This shift also demands additional effort to establish a mental model for the user: with a plethora of available modalities requiring multiple mental maps, learnability has decreased considerably. Multimodality may also help here to reduce distraction. In this special issue we will present the challenges and opportunities of multimodal interaction in reducing cognitive load and increasing learnability, as well as current research that has the potential to be employed in tomorrow's cars.

In this special issue, we invite researchers, scientists, and developers to submit contributions that are original, unpublished, and not under review at any other journal, magazine, or conference. We expect at least 30% novel content. We are soliciting original research related to multimodal smart and interactive media technologies in areas including, but not limited to, the following:

* In-vehicle multimodal interaction concepts

* Multimodal Head-Up Displays (HUDs) and Augmented Reality (AR) concepts

* Reducing driver distraction and cognitive load and demand with multimodal interaction

* (pro-active) in-car personal assistant systems

* Driver assistance systems

* Information access (search, browsing etc) in the car

* Interfaces for navigation

* Text input and output while driving

* Biometrics and physiological sensors as a user interface component

* Multimodal affective intelligent interfaces

* Multimodal automotive user-interface frameworks and toolkits

* Naturalistic/field studies of multimodal automotive user interfaces

* Multimodal automotive user-interface standards

* Detecting and estimating user intentions employing multiple modalities

 

Guest Editors

=============

Dirk Schnelle-Walka, Harman International, Connected Car Division, Germany

Phil Cohen, Voicebox, USA

Bastian Pfleging, Ludwig-Maximilians-Universität München, Germany

 

Submission Instructions

=======================

 

1-page abstract submission: Feb 5, 2018

Invitation for full submission: March 15, 2018

Full Submission: April 28, 2018

Notification about acceptance: June 15, 2018

Final article submission: July 15, 2018

Tentative Publication: ~ Sept 2018

 

Companion website: https://sites.google.com/view/multimodalautomotive/

 

Authors are requested to follow the instructions for manuscript submission to the Journal on Multimodal User Interfaces (http://www.springer.com/computer/hci/journal/12193) and to submit manuscripts at the following link: https://easychair.org/conferences/?conf=mmautomotive2018.


7-2 Special issue of JSTSP on Far-Field Speech Processing in the Era of Deep Learning: Speech Enhancement, Separation and Recognition

Special Issue on

Far-Field Speech Processing in the Era of Deep Learning
Speech Enhancement, Separation and Recognition

Far-field speech processing has become an active field of research due to recent scientific advances and its widespread use in commercial products. This field deals with speech enhancement and recognition using one or more microphones placed at a distance from one or more speakers. Although the topic has been studied for a long time, recent successful applications (starting with Amazon Echo) and challenge activities (CHiME and REVERB) have greatly accelerated progress in this field. Concurrently, deep learning has created a new paradigm that has led to major breakthroughs both in front-end signal enhancement, extraction, and separation, and in back-end speech recognition. Furthermore, deep learning provides a means of jointly optimizing all components of far-field speech processing in an end-to-end fashion. This special issue is a forum to gather the latest findings in this very active field of research, which is of high relevance to the audio and acoustics, speech and language, and machine learning for signal processing communities. This issue is an official post-activity of the ICASSP 2018 special session 'Multi-Microphone Speech Recognition' and the 5th CHiME Speech Separation and Recognition Challenge (CHiME-5).
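The front-end enhancement mentioned above can be illustrated with the simplest multi-microphone technique, delay-and-sum beamforming: time-align the channels and average them so the target signal adds coherently while uncorrelated noise averages out. A minimal sketch (NumPy; the two-channel tone and integer-sample delays are illustrative assumptions, not from the call):

```python
import numpy as np

def delay_and_sum(signals, delays, fs):
    """Time-align each microphone channel by its delay (seconds) and average.

    signals: (n_mics, n_samples) array; delays: per-microphone delays in
    seconds. Uses circular shifts (np.roll) for simplicity.
    """
    n_mics, n_samples = signals.shape
    out = np.zeros(n_samples)
    for sig, d in zip(signals, delays):
        out += np.roll(sig, -int(round(d * fs)))   # undo the propagation delay
    return out / n_mics

# Toy demo: the same 1 kHz tone reaching two microphones 5 samples apart.
fs = 16000
t = np.arange(1024) / fs
clean = np.sin(2 * np.pi * 1000 * t)
mics = np.stack([clean, np.roll(clean, 5)])
enhanced = delay_and_sum(mics, [0.0, 5 / fs], fs)  # aligns and averages
```

Real far-field scenes add reverberation, noise, and unknown delays, which is exactly where the learned front-ends and end-to-end systems solicited by this issue come in.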

Topics of interest in this special issue include (but are not limited to): 

  • Multi-/single-channel speech enhancement (e.g., dereverberation, noise reduction, separation)
  • Multi-/single-channel noise robust speech recognition
  • Far-field speech processing systems
  • End-to-end integration of speech enhancement, separation, and/or recognition
Prospective authors should follow the instructions given on the IEEE JSTSP webpage, and submit their manuscript through the web submission system.

Guest Editors

  • Shinji Watanabe, Johns Hopkins University, USA
  • Shoko Araki, NTT, Japan
  • Michiel Bacchiani, Google, USA
  • Reinhold Haeb-Umbach, Paderborn University, Germany
  • Michael L. Seltzer, Facebook, USA

Important Dates

  • Submission deadline November 10, 2018
  • 1st review completed: January 15, 2019
  • Revised manuscript due: March 15, 2019
  • 2nd review completed: May 15, 2019
  • Final manuscript due: June 15, 2019
  • Publication: August 2019



 
 

7-3 IEEE J-STSP Special Issue on Acoustic Source Localization and Tracking in Dynamic Real-life Scenes

IEEE J-STSP Special Issue on

Acoustic Source Localization and Tracking
in Dynamic Real-life Scenes

 

Summary: Acoustic source localization is a well-studied topic in signal processing, but most traditional methods incorporate simplifying assumptions such as a point source, free-field propagation of the sound wave, static acoustic sources, time-invariant sensor constellations, and simple noise fields. However, these assumptions may be seriously violated in a range of emerging applications, such as audio recording with mobile devices (e.g. cell phones, extreme cameras, and robots), video conferencing on the go, and recording for 3D reproduction and virtual reality. In these applications, the environment is extremely challenging, with spatially distributed sources, reverberation, complex noise fields, multiple concurrent speakers, interferences, and time-varying source and sensor positions.
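As a minimal illustration of the classical point-source setting that these traditional methods start from, the sketch below estimates a time difference of arrival between two channels with GCC-PHAT, a standard localization front-end (NumPy; the white-noise test signal, the 7-sample delay, and the use of circular correlation for brevity are all illustrative assumptions):

```python
import numpy as np

def gcc_phat(x, y, fs):
    """Estimate the time difference of arrival of x relative to y (seconds).

    The PHAT weighting whitens the cross-spectrum so that only phase
    (i.e. delay) information drives the correlation peak.
    """
    n = len(x)
    X, Y = np.fft.rfft(x, n), np.fft.rfft(y, n)
    R = X * np.conj(Y)
    R /= np.abs(R) + 1e-12                 # PHAT: discard magnitude, keep phase
    cc = np.fft.irfft(R, n)                # circular cross-correlation
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    lag = int(np.argmax(np.abs(cc))) - max_shift
    return lag / fs

fs = 16000
rng = np.random.default_rng(0)
src = rng.standard_normal(4096)
delayed = np.roll(src, 7)                  # second channel lags by 7 samples
tdoa = gcc_phat(delayed, src, fs)          # recovers 7 / fs
```

Reverberation, interference, and moving sources break the pure-delay assumption behind this estimator, which is precisely the gap the methods sought by this special issue aim to close.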

The proposed special issue aims to present recent advances in the development of signal processing methods for the localization and tracking of acoustic sources, together with the associated theory and applications. To address the challenges raised by real-life environments, novel methods that use modern array processing, speech processing, and data inference tools become a necessity.

As these challenges involve both audio processing and sensor arrays, this issue is timely and relevant to researchers from both the acoustic signal processing and array processing communities. The guest editors therefore come from both communities, comprising current and past chairs of the respective technical committees (Audio and Acoustic Signal Processing (AASP) and Sensor Array and Multichannel (SAM)). This special issue follows successful special sessions at major conferences: 'Learning-based Sound Source Localization and Spatial Information Retrieval' (ICASSP 2016), 'Speaker localization in dynamic real-life environments' (ICASSP 2017), 'Acoustic Scene Analysis and Signal Enhancement using Microphone Array' (EUSIPCO 2017), and 'Acoustical Signal Processing for Hearables' (EUSIPCO 2017).

Prospective authors should follow the instructions given on the IEEE JSTSP webpage, and submit their manuscript through the web submission system.

Topics of interest include (but are not limited to):
  • Localization in multipath and reverberant environments
  • Tracking moving sources
  • Performance bounds on localization and tracking
  • Dynamic platforms for localization and tracking (e.g. robots, cell phones, hearables, hearing aids)
  • Binaural localization using head-related transfer functions (HRTFs)
  • Localization 'behind the walls'
  • Distributed algorithms for wireless acoustic sensor networks (WASNs) for localization
  • Simultaneous localization and mapping (SLAM)
  • Acoustic scene analysis
  • Detection and localization of acoustic events (e.g. falls, footsteps)
  • Learning-based localization (e.g. dimensionality-reduction, regression, clustering, classification, sparse representations, dictionary learning)
  • Bayesian localization and tracking algorithms (e.g. sequential Monte-Carlo, particle filters, Probability Hypothesis Density (PHD) methods)
  • Localization using ray tracing
 
List of Guest Editors:
  • Sharon Gannot
    Bar-Ilan University, Israel
  •  Martin Haardt
    Technische Universität Ilmenau, Germany
  • Walter Kellermann
    Friedrich-Alexander Universität Erlangen-Nürnberg, Germany
  • Peter Willett
    University of Connecticut, USA
Important Dates (suggested):
  • Manuscript submission: July 1, 2018 (EXTENDED)*
  • 1st review completed: September 1, 2018
  • Revised manuscript due: November 1, 2018
  • 2nd review completed: December 15, 2018
  • Final manuscript due: January 15, 2019
  • Publication: March 2019
*The manuscript submission deadline for this special issue has been extended by two months, to July 1, 2018, to allow interested authors to use the dataset from the LOCATA challenge organized by the AASP TC. Use of this dataset is not mandatory.

7-4 Journal of Selected Topics in Signal Processing, special issue on Data Science

Special Issue on Data Science:

Machine Learning for Audio Signal Processing

Audio signal processing is currently undergoing a paradigm change, where data-driven machine learning is replacing hand-crafted feature design. This has led some to ask whether audio signal processing is still useful in the 'era of machine learning.' There are many challenges, new and old, including the interpretation of learned models in high dimensional spaces, problems associated with data-poor domains, adversarial examples, high computational requirements, and research driven by companies using large in-house datasets that is ultimately not reproducible.
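As a concrete instance of the hand-crafted feature design that learned front-ends are displacing, the sketch below computes log-mel energies for a single frame, a classic engineered audio feature (NumPy; the 40-filter, 512-point FFT, 16 kHz parameters are illustrative assumptions):

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, fs):
    """Triangular mel filters over the positive-frequency FFT bins."""
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(fs / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / fs).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        lo, ctr, hi = bins[i], bins[i + 1], bins[i + 2]
        for b in range(lo, ctr):               # rising slope
            fb[i, b] = (b - lo) / max(ctr - lo, 1)
        for b in range(ctr, hi):               # falling slope
            fb[i, b] = (hi - b) / max(hi - ctr, 1)
    return fb

# Log-mel energies of one windowed frame of a 440 Hz tone.
fs, n_fft = 16000, 512
frame = np.sin(2 * np.pi * 440 * np.arange(n_fft) / fs) * np.hanning(n_fft)
power = np.abs(np.fft.rfft(frame)) ** 2
logmel = np.log(mel_filterbank(40, n_fft, fs) @ power + 1e-10)
```

Data-driven systems increasingly learn such front-ends from raw audio instead of fixing them by hand, which is one of the methodological shifts this issue examines.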

This special issue aims to promote progress, systematization, understanding, and convergence of applying machine learning in the area of audio signal processing. Specifically, we are interested in work that demonstrates novel applications of machine learning techniques in the area of sound and music signal processing, as well as methodological considerations of merging machine learning with audio signal processing. We seek contributions in, but not limited to, the following topics:

  • audio information retrieval using machine learning;
  • audio synthesis with given contextual or musical constraints using machine learning;
  • audio source separation using machine learning;
  • audio transformations (e.g., sound morphing, style transfer) using machine learning;
  • unsupervised learning, online learning, one-shot learning, reinforcement learning, and incremental learning for audio;
  • applications/optimization of generative adversarial networks for audio;
  • cognitively inspired machine learning models of sound cognition;
  • mathematical foundations of machine learning for audio signal processing.

This call addresses audio signal processing for speech, acoustic scenes, and music.

Prospective authors should follow the instructions given on the IEEE JSTSP webpage and submit their manuscript through the web submission system.

 

 

Important Dates

 

  • Submission deadline: October 1, 2018
  • 1st review completed: December 1, 2018
  • Revised manuscript due: February 1, 2019
  • 2nd review completed: March 1, 2019
  • Final manuscript due: April 1, 2019
  • Publication: May 2019

Guest Editors



7-5 Topical Issue on Intelligent Methods for Textual Information Retrieval

Topical Issue on Intelligent Methods for Textual Information Retrieval
============================
Call for Papers: www.degruyter.com/page/1831
Submission deadline: 20th February 2019
============================
EDITED BY
Guest Editors:
Adrian-Gabriel Chifu, Aix-Marseille Université, France
Sébastien Fournier, Aix-Marseille Université, France
Advisory Editor:
Patrice Bellot, Aix-Marseille Université / CNRS, France
============================
DESCRIPTION
Machine learning approaches for intelligent text mining and retrieval are actively studied by researchers in natural language processing, information retrieval and other related fields. While supervised methods usually attain much better performance than unsupervised methods, they also require annotated data, which is not always available or easy to obtain. Hence, we encourage the submission of supervised, unsupervised or hybrid methods for intelligent text retrieval tasks. Methods studying alternative learning paradigms, e.g. semi-supervised learning, weakly-supervised learning, zero-shot learning, but also transfer learning, are very welcome as well.
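As a minimal baseline for the unsupervised text-retrieval methods solicited here, the sketch below ranks a toy corpus against a query with TF-IDF weighting and cosine similarity (pure Python; the corpus, query, tokenization, and smoothed IDF formula are illustrative assumptions):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Build sparse TF-IDF vectors (term -> weight) for a small corpus."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter(term for toks in tokenized for term in set(toks))
    n = len(docs)
    vectors = []
    for toks in tokenized:
        tf = Counter(toks)
        vectors.append({t: (c / len(toks)) * math.log((1 + n) / (1 + df[t]))
                        for t, c in tf.items()})
    return vectors

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

docs = [
    "machine learning for information retrieval",
    "opinion mining and sentiment analysis in social media",
    "word sense disambiguation with supervised learning",
]
query = "sentiment analysis of social media posts"
vecs = tfidf_vectors(docs + [query])              # vectorize corpus + query together
scores = [cosine(vecs[-1], v) for v in vecs[:-1]]
best = max(range(len(scores)), key=scores.__getitem__)  # most similar document
```

Supervised and neural methods of the kind this issue targets are typically evaluated against exactly such bag-of-words baselines.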
This thematic special issue covers three research areas: natural language processing,
computational linguistics and information retrieval. The submissions may address, but are
not limited to, the following topics:
  • information retrieval
  • information extraction
  • query processing
  • word sense disambiguation/discrimination
  • machine learning in NLP
  • sentiment analysis and opinion mining
  • contradiction and controversy detection
  • social media
  • summarization
  • text mining
  • text categorization and clustering
Submitted papers will undergo a peer review process before they can be accepted.
Notification of acceptance will be communicated as we progress with the review process.
============================
HOW TO SUBMIT
Before submission authors should carefully read the Instruction for Authors:
www.degruyter.com/view/supplement/s22991093_Instruction_for_Authors.pdf
Manuscripts can be written in TeX or LaTeX (strongly recommended), using the journal's LaTeX
template. Please note that we do not accept papers in plain TeX format. Text files can also be
submitted as a standard document (.doc), which is acceptable if submission in LaTeX is not
possible. For an initial submission, authors are strongly advised to upload their entire
manuscript, including tables and figures, as a single PDF file.

All submissions to the Topical Issue must be made electronically via online submission
system Editorial Manager:
www.editorialmanager.com/opencs/
All manuscripts will undergo the standard peer-review procedure (single blind, at least
two independent reviews). When entering your submission via the online submission system,
please choose the article type 'TI on Information Retrieval'.

The deadline for submission is 20th February 2019, but individual papers will be reviewed
and published online on an ongoing basis.

Contributors to the Topical Issue will benefit from:
  • NO submission and publication FEES
  • indexation by Clarivate Analytics - Web of Science (ESCI) and SCOPUS
  • fair and constructive peer review provided by experts in the field
  • no space constraints
  • convenient, web-based paper submission and tracking system (Editorial Manager)
  • free language assistance for authors from non-English speaking regions
  • fast online publication upon completing the publishing process
  • better visibility due to Open Access
  • long-term preservation of the content (articles archived in Portico)
  • extensive post-publication promotion for selected papers
In case of any questions please contact Dr. Justyna Żuk, Managing Editor,
Justyna.Zuk@degruyteropen.com. We are looking forward to your submission.



7-7 Revue TIPA: Emo-langages : Vers une approche transversale des langages dans leurs dynamiques émotionnelles et créatives

CALL FOR PAPERS

 

Revue TIPA (Travaux interdisciplinaires sur la parole et le langage), no. 35, 2019

https://journals.openedition.org/tipa/

 

Emo-langages : Vers une approche transversale des langages dans leurs dynamiques émotionnelles et créatives (Emo-languages: towards a cross-cutting approach to languages in their emotional and creative dynamics)

 

Guest editor

 

Françoise Berdal-Masuy
ECLE (Emotion and Creativity in Language Education)

 

For the past twenty years or so, emotions have attracted the attention of researchers across many disciplines, to the point of giving rise to a new field of research, the 'affective sciences' (Sander, 2015). The proposals of the neuroscientist and philosopher Antonio Damasio (1995) on the links between reason and emotions have contributed to an awareness of their synergy in communication, including learning and teaching. In 2011, the linguist Christian Plantin published 'Les bonnes raisons des émotions', which stimulated new approaches in the language sciences, now attentive to highlighting the 'reasonable or reasoned' role played by emotions in discourse. The many scientific events devoted to this subject during the 2017-2018 academic year alone attest to this interest: 'Emotissage : Affects dans l'enseignement-apprentissage des langues', July 4-7, 2017, Louvain-la-Neuve (Belgium); 'Affect in language', February 2, 2018, Helsinki (Finland); 'Affects, émotions et expressivité en discours spécialisés', March 2, 2018, Lyon (France); 'Langage et émotions', June 25-26, 2018, Montpellier (France); and 'Language, Education and Emotions', November 26-28, 2018, Antwerp (Belgium) (a non-exhaustive list). The theme is bound to keep evolving.

Rehabilitated by neuroscience, affects and their cultural dimension are thus regarded as an integral part of how we construct our representation of the world. This surge of research on emotions and affects goes well beyond methodology and theory; it goes hand in hand with the transformations of society and the inability of existing models to respond to them. A second dimension is therefore called upon, made necessary by a complex and changing world: creativity, so that, combined with the affective dimension, it makes possible the innovation (Lison, 2018) that gives life to the future.

It is therefore the triad of 'emotions', 'creativity' and 'languages', and the study of their dynamic interaction, that is the subject of this issue.

In psychology, emotions can be described according to two main approaches: either categorically, distinguishing primary emotions (joy, sadness, surprise, etc.) from secondary emotions linked to social relations (shame, jealousy, happiness, etc.), or dimensionally, along two axes: valence, which can be positive or negative, and arousal, which can correspond to weak or strong physiological activation (Botella, 2015).

As for creativity, one must turn to Lubart's (2015) multivariate approach in differential psychology to find that, as the product of several factors (cognitive, conative, environmental and emotional), it is defined as the capacity to produce a product or process that is both 'new' and 'adapted' to the context, with the emotional factor playing an important role in the creative process (Capron Puozzo, 2015).

Finally, the term 'language' here covers both verbal and body language. 'And..., provided that reductionism does not carry the neurosciences away, they could well help to further explain how the body comes to be bound to language, or vice versa' (Crommelinck & Lebrun, 2017, 173). As for Antonio Damasio, he states: 'Our organism contains a body proper, a nervous system, and a mind arising from these two elements' (Damasio, 2017, 100). It is also worth taking into account the reflections of performing artists (actors and dancers), moving beyond the divide between the sciences and the arts through bodies that convey the sharing of emotions extraordinarily well (Pairon, 2018).

Several studies have highlighted the links between emotions and languages (Dewaele, 2010; Pavlenko, 2014; Berdal-Masuy & Pairon, 2015; Baider, Cislaru & Coffey, 2015). With this issue, we invite members of different disciplines (psychology, neuroscience, sociology, pedagogy, language sciences, the field of rehabilitation, and the performing arts) to mobilize plural methodologies and to share their contributions and experience in the form of text and/or video, in order to stimulate original cross-cutting connections between emotions, creativity and languages, for a stimulating approach to the dynamics at work between, and at the heart of, these three concepts.

Articles submitted to the TIPA journal are read and evaluated by the journal's review committee. They may be written in French or English and may include images, photos and videos (see the 'instructions for authors'). A length of 10 to 12 pages is expected for each article, i.e. about 35,000-48,000 characters or 6,000-8,000 words (bibliography, tables and appendices included). Authors are asked to provide an abstract in the language of the article (French or English; 700-1,300 characters) as well as a long abstract of about two pages (about 8,000-9,000 characters) in the other language (French if the article is in English and vice versa), and 5 keywords in both languages (French and English).

 

Schedule

·         July 2, 2018: first call for papers

·         September 15, 2018: second call for papers

·         December 15, 2018: article submission (version 1)

·         February 15, 2019: feedback from the scientific committee: acceptance, requested revisions (to version 1), or rejection

·         March 15, 2019: submission of the revised version (version 2)

·         May 15, 2019: committee feedback (on the final version)

·         June 15, 2019: publication

Instructions for authors

Please send 3 files electronically to: tipa@lpl-aix.fr and francoise.masuy@uclouvain :

- one .doc file containing the title and the name(s) and affiliation(s) of the author(s)
- two anonymous files, one in .doc format and the other in .pdf.

For transcribing utterances translated into sign language, we suggest the following convention.
For further details, please consult the 'instructions for authors' page.


References

Baider, F., Cislaru, G., & Coffey, S. 2015. Apprentissage, enseignement et affects. Le langage et l’homme. L1.

Berdal-Masuy, F., & Pairon, J. 2015. Affects et acquisition des langues. Le langage et l’homme. L2.

Botella, M. 2015. « Les émotions en psychologie : définitions et descriptions ». Dans Le langage et l’homme. L(2), 9-22.

Capron Puozzo, I. 2015. « Emotions et apprentissage des langues dans une pédagogie de la créativité ». Affects et acquisition des langues. Le langage et l’homme. L(2), 95-114.

Crommelinck, M. et Lebrun, J.-P. 2017. Un cerveau pensant : entre plasticité et stabilité. Toulouse : Eres

Damasio, A. 1995. L’erreur de Descartes – La raison des émotions. Paris : Odile Jacob.

Damasio, A. 2017. L’ordre étrange des choses. Paris : Odile Jacob.

Dewaele, J.-M. 2010. Emotions in Multiple Languages. Basingstoke : Palgrave Macmillan.

Lison, C. 2018. « Quand j’innove, qu’est-ce-que cela donne réellement ? », Conférence plénière au colloque de Lausanne Innovation et pédagogie 15-16 février 2018.

Lubart, T. 2015. Psychologie de la créativité (2 ed.). Paris : Colin.

Pairon, J. 2018. « Perception des émotions et élaboration des cultures ». Communication présentée à la 8e édition de la conférence internationale Intercultural pragmatics & Communication, Chypre, 8-10 juin 2018.

Pavlenko, A. 2014. The Bilingual Mind And What It Tells Us about Language and Thought. Cambridge : Cambridge University Press.

Plantin, C. 2011. Les bonnes raisons des émotions. Principes et méthode pour l’étude du discours émotionné. Berne : Peter Lang.

Sander, D. 2015. Le monde des émotions. Paris : Belin.

 

Scientific committee

The members of the ECLE group (Emotion and Creativity in Language Education) of AILA (the International Association of Applied Linguistics)

Fabienne Baider, Université de Chypre

Françoise Berdal-Masuy, Université de Louvain

Marion Botella, Université Paris Descartes

Isabelle Capron Puozzo, Haute école pédagogique du canton de Vaud

Cristelle Cavalla, Université Sorbonne Nouvelle

Simon Coffey, King's College London

Laurence Mettewie, Université de Namur

Jacqueline Pairon, Université de Louvain

Christian Plantin, Université de Lyon



