
ISCApad #240

Tuesday, June 12, 2018 by Chris Wellekens

7 Journals
7-1 La langue des signes, c'est comme ça. Revue TIPA

Revue TIPA n°34, 2018
 
Travaux interdisciplinaires sur la parole et le langage

 http://tipa.revues.org/



LA LANGUE DES SIGNES, C'EST COMME ÇA



 

 

Sign language: state of the art, description, formalization, uses

 

Guest editor
Mélanie Hamm,

Laboratoire Parole et Langage, Aix-Marseille Université


 

'La langue des signes, c'est comme ça' refers to Yves Delaporte's book Les sourds, c'est comme ça (2002), which describes the world of the deaf, French Sign Language, and its specific features. One particularity of French Sign Language is the specific sign meaning COMME ÇA ('like that')[1], an expression frequently used by deaf signers that conveys a certain respectful, non-judgmental distance from what surrounds us. It is with this same outlook, close to plain and precise scientific probity, that we will attempt to approach signed languages.

 

Even though the linguistics of signed languages in general, and of French Sign Language in particular, has advanced, notably since the work of Christian Cuxac (1983), Harlan Lane (1991) and Susan D. Fischer (2008), it remains an underdeveloped field. Moreover, French Sign Language is an endangered language, threatened with extinction (Moseley, 2010; Unesco, 2011). But what is this language? How can it be defined? What are its 'mechanisms'? What is its structure? How should it be 'considered', from which angle and with which approaches? This silent language challenges a number of linguistic postulates, such as the universality of the phoneme, and raises many questions for which there are as yet no satisfactory answers. In what ways is it similar to or different from spoken languages? Does it belong only to deaf speakers? Should it be studied, shared, preserved and documented like any language that is part of the intangible heritage of humanity (Unesco, 2003)? How should it be taught, and with which means? What does history tell us about it? What future is there for signed languages? What do those most directly concerned say? A set of open and very contemporary questions.

 

Issue 34 of the journal Travaux Interdisciplinaires sur la Parole et le Langage aims to take stock of current research and of the various studies devoted to this singular language, while avoiding 'locking' it into a single discipline. We are looking for previously unpublished articles on sign languages, and on French Sign Language in particular. They may offer a description, a formalization, or an overview of the uses of signed languages. Comparative approaches across different sign languages, reflections on variants and variation, sociolinguistic, semantic and structural considerations, and analyses of the etymology of signs may also be the subject of articles. In addition, space will be reserved for possible testimonies from deaf signers.

 

Articles submitted to TIPA are read and evaluated by the journal's review committee. They may be written in French or in English and may include images, photos and videos (see the author guidelines at https://tipa.revues.org/222). Each article should be between 10 and 20 pages long, i.e. approximately 35,000 to 80,000 characters or 6,000 to 12,000 words. The recommended average length for each contribution is about 15 pages. Authors are asked to provide an abstract in the language of the article (French or English; 120 to 200 words), a long abstract of about two pages in the other language (French if the article is in English and vice versa), and 5 keywords in both languages (French and English). Articles must be in .doc (Word) format and sent to the TIPA journal electronically at the following addresses: tipa@lpl-aix.fr and melanie.hamm@lpl-aix.fr.

                                                                       

 

References

COMPANYS, Monica (2007). Prêt à signer. Guide de conversation en LSF. Angers : Éditions Monica Companys.

CUXAC, Christian (1983). Le langage des sourds. Paris : Payot.

DELAPORTE, Yves (2002). Les sourds, c'est comme ça. Paris : Maison des sciences de l'homme.

FISCHER, Susan D. (2008). Sign Languages East and West. In : Piet Van Sterkenburg, Unity and Diversity of Languages. Philadelphia/Amsterdam : John Benjamins Publishing Company.

LANE, Harlan (1991). Quand l'esprit entend. Histoire des sourds-muets. Traduction de l'américain par Jacqueline Henry. Paris : Odile Jacob.

MOSELEY, Christopher (2010). Atlas des langues en danger dans le monde. Paris : Unesco.

UNESCO (2011). Nouvelles initiatives de l'UNESCO en matière de diversité linguistique : http://fr.unesco.org/news/nouvelles-initiatives-unesco-matiere-diversite-linguistique.

UNESCO (2003). Convention de 2003 pour la sauvegarde du patrimoine culturel immatériel : http://www.unesco.org/culture/ich/doc/src/18440-FR.pdf.

           

 

Schedule

 

April 2017: call for papers

September 2017: submission of the article (version 1)

October-November 2017: feedback from the review committee; acceptance, requested revisions (to version 1), or rejection

End of January 2018: submission of the revised version (version 2)

February 2018: feedback from the review committee (on version 2)

March-June 2018: submission of the final version

May-June 2018: publication

 

Instructions for authors

 

Please send 3 files electronically to tipa@lpl-aix.fr and melanie.hamm@lpl-aix.fr:

- one .doc file containing the title, the name(s) and the affiliation(s) of the author(s)

- two anonymized files, one in .doc format and one in .pdf format

For further details, authors can follow this link: http://tipa.revues.org/222

 

 

 


[1] See, for example, image 421, page 334, in Companys (2007), or the photo above.

 

                  


7-2 Speech and Language Processing for Behavioral and Mental Health Research and Applications, Computer Speech and Language (CSL)

Call for Papers

Special Issue of COMPUTER SPEECH AND LANGUAGE

Speech and Language Processing for Behavioral and Mental Health Research and Applications

The promise of speech and language processing for behavioral and mental health research and clinical applications is profound. Advances in all aspects of speech and language processing and their integration, ranging from speech activity detection, speaker diarization, and speech recognition to spoken language understanding and multimodal paralinguistics, offer novel tools for scientific discovery as well as innovative approaches to clinical screening, diagnostics, and intervention support. Owing to this potential for widespread impact, research sites across all continents are actively engaged in this societally important research area, tackling a rich set of challenges, including the inherently multilingual and multicultural underpinnings of behavioral manifestations. The objective of this special issue is to bring together and share these advances in order to shape the future of the field. It will focus on technical issues and applications of speech and language processing for behavioral and mental health. Original, previously unpublished submissions are encouraged within (but not limited to) the following scope:

  • Analysis of mental and behavioral states in spoken and written language 
  • Technological support for ecologically- and clinically-valid data collection and pre-processing
  • Robust automatic recognition of behavioral attributes and mental states 
  • Cross-cultural, cross-linguistic, and cross-domain mathematical approaches and applications
  • Subjectivity modelling (mental state perception and behavioral annotation)
  • Multimodal paralinguistics (e.g., voice, face, gesture) 
  • Neural mechanisms, physiological responses, and their interplay with expressed behaviors
  • Databases and resources to support study of speech and language processing for mental health 
  • Applications: scientific mechanisms, clinical screening, diagnostics, & therapy/treatment support 
  • Example Domains: Autism spectrum disorders, addiction, family and relationship studies, major depressive disorders, suicidality, Alzheimer’s disease

Important Dates

  • Manuscript Due October 31, 2017
  • First Round of Reviews January 15, 2018
  • Second Round of Reviews     April 15, 2018
  • Publication Date June 2018


Guest Editors

  • Chi-Chun Lee, National Tsing Hua University, Taiwan, cclee@ee.nthu.edu.tw
  • Julien Epps, University of New South Wales, Australia, j.epps@unsw.edu.au
  • Daniel Bone, University of Southern California, USA, dbone@usc.edu
  • Ming Li, Sun Yat-sen University, China, liming46@mail.sysu.edu.cn
  • Shrikanth Narayanan, University of Southern California, USA, shri@sipi.usc.edu

 

Submission Procedure

Authors should follow the Elsevier Computer Speech and Language manuscript format described at the journal site https://www.elsevier.com/journals/computer-speech-and-language/0885-2308/guidefor-authors#20000. Prospective authors should submit an electronic copy of their complete manuscript through the journal's Manuscript Tracking System at http://www.evise.com/evise/jrnl/CSL. When submitting their papers, authors must select VSI:SLP-Behavior-mHealth as the article type.

 


7-3 Special issue on Biosignal-Based Spoken Communication in IEEE/ACM Transactions on Audio, Speech, and Language Processing
As guest editors of the special issue on Biosignal-Based Spoken Communication in IEEE/ACM Transactions on Audio, Speech, and Language Processing, we are happy to announce that 13 papers, along with a survey article, have just been published on IEEE Xplore: http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=6570655 (Issue 12, Dec. 2017).
 
Best regards 
 
Tanja Schultz, Cognitive Systems Lab, Faculty of Computer Science and Mathematics, University of Bremen, Bremen, Germany
Thomas Hueber, GIPSA-lab, CNRS/Grenoble Alpes University, Grenoble, France
Dean J. Krusienski, ASPEN Lab, Biomedical Engineering Institute, Old Dominion University, Norfolk, VA, USA
Jonathan S. Brumberg, Speech and Applied Neuroscience Lab, Speech-Language-Hearing Department, University of Kansas, Lawrence, KS, USA

 

7-4 Special issue on Multimodal Interaction in Automotive Applications, Springer Journal on Multimodal User Interfaces

Multimodal Interaction in Automotive Applications

=================================================

 

With the smartphone becoming ubiquitous, pervasive distributed computing is becoming a reality, and the internet of things increasingly finds its way into our daily lives. Users interact multimodally with their smartphones, and expectations of natural interaction have risen dramatically in the past years. Moreover, users have started to project these expectations onto all kinds of interfaces they encounter in daily life. These expectations are not yet fully met by car manufacturers, since automotive development cycles are still much longer than those in the software industry. The clear trend, however, is that manufacturers add technology to cars to deliver on their vision and promise of a safer drive. Multiple modalities are already available in today's dashboards, including haptic controllers, touch screens, 3D gestures, voice, secondary displays, and gaze.

In fact, car manufacturers are aiming for a personal assistant with a deep understanding of the car and the ability to meet both driving-related demands and non-driving-related needs. For instance, such an assistant can naturally answer any question about the car and help schedule service when needed. It can find the preferred gas station along the route or, even better, plan a stop while ensuring the driver still arrives in time for a meeting. It understands that a perfect business meal involves more than finding a sponsored restaurant: it also considers unbiased reviews, availability, budget, and trouble-free parking, and notifies all invitees of the meeting time and location. Moreover, multimodality can serve as a basis for fatigue detection. The main goal of multimodal interaction and driver assistance systems is to ensure that the driver can focus on the primary task of driving safely.

 

This is why the biggest innovations in today's cars have happened in the way we interact with integrated devices such as the infotainment system. For instance, voice-based interaction has been shown to be less distracting than interaction with a visual-haptic interface, but it is only one piece of how we interact multimodally in today's cars, which are shifting away from the GUI as the sole means of interaction. This also demands additional effort from users to establish a mental model: with a plethora of available modalities, each requiring its own mental map, learnability has decreased considerably. Multimodality may also help to reduce distraction here. In this special issue we will present the challenges and opportunities of multimodal interaction for reducing cognitive load and increasing learnability, as well as current research that has the potential to be employed in tomorrow's cars.

For this special issue, we invite researchers, scientists, and developers to submit contributions that are original, unpublished, and not under submission to any other journal, magazine, or conference. We expect at least 30% novel content. We are soliciting original research related to multimodal smart and interactive media technologies in areas including, but not limited to, the following:

* In-vehicle multimodal interaction concepts

* Multimodal Head-Up Displays (HUDs) and Augmented Reality (AR) concepts

* Reducing driver distraction and cognitive load and demand with multimodal interaction

* (Pro-active) in-car personal assistant systems

* Driver assistance systems

* Information access (search, browsing, etc.) in the car

* Interfaces for navigation

* Text input and output while driving

* Biometrics and physiological sensors as a user interface component

* Multimodal affective intelligent interfaces

* Multimodal automotive user-interface frameworks and toolkits

* Naturalistic/field studies of multimodal automotive user interfaces

* Multimodal automotive user-interface standards

* Detecting and estimating user intentions employing multiple modalities

 

Guest Editors

=============

Dirk Schnelle-Walka, Harman International, Connected Car Division, Germany

Phil Cohen, Voicebox, USA

Bastian Pfleging, Ludwig-Maximilians-Universität München, Germany

 

Submission Instructions

=======================

 

1-page abstract submission: Feb 5, 2018

Invitation for full submission: March 15, 2018

Full Submission: April 28, 2018

Notification about acceptance: June 15, 2018

Final article submission: July 15, 2018

Tentative Publication: ~ Sept 2018

 

Companion website: https://sites.google.com/view/multimodalautomotive/

 

Authors are requested to follow the instructions for manuscript submission to the Journal on Multimodal User Interfaces (http://www.springer.com/computer/hci/journal/12193) and to submit manuscripts at the following link: https://easychair.org/conferences/?conf=mmautomotive2018.


7-5 Special issue of JSTSP on Far-Field Speech Processing in the Era of Deep Learning: Speech Enhancement, Separation and Recognition

Special Issue on

Far-Field Speech Processing in the Era of Deep Learning
Speech Enhancement, Separation and Recognition

Far-field speech processing has become an active field of research due to recent scientific advances and its widespread use in commercial products. This field deals with speech enhancement and recognition using one or more microphones placed at a distance from one or more speakers. Although the topic has been studied for a long time, recent successful applications (starting with Amazon Echo) and challenge activities (CHiME and REVERB) have greatly accelerated progress in this field. Concurrently, deep learning has created a new paradigm that has led to major breakthroughs both in front-end signal enhancement, extraction, and separation, and in back-end speech recognition. Furthermore, deep learning provides a means of jointly optimizing all components of far-field speech processing in an end-to-end fashion. This special issue is a forum to gather the latest findings in this very active field of research, which is of high relevance for the audio and acoustics, speech and language, and machine learning for signal processing communities. The issue is an official post-activity of the ICASSP 2018 special session 'Multi-Microphone Speech Recognition' and the 5th CHiME Speech Separation and Recognition Challenge (CHiME-5).

Topics of interest in this special issue include (but are not limited to): 

  • Multi-/single-channel speech enhancement (e.g., dereverberation, noise reduction, separation)
  • Multi-/single-channel noise robust speech recognition
  • Far-field speech processing systems
  • End-to-end integration of speech enhancement, separation, and/or recognition

Prospective authors should follow the instructions given on the IEEE JSTSP webpage and submit their manuscripts through the web submission system.

Guest Editors

  • Shinji Watanabe, Johns Hopkins University, USA
  • Shoko Araki, NTT, Japan
  • Michiel Bacchiani, Google, USA
  • Reinhold Haeb-Umbach, Paderborn University, Germany
  • Michael L. Seltzer, Facebook, USA

Important Dates

  • Submission deadline November 10, 2018
  • 1st review completed: January 15, 2019
  • Revised manuscript due: March 15, 2019
  • 2nd review completed: May 15, 2019
  • Final manuscript due: June 15, 2019
  • Publication: August 2019



 
 


