ISCApad #241 | Tuesday, July 10, 2018 | by Chris Wellekens
7-1 | La langue des signes, c'est comme ça. Revue TIPA n°34, 2018

LA LANGUE DES SIGNES, C'EST COMME ÇA ('Sign language is like that')
Sign language: state of the art, description, formalization, uses

Guest editor
"La langue des signes, c'est comme ça" echoes the title of Yves Delaporte's book Les sourds, c'est comme ça (2002), which describes the world of the deaf, French Sign Language, and its specific features. One of the particularities of French Sign Language is the specific sign meaning COMME ÇA ('like that')[1], a frequent expression among deaf signers that conveys a certain respectful, non-judgmental distance towards what surrounds us. It is with this same outlook, close to plain and precise scientific probity, that we will attempt to approach signed languages.
Even though we are seeing advances in the linguistics of signed languages in general and of French Sign Language in particular, notably since the work of Christian Cuxac (1983), Harlan Lane (1991) and Susan D. Fischer (2008), sign language linguistics remains a little-developed field. Moreover, French Sign Language is an endangered language, threatened with extinction (Moseley, 2010; Unesco, 2011). But what is this language? How should it be defined? What are its 'mechanisms'? What is its structure? How should it be 'considered', from what angle and with what approaches? This silent language undermines a number of linguistic postulates, such as the universality of the phoneme, and raises many questions for which there are as yet no satisfactory answers. In what ways is it similar to and different from spoken languages? Does it belong only to deaf speakers? Should it be studied, shared, preserved and documented like any language belonging to the intangible heritage of humanity (Unesco, 2003)? How should it be taught, and with what means? What does history tell us on this subject? What future is there for signed languages? What do those primarily concerned say? A wealth of open and very contemporary questions...
Issue 34 of the journal Travaux Interdisciplinaires sur la Parole et le Langage (TIPA) sets out to take stock of the state of research and of the various studies on this singular language, while avoiding 'locking' it into a single discipline. We are looking for previously unpublished articles on sign languages in general and on French Sign Language in particular. They may offer descriptions, formalizations or overviews of the uses of signed languages. Articles may also address comparative approaches across different sign languages, reflections on variants and variation, sociolinguistic, semantic and structural considerations, or an analysis of the etymology of signs. In addition, space will be reserved for possible testimonies from deaf signers.
Articles submitted to TIPA are read and evaluated by the journal's review board. They may be written in French or in English and may include images, photos and videos (see 'consignes aux auteurs' at https://tipa.revues.org/222). A length of 10 to 20 pages is expected for each article, i.e. roughly 35,000 to 80,000 characters or 6,000 to 12,000 words; the recommended average length of a contribution is about 15 pages. Authors are asked to provide an abstract in the language of the article (French or English; 120 to 200 words), a long abstract of about two pages in the other language (French if the article is in English, and vice versa), and 5 keywords in both languages (French and English). Articles must be submitted in .doc (Word) format and sent to the TIPA journal electronically at the following addresses: tipa@lpl-aix.fr and melanie.hamm@lpl-aix.fr.
References:
COMPANYS, Monica (2007). Prêt à signer. Guide de conversation en LSF. Angers: Éditions Monica Companys.
CUXAC, Christian (1983). Le langage des sourds. Paris: Payot.
DELAPORTE, Yves (2002). Les sourds, c'est comme ça. Paris: Maison des sciences de l'homme.
FISCHER, Susan D. (2008). Sign Languages East and West. In: Piet Van Sterkenburg (ed.), Unity and Diversity of Languages. Philadelphia/Amsterdam: John Benjamins Publishing Company.
LANE, Harlan (1991). Quand l'esprit entend. Histoire des sourds-muets. Translated from the American by Jacqueline Henry. Paris: Odile Jacob.
MOSELEY, Christopher (2010). Atlas des langues en danger dans le monde. Paris: Unesco.
UNESCO (2003). Convention de 2003 pour la sauvegarde du patrimoine culturel immatériel: http://www.unesco.org/culture/ich/doc/src/18440-FR.pdf.
UNESCO (2011). Nouvelles initiatives de l'UNESCO en matière de diversité linguistique: http://fr.unesco.org/news/nouvelles-initiatives-unesco-matiere-diversite-linguistique.
Schedule
April 2017: call for contributions
September 2017: submission of the article (version 1)
October-November 2017: feedback from the review board; acceptance, requested revisions (to version 1), or rejection
End of January 2018: submission of the revised version (version 2)
February 2018: feedback from the review board (on version 2)
March/June 2018: submission of the final version
May/June 2018: publication
Instructions for authors
Please send 3 files electronically to tipa@lpl-aix.fr and melanie.hamm@lpl-aix.fr:
- two anonymous files, one in .doc format and the second in .pdf,
For further details, authors can follow this link: http://tipa.revues.org/222
[1] See, for example, image 421, page 334, in Companys (2007), or the photo above.
7-2 | Speech and Language Processing for Behavioral and Mental Health Research and Applications, Computer Speech and Language (CSL)

Call for Papers: Special Issue of Computer Speech and Language on Speech and Language Processing for Behavioral and Mental Health Research and Applications

The promise of speech and language processing for behavioral and mental health research and clinical applications is profound. Advances in all aspects of speech and language processing and in their integration, ranging from speech activity detection, speaker diarization, and speech recognition to spoken language understanding and multimodal paralinguistics, offer novel tools both for scientific discovery and for creating innovative approaches to clinical screening, diagnostics, and intervention support. Given this potential for widespread impact, research sites across all continents are actively engaged in this societally important research area, tackling a rich set of challenges, including the inherently multilingual and multicultural underpinnings of behavioral manifestations. The objective of this Special Issue on Speech and Language Processing for Behavioral and Mental Health Applications is to bring together and share these advances in order to shape the future of the field. It will focus on technical issues and applications of speech and language processing for behavioral and mental health. Original, previously unpublished submissions are encouraged within (but not limited to) the following scope:
Important Dates
Guest Editors
Submission Procedure
Authors should follow the Elsevier Computer Speech and Language manuscript format described at the journal site https://www.elsevier.com/journals/computer-speech-and-language/0885-2308/guide-for-authors#20000. Prospective authors should submit an electronic copy of their complete manuscript through the journal's Manuscript Tracking System at http://www.evise.com/evise/jrnl/CSL. When submitting their papers, authors must select 'VSI:SLP-Behavior-mHealth' as the article type.
7-3 | Special issue on Biosignal-Based Spoken Communication in IEEE/ACM Transactions on Audio, Speech, and Language Processing

As guest editors of the special issue on Biosignal-Based Spoken Communication in IEEE/ACM Transactions on Audio, Speech, and Language Processing, we are happy to announce that 13 papers, along with a survey article, have just been published on IEEE Xplore: http://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=6570655 (Issue 12, Dec. 2017).
Best regards
Tanja Schultz, Cognitive Systems Lab, Faculty of Computer Science and Mathematics, University of Bremen, Bremen, Germany
Thomas Hueber, GIPSA-lab, CNRS/Grenoble Alpes University, Grenoble, France
Dean J. Krusienski, ASPEN Lab, Biomedical Engineering Institute, Old Dominion University, Norfolk, VA, USA
Jonathan S. Brumberg, Speech and Applied Neuroscience Lab, Speech-Language-Hearing Department, University of Kansas, Lawrence, KS, USA
7-4 | Special issue on Multimodal Interaction in Automotive Applications, Springer Journal on Multimodal User Interfaces

Multimodal Interaction in Automotive Applications
=================================================
With the smartphone becoming ubiquitous, pervasive distributed computing is becoming a reality, and aspects of the Internet of Things increasingly find their way into our daily lives. Users interact multimodally with their smartphones, and expectations of natural interaction have risen dramatically in recent years. Users have even started to project these expectations onto every kind of interface they encounter in daily life. Car manufacturers do not yet fully meet these expectations, since automotive development cycles remain much longer than those of the software industry. The clear trend, however, is that manufacturers add technology to cars to deliver on their vision and promise of safer driving. Multiple modalities are already available in today's dashboards, including haptic controllers, touch screens, 3D gestures, voice, secondary displays, and gaze. In fact, car manufacturers are aiming for a personal assistant with a deep understanding of the car and the ability to meet driving-related demands and non-driving-related needs. Such an assistant can, for instance, naturally answer any question about the car and help schedule service when needed. It can find the preferred gas station along the route, or better yet, plan a stop and ensure the driver arrives in time for a meeting. It understands that a perfect business meal involves more than finding a sponsored restaurant: it includes unbiased reviews, availability, budget and trouble-free parking, and it notifies all invitees of the meeting time and location. Moreover, multimodality can serve as a source for fatigue detection. The main goal of multimodal interaction and driver assistance systems is to ensure that the driver can focus on the primary task of driving safely.
This is why the biggest innovations in today's cars have happened in the way we interact with integrated devices such as the infotainment system. For instance, voice-based interaction has been shown to be less distracting than interaction with a visual-haptic interface, but it is only one piece of how we interact multimodally in today's cars as interaction shifts away from the GUI as its only channel. This shift also demands additional effort to establish a mental model for the user: with a plethora of available modalities each requiring its own mental map, learnability has decreased considerably. Here too, multimodality may help decrease distraction. This special issue will present the challenges and opportunities of multimodal interaction for reducing cognitive load and increasing learnability, as well as current research with the potential to be employed in tomorrow's cars. We especially invite researchers, scientists, and developers to submit contributions that are original and unpublished and have not been submitted to any other journal, magazine, or conference. We expect at least 30% novel content. We are soliciting original research related to multimodal smart and interactive media technologies in areas including, but not limited to, the following:

* In-vehicle multimodal interaction concepts
* Multimodal Head-Up Displays (HUDs) and Augmented Reality (AR) concepts
* Reducing driver distraction, cognitive load, and demand with multimodal interaction
* (Pro-active) in-car personal assistant systems
* Driver assistance systems
* Information access (search, browsing, etc.) in the car
* Interfaces for navigation
* Text input and output while driving
* Biometrics and physiological sensors as a user-interface component
* Multimodal affective intelligent interfaces
* Multimodal automotive user-interface frameworks and toolkits
* Naturalistic/field studies of multimodal automotive user interfaces
* Multimodal automotive user-interface standards
* Detecting and estimating user intentions employing multiple modalities
Guest Editors
=============
Dirk Schnelle-Walka, Harman International, Connected Car Division, Germany
Phil Cohen, Voicebox, USA
Bastian Pfleging, Ludwig-Maximilians-Universität München, Germany
Important Dates
===============
1-page abstract submission: Feb 5, 2018
Invitation for full submission: March 15, 2018
Full submission: April 28, 2018
Notification of acceptance: June 15, 2018
Final article submission: July 15, 2018
Tentative publication: ~ Sept 2018

Submission Instructions
=======================
Companion website: https://sites.google.com/view/multimodalautomotive/
Authors are requested to follow instructions for manuscript submission to the Journal of Multimodal User Interfaces (http://www.springer.com/computer/hci/journal/12193) and to submit manuscripts at the following link: https://easychair.org/conferences/?conf=mmautomotive2018.
7-5 | Special issue of JSTSP on Far-Field Speech Processing in the Era of Deep Learning: Speech Enhancement, Separation and Recognition
7-6 | Special issue of IEEE JSTSP on localization and tracking of acoustic sources
Summary: Acoustic source localization is a well-studied topic in signal processing, but most traditional methods incorporate simplifying assumptions such as a point source, free-field propagation of the sound wave, static acoustic sources, time-invariant sensor constellations, and simple noise fields. These assumptions may be seriously violated in a range of emerging applications, such as audio recording with mobile devices (e.g. cell phones, action cameras, and robots), video conferencing on the go, and recording for 3D reproduction and virtual reality. In these applications the environment is extremely challenging, with spatially distributed sources, reverberation, complex noise fields, multiple concurrent speakers, interference, and time-varying source and sensor positions.
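To make these simplifying assumptions concrete, here is a minimal sketch (in Python with NumPy; the signals and parameters are illustrative and not part of the call) of the classic GCC-PHAT time-delay estimator. Turning its delay estimate into a source bearing presumes a single static point source in a free field, precisely the assumptions that the applications above violate.

    import numpy as np

    def gcc_phat(x, y, fs):
        """Estimate the time difference of arrival (TDOA) between two
        microphone channels with the classic GCC-PHAT cross-correlation."""
        n = len(x) + len(y)                  # zero-pad to avoid circular wrap-around
        X, Y = np.fft.rfft(x, n=n), np.fft.rfft(y, n=n)
        R = X * np.conj(Y)
        R /= np.abs(R) + 1e-12               # PHAT weighting: keep phase, discard magnitude
        cc = np.fft.irfft(R, n=n)
        max_shift = n // 2
        cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
        return (np.argmax(np.abs(cc)) - max_shift) / fs   # positive when x lags y

    # Illustrative check: a 440 Hz burst reaches microphone x 2 ms after microphone y.
    fs = 16000
    s = np.sin(2 * np.pi * 440 * np.arange(0, 0.1, 1 / fs))
    d = int(0.002 * fs)
    x = np.concatenate((np.zeros(d), s))     # delayed channel
    y = np.concatenate((s, np.zeros(d)))     # reference channel
    print(gcc_phat(x, y, fs))                # ~ +0.002 s
    # A far-field, free-field, single-source model maps this delay to a bearing via
    # theta = arccos(c * tau / mic_spacing); reverberation, moving sources, and
    # multiple concurrent speakers (the focus of this call) break that mapping.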
The proposed special issue aims to present recent advances in the development of signal processing methods for the localization and tracking of acoustic sources, together with the associated theory and applications. To address the challenges raised by real-life environments, novel methods that use modern array processing, speech processing, and data inference tools become a necessity.
As these challenges involve both audio processing and sensor arrays, this special issue is timely and relevant to researchers from both the acoustic signal processing and the array processing communities, and the guest editors accordingly come from both: they comprise current and past chairs of the respective technical committees (Audio and Acoustic Signal Processing (AASP) and Sensor Array and Multichannel (SAM) TCs). The special issue follows successful special sessions at major conferences: 'Learning-based Sound Source Localization and Spatial Information Retrieval' (ICASSP 2016), 'Speaker Localization in Dynamic Real-Life Environments' (ICASSP 2017), 'Acoustic Scene Analysis and Signal Enhancement Using Microphone Arrays' (EUSIPCO 2017), and 'Acoustical Signal Processing for Hearables' (EUSIPCO 2017).
Prospective authors should follow the instructions given on the IEEE JSTSP webpage, and submit their manuscript through the web submission system.