ISCApad #229 |
Monday, July 10, 2017 by Chris Wellekens |
7-1 | CfP Neurocomputing: Special Issue on Machine Learning for Non-Gaussian Data Processing
With the widespread explosion of sensing and computing, an increasing number of industrial applications and an ever-growing amount of academic research generate massive multi-modal data from multiple sources. The Gaussian distribution is ubiquitously used in statistics, signal processing, and pattern recognition, but not all the data we process are Gaussian distributed. Recent studies have found that explicitly exploiting the non-Gaussian characteristics of data (e.g., data with bounded support, data with semi-bounded support, and data with an L1/L2-norm constraint) can significantly improve the performance of practical systems. Hence, it is of particular importance and interest to study non-Gaussian data thoroughly, together with the corresponding non-Gaussian statistical models (e.g., the beta distribution for bounded support data, the gamma distribution for semi-bounded support data, and the Dirichlet/vMF distributions for data with an L1/L2-norm constraint).
To analyze and understand such non-Gaussian data, the development of related learning theories, statistical models, and efficient algorithms is crucial. The scope of this special issue is to provide theoretical foundations as well as ground-breaking models and algorithms to address this challenge.
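As an illustration of why the choice of distribution matters, the following sketch (a hypothetical example, not taken from the call; it assumes Python with NumPy and SciPy) fits both a Gaussian and a beta distribution to synthetic bounded-support data and compares their average log-likelihoods:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Bounded-support data in (0, 1), e.g. normalized spectral features.
data = rng.beta(a=2.0, b=5.0, size=5000)

# A Gaussian fit ignores the support constraint...
mu, sigma = stats.norm.fit(data)

# ...while a beta fit respects it (support fixed to [0, 1] via floc/fscale).
a, b, loc, scale = stats.beta.fit(data, floc=0, fscale=1)

# Compare goodness of fit via average log-likelihood on the same data.
ll_norm = stats.norm.logpdf(data, mu, sigma).mean()
ll_beta = stats.beta.logpdf(data, a, b, loc, scale).mean()
print(f"Gaussian avg log-lik: {ll_norm:.3f}")
print(f"Beta     avg log-lik: {ll_beta:.3f}")
```

On data generated this way, the bounded-support model attains the higher likelihood, which is the kind of gain from matching the model to the data's support that the call refers to.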
We invite authors to submit articles to address the aspects ranging from case studies of particular problems with non-Gaussian distributed data to novel learning theories and approaches, including (but not limited to):
Timeline
SUBMISSION DEADLINE: Oct 15, 2016
ACCEPTANCE DEADLINE: June 15, 2017
EXPECTED PUBLICATION DATE: Sep 15, 2017
Guest Editors
Associate Professor
Zhanyu Ma
Beijing University of Posts and Telecommunications (BUPT)
Professor
Jen-Tzung Chien
National Chiao Tung University (NCTU)
Associate Professor
Zheng-Hua Tan
Aalborg University (AAU)
Senior Lecturer
Yi-Zhe Song
Queen Mary University of London (QMUL)
Postdoctoral Researcher
Jalil Taghia
Stanford University
Associate Professor
Ming Xiao
KTH Royal Institute of Technology
7-2 | CFP: Machine Translation Journal, Special Issue on Spoken Language Translation (updated)
** Special Issue on Spoken Language Translation **
http://www.springer.com/computer/artificial/journal/10590
Guest editors:
Alex Waibel (Carnegie Mellon University / Karlsruhe Institute of Technology)
Sebastian Stüker (Karlsruhe Institute of Technology)
Marcello Federico (Fondazione Bruno Kessler)
Satoshi Nakamura (Nara Institute of Science and Technology)
Hermann Ney (RWTH Aachen University)
Dekai Wu (The Hong Kong University of Science and Technology)
---------------------------------------------------------------------------
Spoken language translation (SLT) is the science of automatic translation of spoken language. It may be tempting to view spoken language as nothing more than text with an added spoken verbalization preceding it. Translation of speech could then be achieved by simply applying automatic speech recognition (ASR, or 'speech-to-text') before applying traditional machine translation (MT). Unfortunately, such an overly simplistic approach does not address the complexities of the problem. Not only do speech recognition errors compound with errors in machine translation, but spoken language also differs considerably from text in form, structure, and style, rendering a combination of two text-based components ineffective. Moreover, automatic spoken language translation systems serve different practical goals than voice interfaces or text translators, so integrated systems and their interfaces have to be designed carefully and appropriately (mobile, low-latency, audio-visual, online/offline, interactive, etc.) around their intended deployment.
7-3 | CfP IEEE JSTSP Special Issue on Spoofing and Countermeasures for Automatic Speaker Verification (extended deadline)
7-4 | Special Issue on Biosignal-based Spoken Communication in the IEEE/ACM Transactions on Audio, Speech, and Language Processing Call for Papers
Special Issue on Biosignal-based Spoken Communication
in the IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP)
Speech is a complex process emitting a wide range of biosignals, including, but not limited to, acoustics. These biosignals (stemming from the articulators, the articulator muscle activities, the neural pathways, or the brain itself) can be used to circumvent limitations of conventional speech processing in particular, and to gain insights into the process of speech production in general. Research on biosignal-based speech capturing and processing is a wide and very active field at the intersection of various disciplines, ranging from engineering, electronics, and machine learning to medicine, neuroscience, physiology, and psychology. Consequently, a variety of methods and approaches are being thoroughly investigated, aiming towards the common goal of creating biosignal-based speech processing devices and applications for everyday use, as well as for spoken communication research purposes. We aim to bring together studies covering these various modalities, research approaches, and objectives in a Special Issue of the IEEE/ACM Transactions on Audio, Speech, and Language Processing entitled Biosignal-based Spoken Communication.
For this purpose we will invite papers describing previously unpublished work in the following broad areas:
Submission Deadline: November 2016
Notification of Acceptance: January 2017
Final Manuscript Due: April 2017
Tentative Publication Date: First half of 2017
Editors:
Tanja Schultz (Universität Bremen, Germany) tanja.schultz@uni-bremen.de (Lead Guest Editor)
Thomas Hueber (CNRS/GIPSA-lab, Grenoble, France) thomas.hueber@gipsa-lab.fr
Dean J. Krusienski (ASPEN Lab, Old Dominion University) dkrusien@odu.edu
Jonathan Brumberg (Speech-Language-Hearing Department, University of Kansas) brumberg@ku.edu
7-5 | CSL special issue 'Recent advances in speaker and language recognition and characterization'
Computer Speech and Language
Call for Papers: Special Issue on Recent Advances in Speaker and Language Recognition and Characterization
The goal of this special issue is to highlight the current state of research efforts on speaker and language recognition and characterization. New ideas about features, models, tasks, datasets, and benchmarks are emerging, making this a particularly exciting time. In the last decade, speaker recognition (SR) has gained importance in the field of speech science and technology, with new applications beyond forensics, such as large-scale filtering of telephone calls, automated access through voice profiles, speaker indexing and diarization, etc. Current challenges involve the use of increasingly short signals for verification, the need for algorithms that are robust to all kinds of extrinsic variability (such as noise and channel conditions) while allowing for a certain amount of intrinsic variability (due to health issues, stress, etc.), and the development of countermeasures against spoofing and tampering attacks. Language recognition (LR), in turn, has witnessed remarkable interest from the community as an auxiliary technology for speech recognition, dialogue systems, and multimedia search engines, but especially for large-scale filtering of telephone calls. An active area of research specific to LR is dialect and accent identification. Other issues that must be dealt with in LR tasks (such as short signals, channel and environment variability, etc.) are basically the same as for SR. The features, modeling approaches, and algorithms used in SR and LR are closely related, though not equally effective, since the two tasks differ in several ways. In the last couple of years, following the success of deep learning in image and speech recognition, the use of deep neural networks both as feature extractors and as classifiers/regressors is opening exciting new research horizons.
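As a toy illustration of the embedding-based verification idea mentioned above (the dimensionalities, threshold, and random vectors below are invented for the sketch; real systems score calibrated DNN-extracted speaker embeddings such as i-vectors or x-vectors), several enrollment embeddings can be pooled and compared to a test embedding by cosine similarity:

```python
import numpy as np

def cosine_score(enroll, test):
    """Cosine similarity between two fixed-length speaker embeddings."""
    return float(np.dot(enroll, test) /
                 (np.linalg.norm(enroll) * np.linalg.norm(test) + 1e-12))

def verify(enroll_embs, test_emb, threshold=0.5):
    """Accept the claimed identity if the pooled enrollment embedding is
    close enough, in cosine similarity, to the test embedding."""
    model = enroll_embs.mean(axis=0)  # average several enrollment utterances
    return cosine_score(model, test_emb) >= threshold

# Toy 64-dimensional embeddings standing in for DNN-extracted speaker vectors.
rng = np.random.default_rng(1)
speaker_a = rng.normal(loc=2.0, size=(3, 64))   # three enrollment utterances
same_spk  = rng.normal(loc=2.0, size=64)        # genuine test utterance
impostor  = rng.normal(loc=-2.0, size=64)       # impostor test utterance

print(verify(speaker_a, same_spk))   # accepted: same underlying 'voice'
print(verify(speaker_a, impostor))   # rejected
```

In practice the threshold is not fixed by hand but calibrated on development data, which is exactly the 'system calibration and fusion' topic listed in this call.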
Until recently, speaker and language recognition technologies were mostly driven by NIST evaluation campaigns: the Speaker Recognition Evaluations (SRE) and Language Recognition Evaluations (LRE), which focused on large-scale verification of telephone speech. In recent years, other initiatives (such as the 2008/2010/2012 Albayzin LRE, the 2013 SRE in Mobile Environment, the RSR2015 database, and the 2015 Multi-Genre Broadcast Challenge) have widened the range of applications and the research focus. Authors are encouraged to use these benchmarks to test their ideas. This special issue aims to cover state-of-the-art work; in addition, to provide readers with background on the topic, we will invite one survey paper, which will undergo peer review.
Topics of interest include, but are not limited to:
o Speaker and language recognition, verification, identification
o Speaker and language characterization
o Features for speaker and language recognition
o Speaker and language clustering
o Multispeaker segmentation, detection, and diarization
o Language, dialect, and accent recognition
o Robustness in channels and environment
o System calibration and fusion
o Speaker recognition with speech recognition
o Multimodal speaker recognition
o Speaker recognition in multimedia content
o Machine learning for speaker and language recognition
o Confidence estimation for speaker and language recognition
o Corpora and tools for system development and evaluation
o Low-resource (lightly supervised) speaker and language recognition
o Speaker synthesis and transformation
o Human and human-assisted recognition of speaker and language
o Spoofing and tampering attacks: analysis and countermeasures
o Forensic and investigative speaker recognition
o Systems and applications
Note that all papers will go through the same rigorous review process as regular papers, with a minimum of two reviewers per paper.
Guest Editors
Eduardo Lleida, University of Zaragoza, Spain
Luis J. Rodríguez-Fuentes, University of the Basque Country, Spain
Important dates
Submission deadline: September 16, 2016
Notification of final decision: March 31, 2017
Scheduled publication: April 2017
More information at:
7-6 | IEEE CIS Newsletter on Cognitive and Developmental Systems
Dear colleagues,
I am happy to announce the release of the latest issue of the IEEE CIS Newsletter on Cognitive and Developmental Systems (open access). This is a biannual newsletter addressing the sciences of developmental and cognitive processes in natural and artificial organisms, from humans to robots, at the crossroads of cognitive science, developmental psychology, machine intelligence, and neuroscience. It is available at: http://goo.gl/KBA9o6
Featuring dialog:
== 'Moving Beyond Nature-Nurture: a Problem of Science or Communication?' ==
Dialog initiated by John Spencer, Mark Blumberg and David Shenk, with responses from: Bob McMurray, Scott Robinson, Patrick Bateson, Eva Jablonka, Stephen Laurence and Eric Margolis, Bart de Boer, Gert Westermann, Peter Marshall, Vladimir Sloutsky, Dan Dediu, Jedebiah Allen and Mark Bickhard, Rick Dale and Anne Warlaumont, and Michael Spivey.
== Topic: In spite of numerous scientific discoveries supporting the view of development as a complex, multi-factored process, discussions of development in several scientific fields and in the general public are still strongly organized around the nature/nurture distinction. Is this because there is not yet sufficient scientific evidence, or because the simplicity of the nature/nurture framework is much easier to communicate (or just better communicated by its supporters)? The responses show a very stimulating diversity of opinions, ranging from defending the utility of keeping the nature/nurture framing to arguing that biology has shown its fundamental weaknesses for several decades.
Call for new dialog:
== 'What is Computational Reproducibility?' ==
Dialog initiated by Olivia Guest and Nicolas Rougier.
This new dialog initiation explores questions and challenges related to openly sharing computational models, especially when they aim to advance our understanding of natural phenomena in the cognitive, biological, or physical sciences: What is computational reproducibility? How should codebases be distributed and included as a central element of mainstream publication venues? How can we ensure computational models are well specified, reusable, and understandable? Those of you interested in reacting to this dialog initiation are welcome to submit a response by November 10th, 2016. The length of each response must be between 600 and 800 words including references (contact pierre-yves.oudeyer@inria.fr).
Let me remind you that all issues of the newsletter are open access and available at: http://icdl-epirob.org/cdsnl
I wish you a stimulating reading!
Best regards,
Pierre-Yves Oudeyer
Editor of the IEEE CIS Newsletter on Cognitive and Developmental Systems
Chair of the IEEE CIS AMD Technical Committee on Cognitive and Developmental Systems
Research director, Inria; head of the Flowers project-team, Inria and Ensta ParisTech, France
7-7 | CfP *MULTIMEDIA TOOLS AND APPLICATIONS* Special Issue on 'Content Based Multimedia Indexing'
7-8 | Special issue CSL on Recent advances in speaker and language recognition and characterization
Computer Speech and Language
Call for Papers: Special Issue on Recent Advances in Speaker and Language Recognition and Characterization
-----------------------------------------------
SUBMISSION DEADLINE EXTENDED TO OCTOBER 9, 2016
-----------------------------------------------
The scope, topics of interest, and review process of this special issue are identical to those of the call in item 7-5 above.
Eduardo Lleida, University of Zaragoza, Spain
Luis J. Rodríguez-Fuentes, University of the Basque Country, Spain
Important dates
Submission deadline (EXTENDED): October 9, 2016
Notification of final decision: March 31, 2017
Scheduled publication: April 2017
More information at:
7-9 | Special issue of Advances in Multimedia on EMERGING CHALLENGES AND SOLUTIONS FOR MULTIMEDIA SECURITY
SPECIAL ISSUE -- CALL FOR PAPERS
Guest editors: Hui Tian, National Huaqiao University, Xiamen, China; Honggang Wang,
7-10 | CfP Speech Communication Virtual Special Issue: Multi-laboratory evaluation of forensic voice comparison systems under conditions reflecting those of a real forensic case (forensic_eval_01) CALL FOR PAPERS:
7-11 | Journal of Ambient Intelligence and Smart Environments (JAISE) - Thematic Issue on: Human-centred AmI: Cognitive Approaches, Reasoning and Learning (HCogRL) ===========================
Journal of Ambient Intelligence and Smart Environments (JAISE) - Thematic Issue on: Human-centred AmI: Cognitive Approaches, Reasoning and Learning (HCogRL)
7-12 | CfP Special Issue of Hindawi's Advances in Multimedia: EMERGING CHALLENGES AND SOLUTIONS FOR MULTIMEDIA SECURITY
7-13 | CfP Special Issue of Speech Communication on *REALISM IN ROBUST SPEECH AND LANGUAGE PROCESSING* Speech Communication
Special Issue on *REALISM IN ROBUST SPEECH AND LANGUAGE PROCESSING*
*Deadline: May 31st, 2017*
How can you be sure that your research has actual impact in real-world applications? This is one of the major challenges currently faced in many areas of speech processing, as laboratory solutions migrate to real-world applications; this is what we address by the term 'realism'. Real application scenarios involve acoustic, speaker, and language variability that challenges the robustness of systems. As early evaluations in the targeted practical scenarios are hardly feasible, many developments are actually based on simulated data, which raises concerns about the viability of these solutions in real-world environments.
Information about which conditions are required for a dataset to be realistic, and experimental evidence about which conditions actually matter for the evaluation of a given task, is sparse in the literature. Motivated by the growing importance of robustness in commercial speech and language processing applications, this Special Issue aims to provide a venue for research advancements, recommendations for best practices, and tutorial-like papers about realism in robust speech and language processing.
Prospective authors are invited to submit original papers in areas related to the problem of realism in robust speech and language processing, including: speech enhancement, automatic speech, speaker and language recognition, language modeling, speech synthesis and perception, affective speech processing, paralinguistics, etc. Contributions may include, but are not limited to:
- Position papers from researchers or practitioners for best practice recommendations and advice regarding different kinds of real and simulated setups for a given task
- Objective experimental characterization of real scenarios in terms of acoustic conditions (reverberation, noise, sensor variability, source/sensor movement, environment change, etc)
- Objective experimental characterization of real scenarios in terms of speech characteristics (spontaneous speech, number of speakers, vocal effort, effect of age, non-neutral speech, etc)
- Objective experimental characterization of real scenarios in terms of language variability
- Real data collection protocols
- Data simulation algorithms
- New datasets suitable for research on robust speech processing
- Performance comparison on real vs. simulated datasets for a given task and a range of methods
- Analysis of advantages vs. weaknesses of simulated and/or real data, and techniques for addressing these weaknesses
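As a minimal sketch of the 'data simulation algorithms' topic above (assuming NumPy; the exponential-decay impulse response and the toy signals are invented stand-ins for measured room impulse responses and real recordings), simulated data are commonly produced by convolving dry speech with a room impulse response and adding noise scaled to a target signal-to-noise ratio:

```python
import numpy as np

def add_noise_at_snr(speech, noise, snr_db):
    """Scale `noise` so the mixture has the requested SNR, then add it."""
    noise = noise[:len(speech)]
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

def simulate_reverb(speech, rir):
    """Convolve dry speech with a room impulse response (truncated to input length)."""
    return np.convolve(speech, rir)[:len(speech)]

# Toy signals standing in for real recordings (16 kHz sampling assumed).
rng = np.random.default_rng(0)
dry = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)     # 1 s of a 440 Hz tone
noise = rng.normal(size=16000)                               # white noise
rir = np.exp(-np.arange(800) / 100) * rng.normal(size=800)   # crude decaying RIR

wet = simulate_reverb(dry, rir)
noisy = add_noise_at_snr(wet, noise, snr_db=10.0)
```

The gap between such simulations and real recordings (e.g. moving sources, Lombard speech, device nonlinearities) is precisely what the call asks contributors to characterize.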
Papers written by practitioners and industry researchers are especially welcome. If there is any doubt about the suitability of your paper for this special issue, please contact us before submission.
*Submission instructions: *
Manuscript submissions shall be made through EVISE at https://www.evise.com/profile/#/SPECOM/login
Select article type 'SI:Realism Speech Processing'
*Important dates: *
March 1, 2017: Submission portal open
May 31, 2017: Paper submission
September 30, 2017: First review
November 30, 2017: Revised submission
April 30, 2018: Completion of revision process
*Guest Editors: *
Dayana Ribas, CENATAV, Cuba
Emmanuel Vincent, Inria, France
John Hansen, UTDallas, USA
7-14 | CfP IEEE Journal of Selected Topics in Signal Processing: Special Issue on End-to-End Speech and Language Processing
CALL FOR PAPERS
IEEE Journal of Selected Topics in Signal Processing
Special Issue on End-to-End Speech and Language Processing
End-to-end (E2E) systems have achieved competitive results compared to conventional hybrid hidden Markov model/deep neural network (HMM/DNN) based automatic speech recognition (ASR) systems. Such E2E systems are attractive because they do not require initial alignments between input acoustic features and output graphemes or words. Very deep convolutional networks and recurrent neural networks have also been very successful in ASR systems due to their added expressive power and better generalization. ASR is often not the end goal of real-world speech information processing systems. Instead, an important end goal is information retrieval, in particular keyword search (KWS), which involves retrieving speech documents containing a user-specified query from a large database. Conventional keyword search uses an ASR system as a front-end that converts the speech database into a finite-state transducer (FST) index containing a large number of likely word or sub-word sequences for each speech segment, along with associated confidence scores and time stamps. A user-specified text query is then composed with this FST index to find the putative locations of the keyword, along with confidence scores. More recently, inspired by E2E approaches, ASR-free keyword search systems have been proposed, with limited success. Machine learning methods have also been very successful in question answering, parsing, language translation, analytics, and deriving representations of morphological units, words, or sentences.
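The FST-based keyword search pipeline described above can be caricatured with a plain inverted index (a drastic simplification: real systems index lattices of word and sub-word hypotheses with weighted FSTs, and all utterance IDs, words, scores, and time stamps below are invented for illustration):

```python
from collections import defaultdict

# Toy stand-in for ASR front-end output: each utterance contributes word
# hypotheses with confidence scores and time stamps (all values invented).
hypotheses = {
    "utt1": [("hello", 0.95, 0.10), ("world", 0.80, 0.55)],
    "utt2": [("hollow", 0.40, 0.20), ("world", 0.90, 0.70)],
    "utt3": [("hello", 0.60, 1.30)],
}

# Build the inverted index: word -> list of (utterance, confidence, time).
index = defaultdict(list)
for utt, words in hypotheses.items():
    for word, conf, time in words:
        index[word].append((utt, conf, time))

def search(query, min_conf=0.5):
    """Return putative hits for the query, best-scoring first."""
    hits = [h for h in index.get(query, []) if h[1] >= min_conf]
    return sorted(hits, key=lambda h: -h[1])

print(search("hello"))  # [('utt1', 0.95, 0.1), ('utt3', 0.6, 1.3)]
```

Composing a query with a weighted FST index generalizes this lookup to sub-word sequences and out-of-vocabulary terms, which is where the E2E and ASR-free approaches in this call aim to improve.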
Challenges such as the Zero Resource Speech Challenge aim to construct systems that learn an end-to-end spoken dialog (SD) system, in an unknown language, from scratch, using only information available to a language-learning infant (zero linguistic resources). The principal objective of the recently concluded IARPA Babel program was to develop a keyword search system that delivers high accuracy for any new language given very limited transcribed speech, noisy acoustic and channel conditions, and a limited system build time of one to four weeks. This special issue will showcase the power of novel machine learning methods not only for ASR, but for keyword search and for the general processing of speech and language.
Topics of interest in the special issue include (but are not limited to):
• Novel end-to-end speech and language processing
• Query-by-example search
• Deep learning based acoustic and word representations
• Question answering systems
• Multilingual dialogue systems
• Multilingual representation learning
• Low and zero resource speech processing
• Deep learning based ASR-free keyword search
• Deep learning based media retrieval
• Kernel methods applied to speech and language processing
• Acoustic unit discovery
• Computational challenges for deep end-to-end systems
• Adaptation strategies for end-to-end systems
• Noise robustness for low resource speech recognition systems
• Spoken language processing: speech-to-speech translation, speech retrieval, extraction, and summarization
• Machine learning methods applied to morphological, syntactic, and pragmatic analysis
• Computational semantics: document analysis, topic segmentation, categorization, and modeling
• Named entity recognition, tagging, chunking, and parsing
• Sentiment analysis, opinion mining, and social media analytics
• Deep learning in human computer interaction
Dates:
Manuscript submission: April 1, 2017
First review completed: June 1, 2017
Revised manuscript due: July 15, 2017
Second review completed: August 15, 2017
Final manuscript due: September 15, 2017
Publication: December 2017
Guest Editors:
Nancy F. Chen, Institute for Infocomm Research (I2R), A*STAR, Singapore
Mary Harper, Army Research Laboratory, USA
Brian Kingsbury, IBM Watson, IBM T.J. Watson Research Center, USA
Kate Knill, Cambridge University, U.K.
Bhuvana Ramabhadran, IBM Watson, IBM T.J. Watson Research Center, USA
7-15 | Travaux Interdisciplinaires sur la Parole et le Langage, TIPA
The TIPA editorial team is pleased to announce the publication of the journal's latest issue on Revues.org: Travaux Interdisciplinaires sur la Parole et le Langage, TIPA n° 32 | 2016. This issue will be complemented by n° 33 | 2017, which will address the same theme.
7-16 | CfP IEEE Journal of Selected Topics in Signal Processing / Special Issue on End-to-End Speech and Language Processing
Call for Papers
IEEE Journal of Selected Topics in Signal Processing
Revue TIPA n°34, 2018
Travaux interdisciplinaires sur la parole et le langage
http://tipa.revues.org/
LA LANGUE DES SIGNES, C'EST COMME ÇA
Sign language: state of the art, description, formalization, and usage
Guest editor
Mélanie Hamm,
Laboratoire Parole et Langage, Aix-Marseille Université
'La langue des signes, c'est comme ça' ('Sign language, that's how it is') refers to Yves Delaporte's book Les sourds, c'est comme ça (2002). The book describes the world of the deaf, French Sign Language, and its specificities. One particularity of French Sign Language is the specific sign meaning COMME ÇA ('like that')[1], a frequent expression among deaf signers that conveys a certain respectful, non-judgmental distance towards what surrounds us. It is with this same outlook, close to plain and precise scientific probity, that we will attempt to approach signed languages.
Even though there have been advances in the linguistics of signed languages in general and of French Sign Language in particular, notably since the work of Christian Cuxac (1983), Harlan Lane (1991), and Susan D. Fischer (2008), sign language linguistics remains a relatively undeveloped field. Moreover, French Sign Language is an endangered language, threatened with extinction (Moseley, 2010 and Unesco, 2011). But what is this language? How should it be defined? What are its 'mechanisms'? What is its structure? How should it be 'considered', from what angle, with what approaches? This silent language challenges a number of linguistic postulates, such as the universality of the phoneme, and raises many questions for which there are not yet satisfactory answers. In what ways is it similar to and different from oral languages? Does it belong only to deaf speakers? Should it be studied, shared, preserved, and documented like any language belonging to the intangible heritage of humanity (Unesco, 2003)? How should it be taught, and with what means? What does history tell us on this subject? What future is there for signed languages? What do those most concerned say? A set of open and very contemporary questions...
Issue 34 of the journal Travaux Interdisciplinaires sur la Parole et le Langage proposes to take stock of the state of research and of the various works on this singular language, while avoiding 'locking' it into a single discipline. We are looking for original articles on sign languages, and on French Sign Language in particular. They may offer descriptions, formalizations, or overviews of the usage of signed languages. Comparative approaches across different sign languages, reflections on variants and variation, sociolinguistic, semantic, and structural considerations, and analyses of the etymology of signs may also be the subject of articles. In addition, space will be reserved for testimonies from deaf signers.
Articles submitted to TIPA are read and evaluated by the journal's review committee. They may be written in French or English and may include images, photos, and videos (see the author guidelines at https://tipa.revues.org/222). A length of 10 to 20 pages is desired for each article, i.e. roughly 35,000 to 80,000 characters or 6,000 to 12,000 words; the recommended average length is about 15 pages. Authors should provide an abstract in the language of the article (French or English; 120 to 200 words), a long abstract of about two pages in the other language (French if the article is in English and vice versa), and 5 keywords in both languages (French and English). Articles must be in .doc (Word) format and sent electronically to the journal at: tipa@lpl-aix.fr and melanie.hamm@lpl-aix.fr.
References:
COMPANYS, Monica (2007). Prêt à signer. Guide de conversation en LSF. Angers: Éditions Monica Companys.
CUXAC, Christian (1983). Le langage des sourds. Paris: Payot.
DELAPORTE, Yves (2002). Les sourds, c'est comme ça. Paris: Maison des sciences de l'homme.
FISCHER, Susan D. (2008). Sign Languages East and West. In: Piet Van Sterkenburg, Unity and Diversity of Languages. Philadelphia/Amsterdam: John Benjamins Publishing Company.
LANE, Harlan (1991). Quand l'esprit entend. Histoire des sourds-muets. Translated from the American by Jacqueline Henry. Paris: Odile Jacob.
MOSELEY, Christopher (2010). Atlas des langues en danger dans le monde. Paris: Unesco.
UNESCO (2011). Nouvelles initiatives de l'UNESCO en matière de diversité linguistique: http://fr.unesco.org/news/nouvelles-initiatives-unesco-matiere-diversite-linguistique.
UNESCO (2003). Convention de 2003 pour la sauvegarde du patrimoine culturel immatériel: http://www.unesco.org/culture/ich/doc/src/18440-FR.pdf.
Timeline
April 2017: call for papers
September 2017: submission of articles (version 1)
October-November 2017: feedback from the committee; acceptance, requested revisions (of version 1), or rejection
End of January 2018: submission of the revised version (version 2)
February 2018: feedback from the committee (on version 2)
March/June 2018: submission of the final version
May/June 2018: publication
Instructions for authors
Please send 3 files electronically to tipa@lpl-aix.fr and melanie.hamm@lpl-aix.fr:
- a .doc file containing the title, the name(s), and the affiliation(s) of the author(s)
- two anonymous files, one in .doc format and the second in .pdf
For further details, authors may follow this link: http://tipa.revues.org/222
[1] See for example image 421, page 334, in Companys, 2007, or the photo above.
Call for Papers
Special Issue of COMPUTER SPEECH AND LANGUAGE
Speech and Language Processing for Behavioral and Mental Health Research and Applications
The promise of speech and language processing for behavioral and mental health research and clinical applications is profound. Advances in all aspects of speech and language processing and their integration (ranging from speech activity detection, speaker diarization, and speech recognition to various aspects of spoken language understanding and multimodal paralinguistics) offer novel tools both for scientific discovery and for creating innovative ways to support clinical screening, diagnostics, and intervention. Owing to the potential for widespread impact, research sites across all continents are actively engaged in this societally important research area, tackling a rich set of challenges including the inherent multilingual and multicultural underpinnings of behavioral manifestations. The objective of this Special Issue on Speech and Language Processing for Behavioral and Mental Health Applications is to bring together and share these advances in order to shape the future of the field. It will focus on technical issues and applications of speech and language processing for behavioral and mental health. Original, previously unpublished submissions are encouraged within (but not limited to) the following scope:
Important Dates
Guest Editors
Submission Procedure
Authors should follow the Elsevier Computer Speech and Language manuscript format described at the journal site https://www.elsevier.com/journals/computer-speech-and-language/0885-2308/guide-for-authors#20000. Prospective authors should submit an electronic copy of their complete manuscript through the journal's Manuscript Tracking System at http://www.evise.com/evise/jrnl/CSL. When submitting your paper, you must select 'VSI:SLP-Behavior-mHealth' as the article type.