7-1 | IEEE Transactions on Multi-Scale Computing Systems: Special Issue on Design and Applications of Neuromorphic Computing System
IEEE Transactions on Multi-Scale Computing Systems Special Issue on Design and Applications of Neuromorphic Computing System
GUEST EDITORS: Hai (Helen) Li, hal66@pitt.edu, University of Pittsburgh Qinru Qiu, qiqiu@syr.edu, Syracuse University Yu Wang, yu-wang@mail.tsinghua.edu.cn, Tsinghua University
TOPIC SUMMARY: As artificial intelligence technology becomes pervasive in society and ubiquitous in our lives, the desire for embedded-everywhere and human-centric computational intelligence calls for a new intelligent computation paradigm. However, applications of machine learning and neural networks involve large, noisy, incomplete, natural data sets that do not lend themselves to convenient solutions from current systems. Neuromorphic systems, inspired by the working mechanisms of the human brain, possess a massively parallel architecture with closely coupled memory and computing. This special issue focuses on computing methodologies and systems across multiple technology scales, in order to accelerate the development of neuromorphic hardware systems and their adoption for machine learning applications. The topics of interest include, but are not limited to:
- Neuromorphic circuits, architectures, and systems - Hardware-software co-design and optimization - Computing systems for neural network applications (machine vision, machine learning, sensor networks, big data, signal processing & coding, pattern recognition, natural language processing, etc.) - Software and hardware architectures for deep learning - Bio-inspired computing models and/or hardware design
IMPORTANT DATES: Open for submissions in ScholarOne Manuscripts: November 1, 2015 Closed for submissions: January 15, 2016 Results of first round of reviews: April 30, 2016 Submission of revised manuscripts: May 31, 2016 Results of second round of reviews: July 31, 2016 Publication materials due: August 31, 2016
SUBMISSION GUIDELINES: Prospective authors are invited to submit their manuscripts electronically after the 'open for submissions' date, adhering to the IEEE Transactions on Multi-Scale Computing Systems guidelines (http://www.computer.org/portal/web/tmscs/author). Please submit your papers through the online system (https://mc.manuscriptcentral.com/tmscs-cs) and be sure to select the special issue name. Manuscripts should not be published or currently submitted for publication elsewhere. Please submit only full papers intended for review, not abstracts, to the ScholarOne portal. If requested, abstracts should be sent by e-mail to the Guest Editors directly.
|
7-2 | CSL Special Issue on Language and Interaction Technologies for Children
Special Issue on Language and Interaction Technologies for Children The link: goo.gl/CFSjTR
Description: The purpose of this special edition of CSL is to publish the results of new research in the area of speech, text and language technology applied specifically to children's voices, texts and applications. Children differ from adults at both the acoustic and linguistic levels, as well as in the way that they interact with people and technology. To address these issues appropriately, it is necessary to work across many disciplines, including cognitive science, robotics, speech processing, phonetics and linguistics, health and education.
Linguistic characteristics of children's speech are widely different from those of adults. This is manifested in their interactions, their writings and their speech. The processing of queries, texts and spoken interactions therefore opens challenging research issues on how to develop effective interaction, language, pronunciation and acoustic models for reliable processing of children's input. The behavior of children interacting with a computer or a mobile device is also different from that of adults. When using a conversational interface, for example, children have a different language strategy for initiating and guiding conversational exchanges, and may adopt different linguistic registers than adults. The aim of the special edition is to provide a platform for collecting mature research in this area.
Technical Scope: The special issue will focus on how children use text and speech in all aspects of communication, including human-human and human-computer interaction. We invite the submission of original, unpublished papers on topics including but not limited to:
- Speech Interfaces: acoustic and linguistic analysis of children's speech, discourse analysis of spoken language in child-machine interaction, age-dependent characteristics of spoken language, automatic speech recognition for children and spoken dialogue systems - Text Analysis: analysis of complexity and accuracy in children's text productions, understanding progression and development in orthography and syntax skills, use of vocabulary and registers, or handwriting skills - Multi-modality, Robotics and Avatars: multi-modal child-machine interaction, multi-modal input and output interfaces, including robotic interfaces, intrusive and non-intrusive devices for environmental data processing, pen or gesture/visual interfaces - User Modeling: user modeling and adaptation, usability studies accounting for age preferences in child-machine interaction - Cognitive Models: internal learning models, personality types, user-centered and participatory design - Application Areas: training systems, educational software, gaming interfaces, medical conditions such as autism or speech disorders, diagnostic tools and (speech) therapy
Important Dates: Paper submission deadline: March 1, 2016 Target publication date: January 1, 2017
Guest Editors: Kay Berkling, Cooperative State University, Berkling@dhbw-karlsruhe.de Martin Russell, University of Birmingham, m.j.russell@bham.ac.uk Keelan Evanini, ETS, kevanini@ets.org
|
7-3 | Computer Speech and Language Journal, Special Issue on Spoken Language Understanding and Interaction
Computer Speech and Language Journal, Special Issue on Spoken Language Understanding and Interaction
For more information:
http://www.journals.elsevier.com/computer-speech-and-language/call-for-papers/special-issue-on-spoken-language-understanding-and-interacti/
|
7-4 | Special issue of the TAL journal: ethics in natural language and speech processing
A special issue of the TAL journal will be devoted to ethics in natural language and speech processing (TALP). Here is the information, which can also be found at: http://tal-57-2.sciencesconf.org/
|
7-5 | Call for papers - Journal TIPA no 32, 2016
Call for papers - Journal TIPA no 32, 2016
Tipa. Travaux interdisciplinaires sur la parole et le langage
https://tipa.revues.org/
Conflict in discourse and discourse in conflict
Guest editors: Tsuyoshi KIDA* & Laura-Anca PAREPA**
*Language and Communication Science Laboratory (LCSL)-Institute for Comparative Research in Human and Social Sciences (ICR), University of Tsukuba, Faculty of Humanities and Social Sciences
**Japan Society for the Promotion of Science (JSPS) Research Fellow, University of Tsukuba, Faculty of Humanities and Social Sciences
Description
Nowadays, conflict between individuals, countries or groups seems omnipresent. The reasons for this are numerous, be they religious, cultural, ideological, territorial, patrimonial or familial. Conflict manifests itself in numerous forms of expression and resolution, which can include diplomatic declarations, civil demonstrations, ideological clashes, family disputes, intercultural misunderstandings, lawsuits or other negotiations.
In the public or private sphere, conflict is triggered through the process of discourse being produced, disseminated, interpreted and amplified, thereby affecting the opinions and attitudes of its receivers. At the same time, human beings are inherently endowed with the ability to manage and overcome these conflicts through lexical choice, ways of speaking, non-verbal communication, deconfliction techniques and conflict resolution methods. In other words, conflict is mediated through discourse.
The thematic concept for volume 32 of TIPA, conceived following a collaboration between a linguist and an expert in political discourse, proposes to focus on the relations between discourse and conflict, within various disciplinary frameworks, in order to address the following questions: What type of discourse engenders conflict? What are the features specific to conflictual discourse in terms of prosody, semantics, pragmatics, discursive or interactional structure? How can conflict be dealt with and resolved? How can identities and images be constructed or deconstructed through speech acts? How can lexical choice influence the success or failure of strategic narratives in an official speech?
These are just some of the questions to which linguistics and language sciences, as well as other neighbouring disciplines, can be sensitive and to which the scientific community may propose comprehensive answers by engaging in interdisciplinary research.
This call for papers is open to theoretical and/or empirical contributions coming from researchers and experts from a wide range of disciplines including but not limited to: discourse analysis (political, media, forensic, international relations), pragmatics, sociolinguistics, interactional analysis, rhetoric, semantics, intercultural communication, discourse prosody, multimodality, neurolinguistics, etc.
The language of publication will be either English or French. Each article should contain a detailed two-page abstract in the other language, in order to make papers in French more accessible to English-speaking readers, and vice versa, thus ensuring a larger audience for all the articles.
Important dates
June 30: due date for submission of articles
September 15: notification of acceptance
October 30: receipt of final version
December: publication.
Submission guidelines
Please send your proposal in three files to tipa@lpl-aix.fr:
- one file in .doc format containing the title, name and affiliation of the author(s);
- two anonymous files, in .doc and .pdf format.
Instructions for authors can be found at http://tipa.revues.org/222
|
7-6 | Special Issue of Journal of Electrical and Computer Engineering : Signal Processing Platforms and Algorithms for Real-life Communications and Listening to Digital Audio
Journal of Electrical and Computer Engineering
Special Issue on Signal Processing Platforms and Algorithms for Real-life Communications and Listening to Digital Audio
Call for Papers:
The design of modern electronic communication systems involves diverse scientific areas, including algorithms, architectures, and hardware development. The variety of existing multimedia devices gives rise to the development of platform-dependent signal processing algorithms, whose integration into the existing digital environment is a pressing problem for application engineers.
Considering a wide range of applications including hearing aids, real-life communications, and listening to digital audio, the following research areas are of particular importance: advanced time-frequency representations, audio user interfaces, audio and speech enhancement, assisted listening, and perception and phonation modeling.
This special issue aims at publishing papers presenting novel methodologies and techniques (including theoretical methods, algorithms, software, and hardware) corresponding to the research areas indicated above.
Potential topics include, but are not limited to:
- Speech modeling, analysis, and synthesis
- Signal processing for hearing aids and natural hearing
- Speech intelligibility improvement in noisy environments
- Low-delay speech and audio processing
- Automatic speech recognition
- Text-to-speech synthesis
- Speech-based assistive technologies
- Hardware platforms for real-time signal processing
- Rapid prototyping and project portability
Authors can submit their manuscripts via the Manuscript Tracking System at http://mts.hindawi.com/submit/journals/jece/signal.processing/spp/.
Journal of Electrical and Computer Engineering (Hindawi Publishing Corporation) is a peer-reviewed, Open Access journal (http://www.hindawi.com/journals/jece). According to the publisher's policy, publishing a research article in this journal requires an article processing charge ($600) that will be billed to the submitting author following the acceptance of an article for publication.
Important Dates:
- Manuscript Due: Friday, 24 June 2016
- First Round of Reviews: Friday, 16 September 2016
- Publication Date: Friday, 11 November 2016
Guest Editors:
- Wanggen Wan, Shanghai University, Shanghai, China, wanwg@staff.shu.edu.cn
- Manuel R. Zurera, University of Alcala, Madrid, Spain, manuel.rosa@uah.es
- Alexey Karpov, St. Petersburg Institute for Informatics and Automation of the Russian Academy of Sciences, St. Petersburg, Russia, karpov@iias.spb.su
Web: http://www.hindawi.com/journals/jece/si/324109/cfp/
|
7-7 | CfP TAL Special issue: Natural Language Processing and Ethics
CALL FOR ARTICLES Natural Language Processing and Ethics
Natural Language Processing (NLP) has always posed ethical or legal problems. These problems are particularly sensitive in this age of Big Data and of data duplication, areas in which NLP is involved. In addition to legal and economic matters (search for patents and rights associated with data/software), there are military issues (monitoring of conversations) and social issues (the 'right to be forgotten' imposed on Google).
The crucial problem today is access to data (including sensitive data) and the protection of citizens' personal privacy. Indeed, our domain produces applications considered to be effective for both purposes (data access and protection), but their known limitations are not clear to the general public or to governments.
Diversifying work on corpora has also led the community to process more and more sensitive sources, be they personal data, medical data or even data of a criminal nature.
For privacy protection, anonymizing data, whether spoken or written, is as much an industrial as an academic concern, sometimes with strong coverage constraints depending on the application or research needs, issues regarding the nature of the resources and the information to be anonymized, or legal limits.
Some NLP tools also raise ethical concerns, such as tools for plagiarism detection, fact checking and speaker identification. In addition, the advent of Web 2.0 and, with it, the development of crowdsourcing raise new questions as to how participants in the creation of linguistic resources should be considered.
This special issue of the TAL journal aims to highlight the NLP contributions to ethics and data protection and to uncover the limitations of the field both in terms of real possibilities (evaluation) and societal dangers.
We encourage submissions on all aspects related to ethics for and by Natural Language Processing, and in particular on the following problems or tasks:
- sensitive corpus processing, including medical, police or personal data
- language resource production, in particular using crowdsourcing, and ethics
- ethical questions linked to the use of tools or the results of NLP processing
- ethical questions related to NLP practices
- quality and ways of evaluating applications and/or language resources
- anonymization, de-identification and re-identification of NLP corpora
- plagiarism detection by NLP
- fact checking
- paralinguistics and ethics, in particular speaker identification or detection of pathologies
- historical perspective of ethics in NLP
- definition of ethics as applied to NLP
We also welcome position papers on the subject.
LANGUAGE
Manuscripts may be submitted in English or French. French-speaking authors are requested to submit in French. Submissions in English are accepted only if at least one of the authors is not a French speaker.
IMPORTANT DATES
** Extended deadlines **
Deadline for submission: end of March 2016
Notification to authors after first review: end of May 2016
Deadline for submission of revised version: beginning of July 2016
Notification to authors after second review: mid-July 2016
Deadline for submission of final version: end of September 2016
Publication: December 2016
PAPER SUBMISSION
Authors who intend to submit a paper are encouraged to upload their contribution (no more than 25 pages, PDF format) via the menu 'Paper submission' on the issue page of the journal. To do so, you will need an account on the Sciencesconf platform. To create an account, go to the Sciencesconf site and click on 'create account' next to the 'Connect' button at the top of the page. To submit, come back to this page, connect to your account and upload your submission.
TAL performs double-blind reviewing. Your paper should be anonymised.
Style sheets are available for download on the Web site of the journal (http://www.atala.org/IMG/zip/tal-style.zip).
Guest editors: Karën Fort (U. Paris-Sorbonne/STIH), Gilles Adda (LIMSI-CNRS/IMMI), K. Bretonnel Cohen (U. of Colorado, School of Medicine)
REVIEWING COMMITTEE
Maxime Amblard (U. de Lorraine/LORIA) Jean-Yves Antoine (U. de Tours/LI) Philippe Blache (CNRS / LPL) Jean-François Bonastre (LIA/U. d'Avignon) Alain Couillault (U. de La Rochelle/L3i) Gaël de Chalendar (CEA LIST) Patrick Drouin (U. de Montréal/OLST) Cécile Fabre (U. de Toulouse/CLLE-ERSS) Cyril Grouin (LIMSI-CNRS) Lynette Hirschman (MITRE Corporation) Larry Hunter (U. of Colorado, School of Medicine) Nancy Ide (Vassar College/Dpt of Computer Science) Juliette Kahn (LNE) Mark Liberman (UPenn/LDC) Joseph Mariani (LIMSI-CNRS/IMMI) Yann Mathet (U. de Caen/GREYC) Claude Montacié (U. Paris-Sorbonne/STIH) Jean-Philippe Prost (U. de Montpellier/LIRMM) Rafal Rzepka (Hokkaido University/Language Media Laboratory) Björn Schuller (University of Passau) Michel Simard (National Research Council Canada) Mariarosaria Taddeo (Oxford Internet Institute, University of Oxford)
THE JOURNAL
TAL (Traitement Automatique des Langues) is an international journal that has been published by ATALA (Association pour le Traitement Automatique des Langues) for the past 40 years with the support of the CNRS. Over the past few years, it has become an online journal, with the possibility of ordering paper versions. This does not, in any way, affect the selection and review process.
|
7-8 | CfP Neurocomputing: Special Issue on Machine Learning for Non-Gaussian Data Processing
Neurocomputing: Special Issue on Machine Learning for Non-Gaussian Data Processing
With the widespread explosion of sensing and computing, an increasing number of industrial applications and an ever-growing amount of academic research generate massive multi-modal data from multiple sources. The Gaussian distribution is the probability distribution ubiquitously used in statistics, signal processing, and pattern recognition. However, not all the data we process are Gaussian distributed. Recent studies have found that explicitly utilizing the non-Gaussian characteristics of data (e.g., data with bounded support, data with semi-bounded support, and data with an L1/L2-norm constraint) can significantly improve the performance of practical systems. Hence, it is of particular importance and interest to study non-Gaussian data thoroughly, together with the corresponding non-Gaussian statistical models (e.g., the beta distribution for bounded support data, the gamma distribution for semi-bounded support data, and the Dirichlet/vMF distributions for data with an L1/L2-norm constraint).
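As a purely illustrative aside (not part of the call), a minimal sketch of the idea behind bounded-support modelling might look as follows in Python; the synthetic data, distributions and sample size are arbitrary choices for the example, and SciPy is assumed to be available:

    # Hypothetical sketch: compare a bounded-support (beta) model with a
    # Gaussian on synthetic data confined to [0, 1].
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    data = rng.beta(a=2.0, b=5.0, size=1000)   # synthetic bounded-support data

    # Fit a beta distribution (support fixed to [0, 1]) and a Gaussian.
    a, b, loc, scale = stats.beta.fit(data, floc=0, fscale=1)
    mu, sigma = stats.norm.fit(data)

    # Compare average log-likelihoods: the bounded-support model typically
    # fits bounded data better than the Gaussian.
    ll_beta = stats.beta.logpdf(data, a, b, loc, scale).mean()
    ll_norm = stats.norm.logpdf(data, mu, sigma).mean()
    print(f"beta  avg log-likelihood: {ll_beta:.3f}")
    print(f"gauss avg log-likelihood: {ll_norm:.3f}")

On such bounded data the beta model usually attains a higher average log-likelihood than the Gaussian, which is the kind of gain from support-aware modelling that the call refers to.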
In order to analyze and understand such non-Gaussian data, the development of related learning theories, statistical models, and efficient algorithms becomes crucial. The scope of this special issue is to provide theoretical foundations and ground-breaking models and algorithms to address this challenge.
We invite authors to submit articles to address the aspects ranging from case studies of particular problems with non-Gaussian distributed data to novel learning theories and approaches, including (but not limited to):
- Machine Learning for Non-Gaussian Statistical Models
- Non-Gaussian Pattern Learning and Feature Selection
- Sparsity-aware Learning for Non-Gaussian Data
- Visualization of Non-Gaussian Data
- Dimension Reduction and Feature Selection for Non-Gaussian Data
- Non-Gaussian Convex Optimization
- Non-Gaussian Cross Domain Analysis
- Non-Gaussian Statistical Model for Multimedia Signal Processing
- Non-Gaussian Statistical Model for Source and/or Channel Coding
- Non-Gaussian Statistical Model for Biomedical Signal Processing
- Non-Gaussian Statistical Model for Bioinformatics
- Non-Gaussian Statistical Model in Social Networks
- Platforms and Systems for Non-Gaussian Data Processing
Timeline
SUBMISSION DEADLINE: Oct 15, 2016
ACCEPTANCE DEADLINE: June 15, 2017
EXPECTED PUBLICATION DATE: Sep 15, 2017
Guest Editors
- Associate Professor Zhanyu Ma, Beijing University of Posts and Telecommunications (BUPT)
- Professor Jen-Tzung Chien, National Chiao Tung University (NCTU)
- Associate Professor Zheng-Hua Tan, Aalborg University (AAU)
- Senior Lecturer Yi-Zhe Song, Queen Mary University of London (QMUL)
- Postdoctoral Researcher Jalil Taghia, Stanford University
- Associate Professor Ming Xiao, KTH Royal Institute of Technology
|
7-9 | IEEE Trans. on Affective Computing, Special Issue on Laughter Computing: towards machines able to deal with laughter
IEEE Transactions on Affective Computing Special Issue on Laughter Computing: towards machines able to deal with laughter
TOPIC SUMMARY: Laughter is a significant feature of human-human communication. It conveys various meanings and accompanies different emotions, such as amusement, relief, irony, or embarrassment. It has strong social dimensions: e.g., it can reduce the sense of threat in a group and facilitate sociability and cooperation. It may also have positive effects on learning, creativity, health, and well-being. Because of its relevance to human-human communication, research on laughter deserves serious attention from the Affective Computing community. Several recent initiatives, such as the Special Session on Laughter at the 6th International Conference on Affective Computing and Intelligent Interaction (ACII2015) and the series of Interdisciplinary Workshops on Laughter and other Non-Verbal Vocalizations in Speech, attest to the importance of the topic. Recent research projects have focused on laughter by investigating automatic laughter processing and by developing proofs of concept, experiments, and prototypes exploiting laughter to enhance human-computer interaction. Most research questions, however, are still unanswered. These concern, for example, theoretical issues (e.g., how can laughter be modelled and analysed as a multimodal phenomenon, including non-verbal full-body expression? What is the relation between different expressions of laughter, their perceived meanings and their social functions?), analysis (e.g., to what extent is multimodal analysis of laughter in complex social scenarios feasible and effective?), and synthesis techniques (e.g., can speech laughter be synthesized effectively?). Overcoming the lack of HCI/HRI/HHI applications that exploit the positive effects of laughter (together with a critical analysis of its negative effects) is also of high interest. The acceptability of laughing machines, whether virtual agents or robots, needs to be addressed as well. The goal of this special issue is to gather recent achievements in laughter computing in order to trigger new research directions in this field. The interest is in computational models that deal with laughter in human-computer and human-human interaction. Laughter is characterized by a complex expressive behaviour that involves all major expressive modalities: auditory, facial expressions, body movements and postural attitudes, and physiological signals. This special issue aims at taking into account the multimodal nature of laughter and its variety of contexts and meanings, and at providing an interdisciplinary perspective on ongoing scientific research and ICT developments.
Topics of interest include but are not limited to: - Multimodal laughter detection and synthesis - Computational models of laughter mimicry and contagion - Multimodal datasets of different laughter types in both controlled and ecological contexts - Laughter analysis in human-human communication - Individual differences in the expression of laughter - Modelling of different communicative meanings of laughter - Laughter-based applications in HCI/HRI/HHI and future user-centric media - Acceptability of laughter in HCI/HRI applications - Laughter elicitation mechanisms (e.g., 'computational humour', KANSEI) - Laughter as an expression of different emotions (e.g., amusement, embarrassment, relief, and so on)
IMPORTANT DATES: Deadline for submissions: June 24, 2016 Review results: September 16, 2016 Deadline for submission of revised manuscripts: October 14, 2016 Final reviews: November 11, 2016
GUEST EDITORS: - M. Mancini, DIBRIS, University of Genoa (Italy), maurizio.mancini@unige.it - R. Niewiadomski, DIBRIS, University of Genoa (Italy), radoslaw.niewiadomski@dibris.unige.it - S. Hashimoto, SHALAB, Dept. of Applied Physics, Waseda University (Japan), shuji@waseda.jp - M.E. Foster, School of Computing Science, University of Glasgow (Scotland, UK), maryellen.foster@glasgow.ac.uk - S. Scherer, Institute for Creative Technologies, University of Southern California (USA), scherer@ict.usc.edu - G. Volpe, DIBRIS, University of Genoa (Italy), gualtiero.volpe@unige.it
SUBMISSION GUIDELINES: Prospective authors are invited to submit their manuscripts electronically after the 'open for submissions' date, adhering to the IEEE Transactions on Affective Computing guidelines (http://www.computer.org/web/tac/author). Please submit your papers through the online system (https://mc.manuscriptcentral.com/taffc-cs) and be sure to select the special issue or special section name. Manuscripts should not be published or currently submitted for publication elsewhere. Please submit only full papers intended for review, not abstracts, to the ScholarOne portal. If requested, abstracts should be sent by e-mail to the Guest Editors directly.
|
7-10 | The journal Études Créoles - Nouvelle série
We are pleased to announce the online publication of the first issue of the journal Études Créoles - Nouvelle série.
Études Créoles is a journal that has existed since 1978 and has appeared regularly in print (one to two issues per year). It is the first journal in the discipline of creole studies, and it has always had a multidisciplinary orientation, welcoming articles on languages and linguistics, literatures, anthropology and education in the creole-speaking world.
Following the 14th Colloque des Études Créoles, held in Aix-en-Provence in October 2014 with the support of the Laboratoire Parole et Langage, the Comité International des Études Créoles entrusted the relaunch of the journal to a new editorial team.
The journal is now published in electronic, open-access format. Études Créoles - Nouvelle série, issue 2015-1: http://www.lpl-aix.fr/index.php?id=974
|
7-11 | CFP: Machine Translation Journal, Special Issue on Spoken Language Translation (updated)
Alex Waibel (Carnegie Mellon University / Karlsruhe Institute of Technology)
Sebastian Stüker (Karlsruhe Institute of Technology)
Marcello Federico (Fondazione Bruno Kessler)
Satoshi Nakamura (Nara Institute of Science and Technology)
Hermann Ney (RWTH Aachen University)
Dekai Wu (The Hong Kong University of Science and Technology)
---------------------------------------------------------------------------
Spoken language translation (SLT) is the science of automatic translation of spoken language. It may be tempting to view spoken language as nothing more than language (as in text) with an added spoken verbalization preceding it. Translation of speech could then be achieved by simply applying automatic speech recognition (ASR, or 'speech-to-text') before applying traditional machine translation (MT).
Unfortunately, such an overly simplistic approach does not address the complexities of the problem. Not only do speech recognition errors compound with errors in machine translation, but spoken language also differs considerably in form, structure and style, so that the combination of two text-based components is rendered ineffective. Moreover, automatic spoken language translation systems serve different practical goals than voice interfaces or text translators, so that integrated systems and their interfaces have to be designed carefully and appropriately (mobile, low-latency, audio-visual, online/offline, interactive, etc.) around their intended deployment.
Unlike written texts, human speech is not segmented into sentences, does not contain punctuation, is frequently ungrammatical, and contains many disfluencies and sentence fragments. At the same time, spoken language conveys information about the speaker, gender, emotion, emphasis, social form and relationships, and, in the case of dialog, there is discourse structure, turn-taking and back-channeling across languages to be considered. SLT systems, therefore, need to consider a host of additional concerns related to integrated recognition and translation performance, the use of social form and function, prosody, the suitability and (depending on deployment) effectiveness of human interfaces, and task performance under various speed, latency, context and language resource constraints. Due to continuing improvements in the underlying ASR and MT components as well as in integrated system designs, spoken language systems have become increasingly sophisticated and can handle increasingly complex sentences, more natural environments, and discourse and conversational styles, leading to a variety of successful practical deployments. In the light of 25 years of successful research and transition into practice, the MT Journal dedicates a special issue to the problem of Spoken Language Translation. We invite submissions of papers that address issues and problems pertaining to the development, design and deployment of spoken language translation systems. Papers on component technologies and methodology as well as on system designs and deployments of spoken language systems are both encouraged.
---------------------------------------------------------------------------
Submission guidelines:
- Authors should follow the 'Instructions for Authors' available on the MT Journal website: http://www.springer.com/computer/artificial/journal/10590
- Submissions must be limited to 25 pages (including references)
- Papers should be submitted online directly on the MT journal's submission website: http://www.editorialmanager.com/coat/default.asp, indicating this special issue in 'article type'
Important dates (Modified)
- Paper submission: August 30th 2016.
- Notification to authors: October 15th 2016.
- Camera-ready*: January 15th 2017.
* tentative - depending on the number of review rounds required
|
7-12 | CfP IEEE JSTSP Special Issue on Spoofing and Countermeasures for Automatic Speaker Verification (extended deadline)
Call for Papers IEEE Journal of Selected Topics in Signal Processing Special Issue on Spoofing and Countermeasures for Automatic Speaker Verification
Automatic speaker verification (ASV) offers a low-cost and flexible biometric solution to person authentication. While the reliability of ASV systems is now considered sufficient to support mass-market adoption, there are concerns that the technology is vulnerable to spoofing, also referred to as presentation attacks. Replayed, synthesized and converted speech spoofing attacks can all project convincing, high-quality speech signals that are representative of other, specific speakers and thus present a genuine threat to the reliability of ASV systems.
Recent years have witnessed a movement in the community to develop spoofing countermeasures, or presentation attack detection (PAD) technology, to help protect ASV systems from fraud. These efforts culminated in the first standard evaluation platform for the assessment of spoofing and countermeasures of automatic speaker verification, the Automatic Speaker Verification Spoofing and Countermeasures Challenge (ASVspoof), which was held as a special session at Interspeech 2015.
This special issue is expected to present original papers describing the very latest developments in spoofing and countermeasures for ASV. The focus of the special issue includes, but is not limited to the following topics related to spoofing and countermeasures for ASV:
- vulnerability analysis of previously unconsidered spoofing methods; - advanced methods for standalone countermeasures; - advanced methods for joint ASV and countermeasure modelling; - information theoretic approaches for the assessment of spoofing and countermeasures; - spoofing and countermeasures in adverse acoustic and channel conditions; - generalized and speaker-dependent countermeasures; - speaker obfuscation, impersonation, de-identification, disguise, evasion and adapted countermeasures; - analysis and comparison of human performance in the face of spoofing; - new evaluation protocols, datasets, and performance metrics for the assessment of spoofing and countermeasures for ASV; - countermeasure methods using other modality or multimodality that are applicable to speaker verification
Also invited are submissions of exceptional quality with a tutorial or overview nature. Creative papers outside the areas listed above but related to the overall scope of the special issue are also welcome. Prospective authors can contact the Guest Editors to ascertain interest on such topics.
Prospective authors should visit http://www.signalprocessingsociety.org/publications/periodicals/jstsp/ for submission information. Manuscripts should be submitted at http://mc.manuscriptcentral.com/jstsp-ieee and will be peer reviewed according to standard IEEE processes.
Important Dates: - Manuscript submission due: August 15, 2016 (extended) - First review completed: October 15, 2016 - Revised manuscript due: December 1, 2016 - Second review completed: February 1, 2017 - Final manuscript due: March 1, 2017 - Publication date: June, 2017
Guest Editors: Junichi Yamagishi, National Institute of Informatics, Japan, email: jyamagis@nii.ac.jp Nicholas Evans, EURECOM, France, email: evans@eurecom.fr Tomi Kinnunen, University of Eastern Finland, Finland, email: tomi.kinnunen@uef.fi Phillip L. De Leon, New Mexico State University & VoiceCipher, USA, email: pdeleon@nmsu.edu Isabel Trancoso, INESC-ID, Portugal, email: Isabel.Trancoso@inesc-id.pt
|
7-13 | Special Issue on Biosignal-based Spoken Communication in the IEEE/ACM Transactions on Audio, Speech, and Language Processing
Call for Papers
Special Issue on Biosignal-based Spoken Communication
in the IEEE/ACM Transactions on Audio, Speech, and Language Processing (TASLP)
Speech is a complex process emitting a wide range of biosignals, including, but not limited to, acoustics. These biosignals (stemming from the articulators, the articulator muscle activities, the neural pathways, or the brain itself) can be used to circumvent limitations of conventional speech processing in particular, and to gain insights into the process of speech production in general. Research on biosignal-based speech capturing and processing is a wide and very active field at the intersection of various disciplines, ranging from engineering, electronics and machine learning to medicine, neuroscience, physiology, and psychology. Consequently, a variety of methods and approaches are thoroughly investigated, aiming towards the common goal of creating biosignal-based speech processing devices and applications for everyday use, as well as for spoken communication research purposes. We aim at bringing together studies covering these various modalities, research approaches, and objectives in a special issue of the IEEE Transactions on Audio, Speech, and Language Processing entitled Biosignal-based Spoken Communication.
For this purpose we will invite papers describing previously unpublished work in the following broad areas:
- Capturing methods for speech-related biosignals: tracing of articulatory activity (e.g. EMA, PMA, ultrasound, video), electrical biosignals (e.g. EMG, EEG, ECG, NIRS), acoustic sensors for capturing whispered / murmured speech (e.g. NAM microphone), etc.
- Signal processing for speech-related biosignals: feature extraction, denoising, source separation, etc.
- Speech recognition based on biosignals (e.g. silent speech interface, recognition in noisy environment, etc.).
- Mapping between speech-related biosignals and speech acoustics (e.g. articulatory-acoustic mapping)
- Modeling of speech units: articulatory or phonetic features, visemes, etc.
- Multi-modality and information fusion in speech recognition
- Challenges of dealing with whispered, mumbled, silently articulated, or inner speech
- Neural Representations of speech and language
- Novel approaches in physiological studies of speech planning and production
- Brain-computer-interface (BCI) for restoring speech communication
- User studies in biosignal-based speech processing
- End-to-end systems and devices
- Applications in rehabilitation and therapy
Submission Deadline: November 2016
Notification of Acceptance: January 2017
Final Manuscript Due: April 2017
Tentative Publication Date: First half of 2017
Editors:
Jonathan Brumberg (Speech-Language-Hearing Department, University of Kansas) brumberg@ku.edu
|
7-14 | CSL special issue 'Recent advances in speaker and language recognition and characterization'
Computer Speech and Language Special Issue on Recent advances in speaker and language recognition and characterization
Call for Papers
The goal of this special issue is to highlight the current state of research efforts on speaker and language recognition and characterization. New ideas about features, models, tasks, datasets and benchmarks are emerging, making this a particularly exciting time.
In the last decade, speaker recognition (SR) has gained importance in the field of speech science and technology, with new applications beyond forensics, such as large-scale filtering of telephone calls, automated access through voice profiles, speaker indexing and diarization, etc. Current challenges involve the use of increasingly short signals to perform verification, the need for algorithms that are robust to all kinds of extrinsic variability, such as noise and channel conditions, while allowing for a certain amount of intrinsic variability (due to health issues, stress, etc.), and the development of countermeasures against spoofing and tampering attacks. On the other hand, language recognition (LR) has also attracted remarkable interest from the community as an auxiliary technology for speech recognition, dialogue systems and multimedia search engines, but especially for large-scale filtering of telephone calls. An active area of research specific to LR is dialect and accent identification. Other issues that must be dealt with in LR tasks (such as short signals, channel and environment variability, etc.) are basically the same as for SR.
The features, modeling approaches and algorithms used in SR and LR are closely related, though not equally effective, since the two tasks differ in several ways. In the last couple of years, following the success of Deep Learning in image and speech recognition, the use of Deep Neural Networks both as feature extractors and as classifiers/regressors has opened exciting new research horizons.
Until recently, speaker and language recognition technologies were mostly driven by the NIST evaluation campaigns: the Speaker Recognition Evaluations (SRE) and Language Recognition Evaluations (LRE), which focused on large-scale verification of telephone speech. In recent years, other initiatives (such as the 2008/2010/2012 Albayzin LRE, the 2013 SRE in Mobile Environment, the RSR2015 database and the 2015 Multi-Genre Broadcast Challenge) have widened the range of applications and the research focus. Authors are encouraged to use these benchmarks to test their ideas.
This special issue aims to cover state-of-the-art works; however, to provide readers with a state-of-the-art background on the topic, we will invite one survey paper, which will undergo peer review. Topics of interest include, but are not limited to:
o Speaker and language recognition, verification, identification o Speaker and language characterization o Features for speaker and language recognition o Speaker and language clustering o Multispeaker segmentation, detection, and diarization o Language, dialect, and accent recognition o Robustness in channels and environment o System calibration and fusion o Speaker recognition with speech recognition o Multimodal speaker recognition o Speaker recognition in multimedia content o Machine learning for speaker and language recognition o Confidence estimation for speaker and language recognition o Corpora and tools for system development and evaluation o Low-resource (lightly supervised) speaker and language recognition o Speaker synthesis and transformation o Human and human-assisted recognition of speaker and language o Spoofing and tampering attacks: analysis and countermeasures o Forensic and investigative speaker recognition o Systems and applications
Note that all papers will go through the same rigorous review process as regular papers, with a minimum of two reviewers per paper.
Guest Editors
Eduardo Lleida, University of Zaragoza, Spain
Luis J. Rodríguez-Fuentes, University of the Basque Country, Spain
Important dates
Submission deadline: September 16, 2016 Notifications of final decision: March 31, 2017 Scheduled publication: April, 2017
|