ISCApad #193
Friday, July 11, 2014 by Chris Wellekens
7-1 | Special Issue on 'Signal Processing Techniques for Assisted Listening' of IEEE SIGNAL PROCESSING MAGAZINE
Aims and Scope
This special issue focuses on the technical challenges of assisted listening from a signal processing perspective. Prospective authors are invited to contribute tutorial and survey articles that articulate signal processing methodologies critical for applying assisted listening techniques to mobile phones and other communication devices. Of particular interest is the role of signal processing in combining multimedia content, voice communication, and voice pick-up in various real-world settings. Tutorial and survey papers are solicited on advances in signal processing that apply in particular to the following applications:
These can include the following suggested topics as they relate to the above applications:
Submission Process
Important Dates: The expected publication date for this special issue is March 2015.
Guest Editors
7-2 | CSL Special Issue on Speech and Language for Interactive Robots
Aims and Scope
Speech-based communication with robots faces important challenges for its application in real-world scenarios. In contrast to conventional interactive systems, a talking robot always needs to take its physical environment into account when communicating with users. This environment is typically unstructured, dynamic, and noisy, and raises important challenges.

The objective of this special issue is to highlight research that applies speech and language processing to robots that interact with people through speech as the main modality of interaction. For example, a robot may need to communicate with users via distant speech recognition and understanding under constantly changing degrees of noise. Alternatively, the robot may coordinate its verbal turn-taking behaviour with its non-verbal behaviour, such as generating speech and gestures at the same time. Speech and language technologies have the potential to equip robots so that they can interact more naturally with humans, but their effectiveness remains to be demonstrated. This special issue aims to help fill this gap.

The topics listed below indicate the range of work that is relevant to this special issue, where each article will normally represent one or more topics. In case of doubt about the relevance of your topic, please contact the special issue associate editors.
Topics
Special Issue Associate Editors
Heriberto Cuayáhuitl, Heriot-Watt University, UK (contact: hc213@hw.ac.uk)
Paper Submission
All manuscripts and any supplementary materials must be submitted through the Elsevier Editorial System at http://ees.elsevier.com/csl/. Detailed submission guidelines are available in the 'Guide to Authors' on the journal website. Please select 'SI: SL4IR' as the Article Type when submitting manuscripts. For further details, or for a more in-depth discussion about topics or submission, please contact the guest editors.
Dates
23 May 2014: Submission of manuscripts
7-3 | CSL Special Issue on Speech Production in Speech Technologies (DEADLINE EXTENDED)
The use of speech production knowledge and data to enhance speech recognition and other technologies is being actively pursued by a number of widely dispersed research groups using different approaches. The types of speech production information may include continuous articulatory measurements or discrete-valued articulatory or phonological features. These quantities might be directly measured, manually labeled, or unobserved but treated as latent variables in a statistical model. Applications of production-based ideas include improved speech recognition, silent speech interfaces, language training tools, and clinical models of speech disorders.

The goal of this special issue is to highlight the current state of research efforts that use speech production data or knowledge. The range of data, techniques, and applications currently being explored is growing, and is also benefiting from new ideas in machine learning, making this a particularly exciting time for this research. A recent workshop, the 2013 Interspeech satellite workshop on Speech Production in Automatic Speech Recognition (SPASR), as well as the special session on Articulatory Data Acquisition and Processing, brought together a number of researchers in this area. This special issue will expand on the topics included in these events and beyond. Submissions focusing on research in this area are solicited.
Topics of interest include, but are not limited to:
- The collection, labeling, and use of speech production data
- Acoustic-to-articulatory inversion
- Speech production models in speech recognition, synthesis, voice conversion, and other technologies
- Silent speech interfaces
- Atypical speech production and pathology
- Articulatory phonology and models of speech production
Submission procedure
Prospective authors should follow the regular guidelines of the Computer Speech and Language journal for electronic submission (http://ees.elsevier.com/csl). During submission, authors must select 'SI: Speech Production in ST' as the Article Type.
Review procedure
All manuscripts will be submitted through the editorial submission system and reviewed by at least three experts.
Schedule:
Eric Fosler-Lussier, Ohio State U., fosler@cse.ohio-state.edu
Mark Hasegawa-Johnson, U. Illinois at Urbana-Champaign, jhasegaw@uiuc.edu
Karen Livescu, TTI-Chicago, klivescu@ttic.edu
Frank Rudzicz, U. Toronto, frank@cs.toronto.edu
7-4 | Special Issue of ACM Transactions on Accessible Computing (TACCESS) on Speech and Language Interaction for Daily Assistive Technology
Guest Editors: François Portet, Frank Rudzicz, Jan Alexandersson, Heidi Christensen

Assistive technologies (AT) allow individuals with disabilities to do things that would otherwise be difficult or impossible. Many assistive technologies involve providing universal access, such as modifications to televisions or telephones to make them accessible to those with vision or hearing impairments. An important sub-discipline within this community is Augmentative and Alternative Communication (AAC), which focuses on communication technologies for those with impairments that interfere with some aspect of human communication, including spoken or written modalities. Another important sub-discipline is Ambient Assisted Living (AAL), which facilitates independent living; these technologies break down the barriers faced by people with physical or cognitive impairments and support their relatives and caregivers. These technologies are expected to improve users' quality of life and promote independence, accessibility, learning, and social connectivity.

Speech and natural language processing (NLP) can be used in AT/AAC in a variety of ways, including improving the intelligibility of unintelligible speech and providing communicative assistance for frail individuals or those with severe motor impairments. The range of applications and technologies in AAL that can rely on speech and NLP technologies is very large, and the number of individuals actively working within these research communities is growing, as evidenced by the successful INTERSPEECH 2013 satellite workshop on Speech and Language Processing for Assistive Technologies (SLPAT).
In particular, one of the greatest challenges in AAL is to design smart spaces (e.g., at home, work, or hospital) and intelligent companions that anticipate user needs, enable users to interact with and in their daily environment, and provide ways to communicate with others. This technology can benefit visually, physically, speech-, or cognitively impaired persons. Topics of interest for submission to this special issue include (but are not limited to):
Submission process
Contributions must not have been previously published or be under consideration for publication elsewhere, although substantial extensions of conference or workshop papers will be considered, as long as they adhere to ACM's minimum standards regarding prior publication (http://www.acm.org/pubs/sim_submissions.html). Studies involving experimentation with real target users will be appreciated. All submissions must be prepared according to the Guide for Authors published on the journal website at http://www.rit.edu/gccis/taccess/. Submissions should follow the journal's suggested writing format (http://www.gccis.rit.edu/taccess/authors.html) and should be submitted through Manuscript Central (http://mc.manuscriptcentral.com/taccess), indicating that the paper is intended for the special issue. All papers will be subject to the peer review process, and final decisions regarding publication will be based on this review.
Important dates:
◦ Extended deadline for full paper submission: 28th April 2014
◦ Response to authors: 30th June 2014
◦ Revised submission deadline: 31st August 2014
◦ Notification of acceptance: 31st October 2014
◦ Final manuscripts due: 30th November 2014
7-5 | Revue TAL: special issue on spoken language processing (updated)
7-6 | IEEE Journal of Selected Topics in Signal Processing: Special Issue on Spatial Audio
Spatial audio is an area that has gained popularity in recent years. Audio reproduction setups have evolved from traditional two-channel loudspeaker setups towards multi-channel loudspeaker setups. Advances in acoustic signal processing have even made it possible to create a surround sound listening experience using traditional stereo speakers and headphones. Finally, there has been increased interest in creating different sound zones in the same acoustic space (also referred to as personal audio). At the same time, the computational capacity provided by mobile audio playback devices has increased significantly. These developments enable new possibilities for advanced audio signal processing, such that in the future we can record, transmit, and reproduce spatial audio in ways that have not been possible before. In addition, there have been fundamental advances in our understanding of 3D audio.

Due to the increasing number of different formats and reproduction systems for spatial audio, ranging from headphones to 22.2 speaker systems, it is a major challenge to ensure interoperability between formats and systems, and consistent delivery of high-quality spatial audio. Therefore, the MPEG committee is in the process of establishing new standards for 3D audio content delivery.

The scope of this Special Issue on Spatial Audio is open to contributions ranging from the measurement and modeling of an acoustic space to the reproduction and perception of spatial audio. While individual submissions may focus on any of the sub-topics listed below, papers describing larger spatial audio signal processing systems will be considered as well. We invite authors to address some of the following spatial audio aspects:
Prospective authors should visit http://www.signalprocessingsociety.org/publications/periodicals/jstsp/ for information on paper submission. Manuscripts should be submitted at http://mc.manuscriptcentral.com/jstsp-ieee.
Manuscript Submission: July 1, 2014
Guest Editors:
7-7 | CfP Journal of Natural Language Engineering - Special Issue on “Machine Translation Using Comparable Corpora”
CALL FOR PAPERS
Statistical machine translation based on parallel corpora has been very successful. The major search engines' translation systems, which are used by millions of people, primarily use this approach, and it has made it possible to add new language pairs in a fraction of the time required by more traditional rule-based methods. In contrast, research on comparable corpora is still at an earlier stage. Comparable corpora can be defined as monolingual corpora covering roughly the same subject area in different languages but without being exact translations of each other.

However, despite its tremendous success, the use of parallel corpora in MT has a number of drawbacks:
1) It has been shown that translated language is somewhat different from original language; for example, Klebanov & Flor showed that 'associative texture' is lost in translation.
2) As they require translation, parallel corpora will always be a far scarcer resource than comparable corpora. This is a severe drawback for a number of reasons:
a) Among the about 7000 world languages, of which 600 have a written form, the vast majority are of the 'low resource' type.
b) The number of possible language pairs increases with the square of the number of languages. When using parallel corpora, one bitext is needed for each language pair; when using comparable corpora, one monolingual corpus per language suffices.
c) For improved translation quality, translation systems specialized in particular genres and domains are desirable, but it is far more difficult to acquire appropriate parallel rather than comparable training corpora.
d) As language evolves over time, the training corpora should be updated on a regular basis. Again, this is more difficult in the parallel case.
For such reasons it would be a big step forward if it were possible to base statistical machine translation on comparable rather than parallel corpora: the acquisition of training data would be far easier, and the unnatural 'translation bias' (source language shining through) within the training data could be avoided. But is there any evidence that this is possible? Motivation for using comparable corpora in MT research comes from a cognitive perspective: experience tells us that persons who have learned a second language completely independently of their mother tongue can nevertheless translate between the languages. That is, human performance shows that there must be a way to bridge the gap between languages that does not rely on parallel data. Using parallel data for MT is of course a convenient shortcut. But avoiding this shortcut by doing MT based on comparable corpora may well be a key to a better understanding of human translation, and to better MT quality.

Work on comparable corpora in the context of MT has been ongoing for almost 20 years. It has turned out to be a very hard problem, but as it is among the grand challenges in multilingual NLP, interest has steadily increased. Apart from the increase in publications, this can be seen from the considerable number of research projects (such as ACCURAT and TTC) that are fully or partially devoted to MT using comparable corpora. Given also the success of the workshop series on 'Building and Using Comparable Corpora' (BUCC), which is now in its seventh year, and following the publication of a related book (http://www.springer.com/computer/ai/book/978-3-642-20127-1), we think that it is now time to devote a journal special issue to this field. It is meant to bundle the latest top-class research, make it available to everybody working in the field, and at the same time give an overview of the state of the art to all interested researchers.
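To make the idea concrete, a classic baseline for bridging languages without parallel data is bilingual lexicon extraction via context vectors: a small seed dictionary maps a source word's co-occurrence profile into the target language, where candidate translations are ranked by vector similarity. The sketch below is an illustration only, not part of the call; the toy corpora, seed dictionary, and function names are invented for the example.

```python
# Context-vector approach to bilingual lexicon extraction from comparable
# corpora (a sketch of the general idea, not any specific system's code).
from collections import Counter
from math import sqrt

def context_vectors(sentences, window=2):
    """Count co-occurrences of each word with its neighbours within a window."""
    vecs = {}
    for sent in sentences:
        for i, w in enumerate(sent):
            ctx = sent[max(0, i - window):i] + sent[i + 1:i + 1 + window]
            vecs.setdefault(w, Counter()).update(ctx)
    return vecs

def cosine(a, b):
    dot = sum(a[k] * b.get(k, 0) for k in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def translate(word, src_vecs, tgt_vecs, seed):
    """Map the source context vector through the seed dictionary, rank targets."""
    mapped = Counter({seed[k]: v for k, v in src_vecs[word].items() if k in seed})
    return sorted(tgt_vecs, key=lambda t: cosine(mapped, tgt_vecs[t]), reverse=True)

# Toy comparable corpora: same domain, not translations of each other.
en = [["the", "dog", "barks", "loudly"], ["the", "dog", "eats", "meat"]]
de = [["der", "hund", "bellt", "laut"], ["der", "hund", "frisst", "fleisch"]]
seed = {"the": "der", "barks": "bellt", "eats": "frisst"}  # tiny seed dictionary

ranking = translate("dog", context_vectors(en), context_vectors(de), seed)
print(ranking[0])  # prints "hund": it shares the most translated context
```

In practice the quality of such a baseline depends heavily on corpus comparability, the size and domain fit of the seed dictionary, and the association measure used; much of the research solicited below addresses exactly these issues.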
TOPICS OF INTEREST
We solicit contributions including but not limited to the following topics:
• Comparable corpora based MT systems (CCMTs)
• Architectures for CCMTs
• CCMTs for less-resourced languages
• CCMTs for less-resourced domains
• CCMTs dealing with morphologically rich languages
• CCMTs for spoken translation
• Applications of CCMTs
• CCMT evaluation
• Open source CCMT systems
• Hybrid systems combining SMT and CCMT
• Hybrid systems combining rule-based MT and CCMT
• Enhancing phrase-based SMT using comparable corpora
• Expanding phrase tables using comparable corpora
• Comparable corpora based processing tools/kits for MT
• Methods for mining comparable corpora from the Web
• Applying Harris' distributional hypothesis to comparable corpora
• Induction of morphological, grammatical, and translation rules from comparable corpora
• Machine learning techniques using comparable corpora
• Parallel corpora vs. pairs of non-parallel monolingual corpora
• Extraction of parallel segments or paraphrases from comparable corpora
• Extraction of bilingual and multilingual translations of single words and multi-word expressions, proper names, and named entities from comparable corpora
IMPORTANT DATES
December 1, 2014: Paper submission deadline
February 1, 2015: Notification
May 1, 2015: Deadline for revised papers
July 1, 2015: Final notification
September 1, 2015: Final paper due
GUEST EDITORS
Reinhard Rapp, Universities of Aix Marseille (France) and Mainz (Germany)
Serge Sharoff, University of Leeds (UK)
Pierre Zweigenbaum, LIMSI, CNRS (France)
FURTHER INFORMATION
Please use the following e-mail address to contact the guest editors: jnle.bucc (at) limsi (dot) fr
Further details on paper submission will be made available in due course at the BUCC website: http://comparable.limsi.fr/bucc2014/bucc-introduction.html
7-8 | Special issue 54-2 of the journal TAL, 'Entités Nommées' (Named Entities)
Special issue 54-2 of the journal TAL, titled 'Entités Nommées' (Named Entities) and edited by Sophia Ananiadou, Nathalie Friburger, and Sophie Rosset, is now online at: http://www.atala.org/-Entites-Nommees-
Contents:
Sophia Ananiadou, Nathalie Friburger, Sophie Rosset: Preface (http://www.atala.org/Preface,692)
Damien Nouvel, Jean-Yves Antoine, Nathalie Friburger, Arnaud Soulet: Mining annotation rules for named entity recognition (http://www.atala.org/Fouille-de-regles-d-annotation)
Mohamed Hatmi, Christine Jacquin, Sylvain Meignier, Emmanuel Morin, Solen Quiniou: Integrating named entity recognition into the speech recognition process (http://www.atala.org/Integration-de-la-reconnaissance)
Wei Wang, Romaric Besançon, Olivier Ferret, Brigitte Grau: Extraction and clustering of relations between entities for unsupervised information extraction (http://www.atala.org/Extraction-et-regroupement-de)
Souhir Gahbiche-Braham, Hélène Bonneau-Maynard, François Yvon: Automatic processing of named entities in Arabic: detection and translation (http://www.atala.org/Traitement-automatique-des-entites)
Book reviews: http://www.atala.org/Notes-de-lecture,687
PhD thesis abstracts: http://www.atala.org/Resumes-de-these,686