ISCApad #173 |
Sunday, November 11, 2012 by Chris Wellekens |
7-1 | Speech communication: Special issue on Processing Under-Resourced Languages
Correction: this is a special issue of Speech Communication, not of Signal Processing as previously announced.

Call for Papers
Special Issue on Processing Under-Resourced Languages
The creation of language and acoustic resources for any given spoken language is typically a costly task. For example, a large amount of time and money is required to properly create annotated speech corpora for automatic speech recognition (ASR), domain-specific text corpora for language modeling (LM), etc. The development of speech technologies (ASR, text-to-speech) for well-resourced languages (such as English, French or Mandarin) is less constrained by this issue, and consequently high-performance commercial systems are already on the market. For under-resourced languages, by contrast, this issue is typically the main obstacle.
Given this, the scientific community's interest in porting, adapting, or creating language and acoustic resources, or even models, for low-resourced languages has been growing recently, and several algorithms and methods of adaptation have been proposed and experimented with. At the same time, workshops and special sessions have been organized on this topic.
This special issue focuses on research and development of new tools based on speech technologies for less-resourced national languages, mainly those used in the following large geographical regions: Eastern Europe, South and Southeast Asia, West Asia, North Africa, Sub-Saharan Africa, South and Central America, and Oceania. The special issue welcomes work presenting problems and peculiarities of the targeted languages as they apply to spoken language technologies, including automatic speech recognition, text-to-speech, speech-to-speech translation, and spoken dialogue systems in an internationalized context. When developing speech-based technologies for these languages, researchers face many new problems, from a lack of audio databases and linguistic resources (lexicons, grammars, text collections) to the inefficiency of existing methods for language and acoustic modeling, and limited infrastructure for the creation of relevant resources. They often have to deal with novel linguistic phenomena that are poorly studied from a speech technology perspective (for instance, clicks in southern African languages, tone in many languages of the world, language switching in multilingual systems, rich morphology, etc.).
Well-written papers on speech technologies for the targeted languages are encouraged, and papers describing original results (theoretical and/or experimental) obtained for under-resourced languages, but also relevant to well-resourced languages, are invited as well. Papers from any country and any authors may be accepted, provided they present new speech studies concerning the languages of interest to the special issue. Submissions from countries where issues related to under-resourced languages are a practical reality are strongly encouraged.
Important Dates:
Submission deadline: 1 August 2012
Notification of acceptance: 1 February 2013
Final manuscript due: April 2013
Tentative publication date: Summer 2013
Editors:
Etienne Barnard, North-West University, South Africa
Laurent Besacier, Laboratory of Informatics of Grenoble, France
Alexey Karpov, SPIIRAS, Saint-Petersburg, Russia
Tanja Schultz, University of Karlsruhe, Germany
| ||
7-2 | Special issue of ACM Transactions on Speech and Language Processing on Multiword Expressions

Call for Papers
ACM Transactions on Speech and Language Processing
Special Issue on Multiword Expressions: from Theory to Practice and Use
multiword.sf.net/tslp2011si
Deadline for Submissions: May 15th, 2012

Multiword expressions (MWEs) range over linguistic constructions like idioms (a frog in the throat, kill some time), fixed phrases (per se, by and large, rock'n roll), noun compounds (traffic light, cable car), compound verbs (draw a conclusion, go by [a name]), etc. While easily mastered by native speakers, their interpretation poses a major challenge for computational systems, due to their flexible and heterogeneous nature. Surprisingly, MWEs are not nearly as frequent in NLP resources (dictionaries, grammars) as they are in real-world text, where they have been reported to account for half of the entries in the lexicon of a speaker and over 70% of the terms in a domain. Thus, MWEs are a key issue, and a current weakness, for tasks like natural language parsing and generation, as well as real-life applications such as machine translation.

In spite of several proposals for MWE representation, ranging along the continuum from words-with-spaces to compositional approaches connecting lexicon and grammar, to date it remains unclear how MWEs should be represented in electronic dictionaries, thesauri and grammars. New methodologies that take into account the type of an MWE and its properties are needed for efficiently handling manually and/or automatically acquired expressions in NLP systems. Moreover, we also need strategies to represent deep attributes and semantic properties of these multiword entries. While there is no unique definition or classification of MWEs, most researchers agree on some major classes, such as named entities, collocations, multiword terminology and verbal expressions.
These classes, though, are very heterogeneous in terms of syntactic and semantic properties, and should thus be treated differently by applications. Type-dependent analyses could shed some light on the best methodologies for integrating MWE knowledge into our analysis and generation systems.

Evaluation is also a crucial aspect of MWE research. Various evaluation techniques have been proposed, from manual inspection of top-n candidates to classic precision/recall measures. The use of the tools and datasets freely available on the MWE community website (multiword.sf.net/PHITE.php?sitesig=FILES) is encouraged when evaluating MWE treatment. However, application-oriented techniques are needed to give a clear indication of whether the acquired MWEs are really useful. Research on the impact of MWE handling in applications such as parsing, generation, information extraction, machine translation and summarization can help to answer these questions.

We call for papers that present research on theoretical and practical aspects of the computational treatment of MWEs, specifically focusing on MWEs in applications such as machine translation, information retrieval and question answering. We also strongly encourage submissions on processing MWEs in the language of social media and micro-blogs. The focus of the special issue thus includes, but is not limited to, the following topics:
* MWE treatment in applications such as the ones mentioned above;
* Lexical representation of MWEs in dictionaries and grammars;
* Corpus-based identification and extraction of MWEs;
* Application-oriented evaluation of MWE treatment;
* Type-dependent analysis of MWEs;
* Multilingual applications (e.g. machine translation, bilingual dictionaries);
* Parsing and generation of MWEs, especially processing of MWEs in the language of social media and micro-blogs;
* MWEs and user interaction;
* MWEs in linguistic theories like HPSG, LFG and minimalism, and their contribution to applications;
* Relevance of research on first and second language acquisition of MWEs for applications;
* Crosslinguistic studies on MWEs.

Submission Procedure
Authors should follow the ACM TSLP manuscript preparation guidelines described on the journal web site http://tslp.acm.org and submit an electronic copy of their complete manuscript through the journal manuscript submission site http://mc.manuscriptcentral.com/acm/tslp. Authors are required to specify that their submission is intended for this special issue by including on the first page of the manuscript and in the field 'Author's Cover Letter' the note 'Submitted for the special issue on Multiword Expressions'.

Schedule
Submission deadline: May 15th, 2012
Notification of acceptance: September 15th, 2012
Final manuscript due: November 30th, 2012

Program Committee
* Iñaki Alegria, University of the Basque Country (Spain)
* Dimitra Anastasiou, University of Bremen (Germany)
* Eleftherios Avramidis, DFKI GmbH (Germany)
* Timothy Baldwin, University of Melbourne (Australia)
* Francis Bond, Nanyang Technological University (Singapore)
* Aoife Cahill, ETS (USA)
* Helena Caseli, Federal University of Sao Carlos (Brazil)
* Yu Tracy Chen, DFKI GmbH (Germany)
* Paul Cook, University of Melbourne (Australia)
* Ann Copestake, University of Cambridge (UK)
* Béatrice Daille, Nantes University (France)
* Gaël Dias, University of Caen Basse-Normandie (France)
* Stefan Evert, University of Darmstadt (Germany)
* Roxana Girju, University of Illinois at Urbana-Champaign (USA)
* Chikara Hashimoto, National Institute of Information and Communications Technology (Japan)
* Kyo Kageura, University of Tokyo (Japan)
* Martin Kay, Stanford University and Saarland University (USA & Germany)
* Su Nam Kim, University of Melbourne (Australia)
* Dietrich Klakow, Saarland University (Germany)
* Philipp Koehn, University of Edinburgh (UK)
* Ioannis Korkontzelos, University of Manchester (UK)
* Brigitte Krenn, Austrian Research Institute for Artificial Intelligence (Austria)
* Evita Linardaki, Hellenic Open University (Greece)
* Takuya Matsuzaki, Tsujii Lab, University of Tokyo (Japan)
* Yusuke Miyao, Japan National Institute of Informatics (NII) (Japan)
* Preslav Nakov, Qatar Foundation (Qatar)
* Gertjan van Noord, University of Groningen (The Netherlands)
* Diarmuid Ó Séaghdha, University of Cambridge (UK)
* Jan Odijk, University of Utrecht (The Netherlands)
* Pavel Pecina, Charles University (Czech Republic)
* Scott Piao, Lancaster University (UK)
* Thierry Poibeau, CNRS and École Normale Supérieure (France)
* Maja Popovic, DFKI GmbH (Germany)
* Ivan Sag, Stanford University (USA)
* Agata Savary, Université François Rabelais Tours (France)
* Violeta Seretan, University of Geneva (Switzerland)
* Ekaterina Shutova, University of Cambridge (UK)
* Joaquim Ferreira da Silva, New University of Lisbon (Portugal)
* Lucia Specia, University of Wolverhampton (UK)
* Sara Stymne, Linköping University (Sweden)
* Stan Szpakowicz, University of Ottawa (Canada)
* Beata Trawinski, University of Vienna (Austria)
* Kiyoko Uchiyama, National Institute of Informatics (Japan)
* Ruben Urizar, University of the Basque Country (Spain)
* Tony Veale, University College Dublin (Ireland)
* David Vilar, DFKI GmbH (Germany)
* Begoña Villada Moirón, RightNow (The Netherlands)
* Tom Wasow, Stanford University (USA)
* Shuly Wintner, University of Haifa (Israel)
* Yi Zhang, DFKI GmbH and Saarland University (Germany)

Guest Editors
* Valia Kordoni, DFKI GmbH and Saarland University (Germany)
* Carlos Ramisch, University of Grenoble (France) and Federal University of Rio Grande do Sul (Brazil)
* Aline Villavicencio, Federal University of Rio Grande do Sul (Brazil) and Massachusetts Institute of Technology (USA)

Contact
For any inquiries regarding the special issue, please send an email to mweguesteditor@gmail.com
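As a simple illustration of the corpus-based MWE identification mentioned in this call, adjacent word pairs can be ranked by an association measure such as pointwise mutual information (PMI); the sketch below is generic and the tiny corpus is invented, not taken from any submission or dataset named above.

```python
# Illustrative sketch: ranking bigram candidates for MWE extraction with
# pointwise mutual information (PMI), a classic association measure.
import math
from collections import Counter

def pmi_bigrams(tokens, min_count=2):
    """Return PMI scores for adjacent word pairs seen at least min_count times."""
    n = len(tokens)
    unigram = Counter(tokens)
    bigram = Counter(zip(tokens, tokens[1:]))
    return {
        pair: math.log((c / (n - 1)) / ((unigram[pair[0]] / n) * (unigram[pair[1]] / n)))
        for pair, c in bigram.items() if c >= min_count
    }

# Tiny invented corpus: the recurring compound 'traffic light' scores highly,
# flagging it as an MWE candidate for manual inspection of top-n pairs.
tokens = "the traffic light is red and the traffic light is green".split()
scores = pmi_bigrams(tokens)
```

On real corpora one would compare several association measures and evaluate the ranked candidates against a gold lexicon with precision/recall, as the call suggests.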
| ||
7-3 | CfP Special issue of EURASIP Journal on Audio, Speech, and Music Processing: Sparse Modeling for Speech and Audio Processing

Call for Papers
EURASIP Journal on Audio, Speech, and Music Processing
Special Issue on Sparse Modeling for Speech and Audio Processing

Sparse modeling and compressive sensing are rapidly developing fields across signal processing and machine learning, focused on the problems of variable selection in high-dimensional datasets and signal reconstruction from few training examples. With the increasing amount of high-dimensional speech and audio data available, the need to efficiently represent and search through these data spaces is becoming vitally important. The challenges lie in selecting highly predictive signal features and adaptively finding a dictionary which best represents the signal. Overcoming these challenges is likely to require efficient and effective algorithms, mainly focused on l1-regularized optimization, basis pursuit, Lasso sparse regression, missing-data problems, and various extensions. Despite significant advances in these fields, a number of open issues remain when applying sparse models in real-life applications, e.g. the stability and interpretability of sparse models, model selection, group/fused sparsity, and evaluation of the results. Furthermore, sparse modeling has ubiquitous applications in speech and audio processing, including dimensionality reduction, model regularization, speech/audio compression/reconstruction, acoustic/audio feature selection, acoustic modeling, speech recognition, blind source separation, and many others. Our goal is to assemble a set of new algorithms and applications and to advance the state of the art in speech and audio processing.
In light of these growing research activities and their importance, we openly invite papers describing various aspects of sparse modeling and related techniques, as well as their successful applications. Submissions must not have been previously published and must have a specific connection to audio, speech, and music processing. The topics of particular interest include, but are not limited to:
• Sparse representation and compressive sensing
• Sparse modeling and regression
• Sparse modeling for model regularization
• Sparse modeling for speech recognition
• Sparse modeling for language processing
• Sparse modeling for source separation
• Sparse modeling for music processing
• Deep learning for sparse models
• Practical applications of sparse modeling
• Machine learning algorithms, techniques and applications

Before submission, authors should carefully read the journal's Instructions for Authors, located at http://asmp.eurasipjournals.com/authors/instructions. Prospective authors should submit an electronic copy of their complete manuscript through the SpringerOpen submission system at http://asmp.eurasipjournals.com/manuscript, according to the following timetable:
Manuscript due: June 15, 2012, extended to October 15, 2012
First round of reviews: September 1, 2012
Publication date: December 1, 2012

Guest editors:
Jen-Tzung Chien (E-mail: jtchien@mail.ncku.edu.tw), National Cheng Kung University, Tainan, Taiwan
Bhuvana Ramabhadran (E-mail: bhuvana@us.ibm.com), IBM T. J. Watson Research Center, Yorktown Heights, NY, USA
Tomoko Matsui (E-mail: tmatsui@ism.ac.jp), The Institute of Statistical Mathematics, Tokyo, Japan
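For readers new to the area, the Lasso estimator named in this call can be illustrated with a minimal proximal-gradient (ISTA) sketch; the toy design matrix, regularization weight, and step size below are invented for illustration, not taken from the call or from any submission.

```python
# Minimal ISTA sketch for the Lasso problem mentioned in the call:
#   min_w  0.5 * ||X w - y||^2 + lam * ||w||_1
# The l1 penalty is handled by soft-thresholding, the proximal operator
# that produces exact zeros and hence sparse solutions.

def soft_threshold(v, t):
    """Shrink each coordinate toward zero by t (prox of t*||.||_1)."""
    return [(abs(x) - t) * (1.0 if x > 0 else -1.0) if abs(x) > t else 0.0
            for x in v]

def lasso_ista(X, y, lam, step=1.0, iters=200):
    m, n = len(X), len(X[0])
    w = [0.0] * n
    for _ in range(iters):
        # Gradient of the smooth part: X^T (X w - y)
        r = [sum(X[i][j] * w[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(X[i][j] * r[i] for i in range(m)) for j in range(n)]
        # Gradient step on the quadratic, then prox step on the l1 term
        w = soft_threshold([w[j] - step * g[j] for j in range(n)], step * lam)
    return w

# Toy orthonormal design: the Lasso shrinks the strong coefficient and
# drives the weakly supported one exactly to zero.
X = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]
y = [3.0, 0.1, 0.0]
w = lasso_ista(X, y, lam=1.0)
```

The step size of 1.0 is valid here because the largest eigenvalue of X^T X is 1; in general ISTA requires a step no larger than the reciprocal of that eigenvalue.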
| ||
7-4 | Phonetica: Speech Production and Perception across the Segment-Prosody Divide: Data – Theory – Modelling Call for Papers
| ||
7-5 | ACM Transactions on Speech and Language Processing, Special Issue on Multiword Expressions: from Theory to Practice and Use (program committee listed under 7-2 above)
| ||
7-6 | CSL on Information Extraction and Retrieval from Spoken Documents Call for Papers
| ||
7-7 | IEEE Journal of Selected Topics in Signal Processing (JSTSP) Special Issue on 'Advances in Spoken Dialogue Systems and Mobile Interfaces: Theory and Applications'

Call For Papers: IEEE Journal of Selected Topics in Signal Processing (JSTSP) Special Issue
on 'Advances in Spoken Dialogue Systems and Mobile Interfaces: Theory and Applications'
http://www.signalprocessingsociety.org/uploads/special_issues_deadlines/spoken_dialogue.pdf
Recently, there has been an array of advances in both the theory and practice of spoken dialog
systems, especially on mobile devices. On the theoretical side, foundational models and algorithms
(e.g., Partially Observable Markov Decision Process (POMDP), reinforcement learning,
Gaussian process models, etc.) have advanced the state-of-the-art on a number of fronts.
For example, techniques have been presented which improve system robustness, enabling
systems to achieve high performance even when faced with recognition inaccuracies.
Other methods have been proposed for learning from interactions, to improve performance
automatically. Still other methods have shown how systems can make use of speech input
and output incrementally and in real time, raising levels of naturalness and responsiveness.
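The robustness to recognition errors described above rests on maintaining a probability distribution over hidden dialog states rather than committing to the top ASR hypothesis. A minimal sketch of the Bayesian belief update at the core of POMDP-style dialog state tracking follows; the goal set, observation, and confusion probabilities are invented for illustration, not drawn from any system named in this call.

```python
# Hypothetical sketch of the belief update used in POMDP-style dialog state
# tracking: the system keeps a distribution over hidden user goals and applies
# Bayes' rule after each (possibly misrecognized) user turn.

def belief_update(belief, obs, likelihood):
    """belief: {goal: prob}; obs: ASR hypothesis;
    likelihood: {(goal, obs): P(obs | goal)}."""
    posterior = {g: p * likelihood[(g, obs)] for g, p in belief.items()}
    z = sum(posterior.values())  # normalizing constant
    return {g: p / z for g, p in posterior.items()}

# Invented example: two possible user goals and a noisy recognizer that
# outputs the correct word 80% of the time.
belief = {"restaurant": 0.5, "hotel": 0.5}
likelihood = {
    ("restaurant", "restaurant"): 0.8, ("restaurant", "hotel"): 0.2,
    ("hotel", "restaurant"): 0.2, ("hotel", "hotel"): 0.8,
}
belief = belief_update(belief, "restaurant", likelihood)
# Some probability mass stays on "hotel" even after hearing "restaurant",
# which is what allows the system to recover from recognition errors later.
```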
On applications, interesting new results on spoken dialog systems are becoming available
from both research settings and deployments 'in the wild', for example on deployed services
such as Apple's Siri, Google's voice actions, Bing voice search, Nuance's Dragon Go!,
and Vlingo. Speech input is now commonplace on smart phones, and is well-established
as a convenient alternative to keyboard input, for tasks such as control of phone functionalities,
dictation of messages, and web search. Recently, intelligent personal assistants have
begun to appear, via both applications and features of the operating system. Many of these
new assistants are much more than a straightforward keyboard replacement - they are
first-class multi-modal dialogue systems that support sustained interactions, using spoken
language, over multiple turns. New system architectures and engineering algorithms have
also been investigated in research labs, which have led to more forward-looking spoken dialog
systems. This special issue seeks to draw together advances in spoken dialogue systems from
both research and industry. Submissions covering any aspect of spoken dialog systems are
welcome. Specific (but not exhaustive) topics of interest include all of the following in relation
to spoken dialogue systems and mobile interfaces:
- theoretical foundations of spoken dialogue system design, learning, evaluation, and simulation
- dialog tracking, including explicit representations of uncertainty in dialog systems, such as Bayesian networks; domain representation and detection
- dialog control, including reinforcement learning, (PO)MDPs, decision theory, utility functions, and personalization for dialog systems
- foundational technologies for dialog systems, including acoustic models, language models, language understanding, text-to-speech, and language generation; incremental approaches to input and output; usage of affect
- applications, settings and practical evaluations, such as voice search, text message dictation, multi-modal interfaces, and usage while driving

Papers must
be submitted online. The bulk of the issue will be original research papers, and priority will be given to papers with high novelty and originality. Papers providing a
tutorial/overview/survey are also welcome, although space is available for only a
limited number of papers of this type. Tutorial/overview/survey papers will be evaluated
on the basis of overall impact.
- Manuscript submission: http://mc.manuscriptcentral.com/jstsp-ieee
- Information for authors: http://www.signalprocessingsociety.org/publications/periodicals/jstsp/jstsp-author-info/
Dates:
Extended submission date of papers: July 22, 2012
First review: September 15, 2012
Revised submission: October 10, 2012
Second review: November 10, 2012
Submission of final material: November 20, 2012
Guest editors: - Kai Yu, co-lead guest editor (Shanghai Jiao Tong University)
- Jason Williams, co-lead guest editor (Microsoft Research)
- Brahim Chaib-draa (Laval University)
- Oliver Lemon (Heriot-Watt University)
- Roberto Pieraccini (ICSI)
- Olivier Pietquin (SUPELEC)
- Pascal Poupart (University of Waterloo)
- Steve Young (University of Cambridge)
| ||
7-8 | Journal of Speech Sciences (JoSS)

Call for Papers
Journal of Speech Sciences (JoSS)
ISSN: 2236-9740 <http://journalofspeechsciences.org/>
Indexed in Linguistics Abstracts and the Directory of Open Access Journals
Volume 2, number 2 (special issue)

This is the CFP for the fourth issue of the Journal of Speech Sciences (JoSS). The JoSS covers experimental research on speech, language and linguistic communication processes. Coverage also includes articles dealing with pathological topics, or articles of an interdisciplinary nature, provided that experimental and linguistic principles underlie the work reported. Experimental approaches are emphasized in order to stimulate the development of new methodologies, new annotated corpora, and new techniques aimed at fully testing current theories of speech production and perception, as well as phonetic and phonological theories and their interfaces.

For this issue, the journal will receive original, previously unpublished contributions on
corpus building for experimental prosody research and related themes. The purpose of this Special Issue is to present new architectures, new challenges, and new databases related to experimental prosody, and to contribute to the exchange of data, the elaboration of parallel corpora, and the proposal of common environments for cross-linguistic research.

Contributions should be sent through the journal website (www.journalofspeechsciences.org)
by August 15th, 2012. The primary language of the Journal is English. Contributions in Portuguese, Spanish (Castilian) and French are also accepted, provided a one-page (500-600 words) abstract in English is given. The goal of this policy is to ensure wide dissemination of quality research written in these three Romance languages.
The contributions will be reviewed by at least two independent reviewers, though the
final decision as to publication is taken by the two editors. For preparing the manuscript,
please follow the instructions on the JoSS webpage. If accepted, authors must use the template provided on the website to prepare the paper for publication.
Important Dates:
Submission deadline: August 15th, 2012*
Notification of acceptance: September 2012
Final manuscript due: November 2012
Publication date: December 2012
* Papers arriving after that date will follow the schedule for the next issue.

About the JoSS
The Journal of Speech Sciences (JoSS) is an open access journal which follows the principles of the Directory of Open Access Journals (DOAJ), meaning that its readers can freely read, download, copy, distribute, print, search, or link to the full texts of any article electronically published in the journal. It is accessible at <http://www.journalofspeechsciences.org>. The JoSS covers experimental research on speech, language
and linguistic communication processes. The JoSS is supported by the initiative of the
Luso-Brazilian Association of Speech Sciences (LBASS), <http://www.lbass.org>.
Founded on 16 February 2007, the LBASS aims to promote, stimulate and disseminate research and teaching in the Speech Sciences in Brazil and Portugal, as well as to establish a channel with sister associations abroad.

Editors:
Plinio A. Barbosa (Speech Prosody Studies Group/State University of Campinas, Brazil)
Sandra Madureira (LIACC/Catholic University of São Paulo, Brazil)
E-mail: {pabarbosa, smadureira}@journalofspeechsciences.org
| ||
7-9 | Revue TAL special issue: Noise in the Signal: Error Management in Natural Language Processing (Du bruit dans le signal : gestion des erreurs en traitement automatique des langues). Second Call for Contributions
| ||
7-10 | CfP ACM TiiS special issue on Machine Learning for Multiple Modalities in Interactive Systems and Robots

Call for Papers
Special Issue of the ACM Transactions on Interactive Intelligent Systems on
MACHINE LEARNING FOR MULTIPLE MODALITIES IN INTERACTIVE SYSTEMS AND ROBOTS
Main submission deadline: February 28th, 2013
http://tiis.acm.org/special-issues.html

AIMS AND SCOPE
This special issue will highlight research that applies machine learning to robots and other systems that interact with users through more than one modality, such as speech, touch, gestures, and vision. Interactive systems such as multimodal interfaces, robots, and virtual agents often use some combination of these modalities to communicate meaningfully. For example, a robot may coordinate its speech with its actions, taking into account visual feedback during their execution. Alternatively, a multimodal system can adapt its input and output modalities to the user's goals, workload, and surroundings. Machine learning provides interactive systems with opportunities to improve performance not only of individual components but also of the system as a whole. However, machine learning methods that encompass multiple modalities of an interactive system are still relatively hard to find. This special issue aims to help fill this gap. The dimensions listed below indicate the range of work that is relevant to the special issue. Each article will normally represent one or more points on each of these dimensions. In case of doubt about the relevance of your topic, please contact the special issue associate editors.
TOPIC DIMENSIONS
System Types
- Interactive robots
- Embodied virtual characters
- Avatars
- Multimodal systems
Machine Learning Paradigms
- Reinforcement learning
- Active learning
- Supervised learning
- Unsupervised learning
- Any other learning paradigm
Functions to Which Machine Learning Is Applied
- Multimodal recognition and understanding in dialog with users
- Multimodal generation to present information through several channels
- Alignment of gestures with verbal output during interaction
- Adaptation of system skills through interaction with human users
- Any other functions, especially combining two or all of speech, touch, gestures, and vision

SPECIAL ISSUE ASSOCIATE EDITORS
- Heriberto Cuayahuitl, Heriot-Watt University, UK (contact: h.cuayahuitl[at]gmail[dot]com)
- Lutz Frommberger, University of Bremen, Germany
- Nina Dethlefs, Heriot-Watt University, UK
- Antoine Raux, Honda Research Institute, USA
- Matthew Marge, Carnegie Mellon University, USA
- Hendrik Zender, Nuance Communications, Germany

IMPORTANT DATES
- By February 28th, 2013: Submission of manuscripts
- By June 12th, 2013: Notification about decisions on initial submissions
- By September 10th, 2013: Submission of revised manuscripts
- By November 9th, 2013: Notification about decisions on revised manuscripts
- By December 9th, 2013: Submission of manuscripts with final minor changes
- Starting January 2014: Publication of the special issue on the TiiS website, in the ACM Digital Library, and subsequently as a printed issue

HOW TO SUBMIT
Please see the instructions for authors on the TiiS website (tiis.acm.org).

ABOUT ACM TiiS
TiiS (pronounced 'T double-eye S'), launched in 2010, is an ACM journal for research about intelligent systems that people interact with.
| ||
7-11 | Foundations and Trends in Signal Processing

Foundations and Trends in Signal Processing (www.nowpublishers.com/sig) has published the following issue:
Volume 5, Issue 3: Multidimensional Filter Banks and Multiscale Geometric Representations
By Minh N. Do (University of Illinois at Urbana-Champaign, USA) and Yue M. Lu (Harvard University, USA)
http://dx.doi.org/10.1561/2000000012
The link will take you to the article abstract. If your library has a subscription, you will be able to download the PDF of the article. To purchase the book version of this issue, go to the secure Order Form: https://www.nowpublishers.com/bookorder.aspx?doi=2000000012&product=SIG. You will pay the SIG member discount price of US$35/Euro 35 (plus shipping) by quoting the Promotion Code: SIG20012. Euro prices are valid in Europe only.
|