ISCApad #175
Thursday, January 10, 2013, by Chris Wellekens
7-1 | ACM Transactions on Speech and Language Processing: Special Issue on Multiword Expressions: from Theory to Practice and Use
* Iñaki Alegria, University of the Basque Country (Spain)
* Dimitra Anastasiou, University of Bremen (Germany)
* Eleftherios Avramidis, DFKI GmbH (Germany)
* Timothy Baldwin, University of Melbourne (Australia)
* Francis Bond, Nanyang Technological University (Singapore)
* Aoife Cahill, ETS (USA)
* Helena Caseli, Federal University of Sao Carlos (Brazil)
* Yu Tracy Chen, DFKI GmbH (Germany)
* Paul Cook, University of Melbourne (Australia)
* Ann Copestake, University of Cambridge (UK)
* Béatrice Daille, Nantes University (France)
* Gaël Dias, University of Caen Basse-Normandie (France)
* Stefan Evert, University of Darmstadt (Germany)
* Roxana Girju, University of Illinois at Urbana-Champaign (USA)
* Chikara Hashimoto, National Institute of Information and Communications Technology (Japan)
* Kyo Kageura, University of Tokyo (Japan)
* Martin Kay, Stanford University and Saarland University (USA & Germany)
* Su Nam Kim, University of Melbourne (Australia)
* Dietrich Klakow, Saarland University (Germany)
* Philipp Koehn, University of Edinburgh (UK)
* Ioannis Korkontzelos, University of Manchester (UK)
* Brigitte Krenn, Austrian Research Institute for Artificial Intelligence (Austria)
* Evita Linardaki, Hellenic Open University (Greece)
* Takuya Matsuzaki, Tsujii Lab, University of Tokyo (Japan)
* Yusuke Miyao, Japan National Institute of Informatics (NII) (Japan)
* Preslav Nakov, Qatar Foundation (Qatar)
* Gertjan van Noord, University of Groningen (The Netherlands)
* Diarmuid Ó Séaghdha, University of Cambridge (UK)
* Jan Odijk, University of Utrecht (The Netherlands)
* Pavel Pecina, Charles University (Czech Republic)
* Scott Piao, Lancaster University (UK)
* Thierry Poibeau, CNRS and École Normale Supérieure (France)
* Maja Popovic, DFKI GmbH (Germany)
* Ivan Sag, Stanford University (USA)
* Agata Savary, Université François Rabelais Tours (France)
* Violeta Seretan, University of Geneva (Switzerland)
* Ekaterina Shutova, University of Cambridge (UK)
* Joaquim Ferreira da Silva, New University of Lisbon (Portugal)
* Lucia Specia, University of Wolverhampton (UK)
* Sara Stymne, Linköping University (Sweden)
* Stan Szpakowicz, University of Ottawa (Canada)
* Beata Trawinski, University of Vienna (Austria)
* Kiyoko Uchiyama, National Institute of Informatics (Japan)
* Ruben Urizar, University of the Basque Country (Spain)
* Tony Veale, University College Dublin (Ireland)
* David Vilar, DFKI GmbH (Germany)
* Begoña Villada Moirón, RightNow (The Netherlands)
* Tom Wasow, Stanford University (USA)
* Shuly Wintner, University of Haifa (Israel)
* Yi Zhang, DFKI GmbH and Saarland University (Germany)
* Valia Kordoni, DFKI GmbH and Saarland University (Germany)
* Carlos Ramisch, University of Grenoble (France) and Federal University of Rio Grande do Sul (Brazil)
* Aline Villavicencio, Federal University of Rio Grande do Sul (Brazil) and Massachusetts Institute of Technology (USA)
7-2 | CSL (Computer Speech and Language) on Information Extraction and Retrieval from Spoken Documents: Call for Papers
7-3 | IEEE Journal of Selected Topics in Signal Processing (JSTSP): Special Issue on 'Advances in Spoken Dialogue Systems and Mobile Interfaces: Theory and Applications'
Call for Papers
http://www.signalprocessingsociety.org/uploads/special_issues_deadlines/spoken_dialogue.pdf
Recently, there has been an array of advances in both the theory and practice of spoken dialog
systems, especially on mobile devices. On the theory side, foundational models and algorithms
(e.g., Partially Observable Markov Decision Process (POMDP), reinforcement learning,
Gaussian process models, etc.) have advanced the state-of-the-art on a number of fronts.
For example, techniques have been presented which improve system robustness, enabling
systems to achieve high performance even when faced with recognition inaccuracies.
Other methods have been proposed for learning from interactions, to improve performance
automatically. Still other methods have shown how systems can make use of speech input
and output incrementally and in real time, raising levels of naturalness and responsiveness.
On the applications side, interesting new results on spoken dialog systems are becoming available
from both research settings and deployments 'in the wild', for example on deployed services
such as Apple's Siri, Google's voice actions, Bing voice search, Nuance's Dragon Go!,
and Vlingo. Speech input is now commonplace on smart phones, and is well-established
as a convenient alternative to keyboard input, for tasks such as control of phone functionalities,
dictation of messages, and web search. Recently, intelligent personal assistants have
begun to appear, via both applications and features of the operating system. Many of these
new assistants are much more than a straightforward keyboard replacement - they are
first-class multi-modal dialogue systems that support sustained interactions, using spoken
language, over multiple turns. New system architectures and engineering algorithms have
also been investigated in research labs, which have led to more forward-looking spoken dialog
systems. This special issue seeks to draw together advances in spoken dialogue systems from
both research and industry. Submissions covering any aspect of spoken dialog systems are
welcome. Specific (but not exhaustive) topics of interest include all of the following in relation to spoken dialogue systems and mobile interfaces:
- theoretical foundations of spoken dialogue system design, learning, evaluation, and simulation
- dialog tracking, including explicit representations of uncertainty in dialog systems, such as Bayesian networks; domain representation and detection
- dialog control, including reinforcement learning, (PO)MDPs, decision theory, utility functions, and personalization for dialog systems
- foundational technologies for dialog systems, including acoustic models, language models, language understanding, text-to-speech, and language generation; incremental approaches to input and output; usage of affect
- applications, settings, and practical evaluations, such as voice search, text message dictation, multi-modal interfaces, and usage while driving
Papers must be submitted online. The bulk of the issue will be original research papers, and priority will be given to papers with high novelty and originality. Tutorial/overview/survey papers are also welcome, although space is available for only a limited number of papers of this type; they will be evaluated on the basis of overall impact.
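One of the topics listed above, dialog tracking with explicit representations of uncertainty, can be illustrated with a minimal Bayesian belief update over hypothesized user goals. The sketch below is purely illustrative: the goal names, transition probabilities, and observation likelihoods are invented for this example and are not taken from any system mentioned in the call.

```python
# Toy sketch of Bayesian dialog-state tracking: maintain a probability
# distribution (belief) over hypothesized user goals and update it after
# each noisy speech-recognition observation.

def update_belief(belief, transition, obs_likelihood):
    """One POMDP-style belief update: b'(s') ~ P(o|s') * sum_s P(s'|s) * b(s)."""
    states = list(obs_likelihood)
    unnormalized = {}
    for s2 in states:
        # Predict the next state by marginalizing over the current belief.
        predicted = sum(belief[s] * transition[s][s2] for s in belief)
        # Weight the prediction by how well s2 explains the observation.
        unnormalized[s2] = obs_likelihood[s2] * predicted
    z = sum(unnormalized.values())
    return {s: p / z for s, p in unnormalized.items()}

# Two hypothetical user goals; the recognizer output sounded like 'flight'.
belief = {"flight": 0.5, "hotel": 0.5}
transition = {"flight": {"flight": 0.9, "hotel": 0.1},
              "hotel": {"flight": 0.1, "hotel": 0.9}}
obs_likelihood = {"flight": 0.8, "hotel": 0.2}  # P(ASR hypothesis | goal)
belief = update_belief(belief, transition, obs_likelihood)
```

Starting from a uniform belief, an observation that sounds like 'flight' shifts the belief toward the 'flight' goal; real trackers apply the same update over much larger state spaces and noisier recognition output.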
- Manuscript submission: http://mc.manuscriptcentral.com/jstsp-ieee
- Information for authors: http://www.signalprocessingsociety.org/publications/periodicals/jstsp/jstsp-author-info/
Dates
- Extended submission deadline: July 22, 2012
- First review: September 15, 2012
- Revised submission: October 10, 2012
- Second review: November 10, 2012
- Submission of final material: November 20, 2012
Guest editors:
- Kai Yu, co-lead guest editor (Shanghai Jiao Tong University)
- Jason Williams, co-lead guest editor (Microsoft Research)
- Brahim Chaib-draa (Laval University)
- Oliver Lemon (Heriot-Watt University)
- Roberto Pieraccini (ICSI)
- Olivier Pietquin (SUPELEC)
- Pascal Poupart (University of Waterloo)
- Steve Young (University of Cambridge)
7-4 | Journal of Speech Sciences (JoSS): Call for Papers
Journal of Speech Sciences (JoSS), ISSN: 2236-9740 <http://journalofspeechsciences.org/>
Indexed in Linguistics Abstracts and the Directory of Open Access Journals
Volume 2, number 2 (special issue)
This is the CFP for the fourth issue of the Journal of Speech Sciences (JoSS). The JoSS covers
experimental work on scientific aspects of speech, language and linguistic
communication processes. Coverage also includes articles dealing with pathological topics, or
articles of an interdisciplinary nature, provided that experimental and linguistic principles
underlie the work reported. Experimental approaches are emphasized in order to stimulate
the development of new methodologies, of new annotated corpora, of new techniques aiming
at fully testing current theories of speech production, perception, as well as phonetic and
phonological theories and their interfaces. For this issue, the journal team will receive original, previously unpublished contributions on
Corpora building for experimental prosody research or related themes. The purpose of this
Special Issue is to present new architectures, new challenges, new databases related to
experimental prosody to contribute to the exchange of data, to the elaboration of parallel
corpora, to the proposition of common environments to do cross-linguistic research. The contributions should be sent through the journal website (www.journalofspeechsciences.org)
by August 15th, 2012. The primary language of the Journal is English. Contributions in
Portuguese, Spanish (Castilian), and French are also accepted, provided a one-page
(500-600 words) abstract in English is given. The goal of this policy is to ensure a
wide dissemination of quality research written in these three Romance languages.
The contributions will be reviewed by at least two independent reviewers, though the
final decision as to publication is taken by the two editors. For preparing the manuscript,
please follow the instructions at the JoSS webpage. If accepted, the authors must use the
template given in the website for preparing the paper for publication.
Important Dates
- Submission deadline: August 15th, 2012*
- Notification of acceptance: September 2012
- Final manuscript due: November 2012
- Publication date: December 2012
* Papers arriving after that date will follow the schedule for the next issue.
About the JoSS
The Journal of Speech Sciences (JoSS) is an open access journal which follows the principles
of the Directory of Open Access Journals (DOAJ), meaning that its readers can freely read,
download, copy, distribute, print, search, or link to the full texts of any article electronically
published in the journal. It is accessible at <http://www.journalofspeechsciences.org>. The JoSS covers experimental aspects that deal with scientific aspects of speech, language
and linguistic communication processes. The JoSS is supported by the initiative of the
Luso-Brazilian Association of Speech Sciences (LBASS), <http://www.lbass.org>.
Founded on 16 February 2007, the LBASS aims at promoting, stimulating and
disseminating research and teaching in Speech Sciences in Brazil and Portugal, as well as
establishing a channel between sister associations abroad.
Editors
Plinio A. Barbosa (Speech Prosody Studies Group/State University of Campinas, Brazil)
Sandra Madureira (LIACC/Catholic University of São Paulo, Brazil)
E-mail: {pabarbosa, smadureira}@journalofspeechsciences.org
7-5 | Revue TAL: Noise in the Signal: Error Management in Natural Language Processing. Second Call for Contributions
7-6 | CfP ACM TiiS special issue on Machine Learning for Multiple Modalities in Interactive Systems and Robots
Call for Papers
Special Issue of the ACM Transactions on Interactive Intelligent Systems on MACHINE LEARNING FOR MULTIPLE MODALITIES IN INTERACTIVE SYSTEMS AND ROBOTS
Main submission deadline: February 28th, 2013
http://tiis.acm.org/special-issues.html
AIMS AND SCOPE
This special issue will highlight research that applies machine learning to robots and other systems that interact with users through more than one modality, such as speech, touch, gestures, and vision. Interactive systems such as multimodal interfaces, robots, and virtual agents often use some combination of these modalities to communicate meaningfully. For example, a robot may coordinate its speech with its actions, taking into account visual feedback during their execution. Alternatively, a multimodal system can adapt its input and output modalities to the user's goals, workload, and surroundings. Machine learning provides interactive systems with opportunities to improve performance not only of individual components but also of the system as a whole. However, machine learning methods that encompass multiple modalities of an interactive system are still relatively hard to find. This special issue aims to help fill this gap. The dimensions listed below indicate the range of work that is relevant to the special issue. Each article will normally represent one or more points on each of these dimensions. In case of doubt about the relevance of your topic, please contact the special issue associate editors.
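The aims above mention systems that adapt their output modalities to the user's surroundings. As a toy sketch of the reinforcement-learning paradigm in that setting, the following assumes an invented two-state, two-action problem (quiet vs. noisy environment, speech vs. screen output); none of the state names, actions, or reward values come from the call.

```python
# Toy illustration: tabular learning of which output modality to use in
# which environment, from simulated binary user feedback.
import random

random.seed(0)

STATES = ["quiet", "noisy"]
ACTIONS = ["speech", "screen"]

def reward(state, action):
    # Hypothetical feedback: speech works in quiet rooms, screen in noisy ones.
    return 1.0 if (state, action) in {("quiet", "speech"), ("noisy", "screen")} else 0.0

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha, epsilon = 0.1, 0.2  # learning rate and exploration rate
for _ in range(2000):
    s = random.choice(STATES)
    # Epsilon-greedy action selection: mostly exploit, sometimes explore.
    if random.random() < epsilon:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda act: q[(s, act)])
    # One-step (bandit-style) value update: Q <- Q + alpha * (r - Q).
    q[(s, a)] += alpha * (reward(s, a) - q[(s, a)])

policy = {s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in STATES}
```

After training, the greedy policy prefers spoken output in quiet settings and on-screen output in noisy ones; a full system would of course learn from real interactions rather than a hand-written reward.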
TOPIC DIMENSIONS
System Types
- Interactive robots
- Embodied virtual characters
- Avatars
- Multimodal systems
Machine Learning Paradigms
- Reinforcement learning
- Active learning
- Supervised learning
- Unsupervised learning
- Any other learning paradigm
Functions to Which Machine Learning Is Applied
- Multimodal recognition and understanding in dialog with users
- Multimodal generation to present information through several channels
- Alignment of gestures with verbal output during interaction
- Adaptation of system skills through interaction with human users
- Any other functions, especially combining two or all of speech, touch, gestures, and vision
SPECIAL ISSUE ASSOCIATE EDITORS
- Heriberto Cuayahuitl, Heriot-Watt University, UK (contact: h.cuayahuitl[at]gmail[dot]com)
- Lutz Frommberger, University of Bremen, Germany
- Nina Dethlefs, Heriot-Watt University, UK
- Antoine Raux, Honda Research Institute, USA
- Matthew Marge, Carnegie Mellon University, USA
- Hendrik Zender, Nuance Communications, Germany
IMPORTANT DATES
- By February 28th, 2013: Submission of manuscripts
- By June 12th, 2013: Notification about decisions on initial submissions
- By September 10th, 2013: Submission of revised manuscripts
- By November 9th, 2013: Notification about decisions on revised manuscripts
- By December 9th, 2013: Submission of manuscripts with final minor changes
- Starting January 2014: Publication of the special issue on the TiiS website, in the ACM Digital Library, and subsequently as a printed issue
HOW TO SUBMIT
Please see the instructions for authors on the TiiS website (tiis.acm.org).
ABOUT ACM TiiS
TiiS (pronounced 'T double-eye S'), launched in 2010, is an ACM journal for research about intelligent systems that people interact with.
7-7 | Foundations and Trends in Signal Processing
Foundations and Trends in Signal Processing (www.nowpublishers.com/sig) has published the following issue:
Volume 5, Issue 3: Multidimensional Filter Banks and Multiscale Geometric Representations, by Minh N. Do (University of Illinois at Urbana-Champaign, USA) and Yue M. Lu (Harvard University, USA)
http://dx.doi.org/10.1561/2000000012
The link will take you to the article abstract. If your library has a subscription, you will be able to download the PDF of the article. To purchase the book version of this issue, go to the secure order form: https://www.nowpublishers.com/bookorder.aspx?doi=2000000012&product=SIG
You will pay the SIG member discount price of US$35/Euro 35 (plus shipping) by quoting the promotion code SIG20012. Euro prices are valid in Europe only.
7-8 | Sciences et Voix
7-9 | CfP Journal of Speech Sciences (JoSS)
Call for Papers: Journal of Speech Sciences (JoSS), volume 3, number 1 (regular issue)
Indexed in Linguistics Abstracts and the DOAJ (Directory of Open-Access Journals) ISSN: 2236-9740 http://www.journalofspeechsciences.org
This is the call for the fifth issue of the Journal of Speech Sciences (JoSS). JoSS covers experimental work on scientific aspects of speech, language and linguistic communication processes. Coverage also includes articles dealing with pathological topics, or articles of an interdisciplinary nature, provided that experimental and linguistic principles underlie the work reported. Experimental approaches are emphasized in order to stimulate the development of new methodologies, of new annotated corpora, and of new techniques aiming at fully testing current theories of speech production and perception, as well as phonetic and phonological theories and their interfaces.
The contributions should be sent through the journal website (www.journalofspeechsciences.org) by January 30th, 2013. The primary language of the Journal is English. Contributions in Portuguese, Spanish (Castilian), and French are also accepted, provided a one-page (500-600 words) abstract in English is given. The goal of this policy is to ensure a wide dissemination of quality research written in these three Romance languages. The contributions will be reviewed by at least two independent reviewers, though the final decision as regards publication will be taken by the editors. For preparing the manuscript, please follow the instructions at the JoSS webpage. If accepted, the authors must use the template given in the website for preparing the paper for publication.
Important Dates
- Submission deadline: January 30th, 2013*
- Notification of acceptance: May 2013
- Final manuscript due: June 2013
- Publication date: July 2013
* Papers arriving after that date will follow the schedule for the next issue.
Fourth issue titles and authors (to appear in December 2012)
Regular papers
- Formal intonation analysis: applying INTSINT to Portuguese [In Portuguese], by Letícia Celeste and César Reis from the Federal University of Minas Gerais, Brazil.
- Changes in TV news speaking style reading after journalistic broadcasting training [In Portuguese], by Ana C. Constantini from the State University of Campinas, Brazil.
- Analysis of the production of yes/no questions in Brazilian learners of Spanish as a foreign language [In Portuguese], by Eva C. O. Dias and Mariane A. Alves from the Federal University of Santa Catarina, Brazil.
- Experimental approach of the prosodic component in the linguistic input of garden-path sentences in Brazilian Portuguese [In Portuguese], by Aline A. Fonseca from the Federal University of Minas Gerais, Brazil.
- Prosodic correlation between the focus particle ozik ‘only’ and focus/GIVENness in Korean, by Yong-cheol Lee from the University of Pennsylvania, United States.
Thematic papers
- Extending Automatic Transcripts in a Unified Data Representation towards a Prosodic-based Metadata Annotation and Evaluation, by Fernando Batista, Helena Moniz, Isabel Trancoso, Nuno Mamede and Ana Isabel Mata from INESC-Lisboa.
- Spontaneous Emotional Speech in European Portuguese using the Feeltrace system – first approaches [In Portuguese], by Ana Nunes and António Teixeira from the University of Aveiro, Portugal.
- Open-Source Boundary-Annotated Qur’an Corpus for Arabic and Phrase Breaks Prediction in Classical and Modern Standard Arabic Text, by Majdi Shaker Sawalha, Claire Brierley and Eric Atwell from the universities of Jordan (first author) and Leeds.
Acceptance rate for this issue: 73%. Mean acceptance rate (four issues): 52%.
About the JoSS
The Journal of Speech Sciences (JoSS) is an open access journal which follows the principles of the Directory of Open Access Journals (DOAJ), meaning that its readers can freely read, download, copy, distribute, print, search, or link to the full texts of any article electronically published in the journal. It is accessible at <http://www.journalofspeechsciences.org>. The JoSS covers experimental work on scientific aspects of speech, language and linguistic communication processes. The JoSS is supported by the initiative of the Luso-Brazilian Association of Speech Sciences (LBASS), <http://www.lbass.org>. Founded on 16 February 2007, the LBASS aims at promoting, stimulating and disseminating research and teaching in Speech Sciences in Brazil and Portugal, as well as establishing a channel between sister associations abroad.
Editors Plinio A. Barbosa (Speech Prosody Studies Group/State University of Campinas, Brazil) Sandra Madureira (LIACC/Catholic University of São Paulo, Brazil)
E-mail: {pabarbosa, smadureira}@journalofspeechsciences.org
7-10 | CfP EURASIP Journal Special Issue on Informed Acoustic Source Separation CALL FOR PAPERS
7-11 | EURASIP Journal on Advances in Signal Processing: Special Issue on Informed Acoustic Source Separation. Call for Papers