ISCA - International Speech
Communication Association



ISCApad #175

Thursday, January 10, 2013 by Chris Wellekens

7 Journals
7-1ACM Transactions on Speech and Language Processing/Special Issue on Multiword Expressions: from Theory to Practice and Use

ACM Transactions on Speech and Language Processing

  Special Issue on Multiword Expressions:

           from Theory to Practice and Use

                 multiword.sf.net/tslp2011si

Deadline for Submissions: May 15, 2012
--------------------------------------------------------------------------

Call for Papers

Multiword expressions (MWEs) range over linguistic constructions like
idioms (a frog in the throat, kill some time), fixed phrases (per se,
by and large, rock'n roll), noun compounds (traffic light, cable car),
compound verbs (draw a conclusion, go by [a name]), etc. While easily
mastered by native speakers, their interpretation poses a major challenge
for computational systems, due to their flexible and heterogeneous nature.
Surprisingly enough, MWEs are not nearly as frequent in NLP resources
(dictionaries, grammars) as they are in real-world text, where they have
been reported to account for half of the entries in the lexicon of a speaker
and over 70% of the terms in a domain. Thus, MWEs are a key issue and
a current weakness for tasks like natural language parsing and generation,
as well as real-life applications such as machine translation.

In spite of several proposals for MWE representation ranging along the
continuum from words-with-spaces to compositional approaches connecting
lexicon and grammar, to date, it remains unclear how MWEs should be
represented in electronic dictionaries, thesauri and grammars. New
methodologies that take into account the type of MWE and its properties
are needed for efficiently handling manually and/or automatically acquired
expressions in NLP systems. Moreover, we also need strategies to represent
deep attributes and semantic properties for these multiword entries. While
there is no unique definition or classification of MWEs, most researchers
agree on some major classes such as named entities, collocations, multiword
terminology and verbal expressions. These, though, are very heterogeneous
in terms of syntactic and semantic properties, and should thus be treated
differently by applications. Type-dependent analyses could shed some light
on the best methodologies to integrate MWE knowledge in our analysis and
generation systems.

Evaluation is also a crucial aspect for MWE research. Various evaluation
techniques have been proposed, from manual inspection of top-n candidates
to classic precision/recall measures. The use of tools and datasets freely
available on the MWE community website (multiword.sf.net/PHITE.php?sitesig=FILES)
is encouraged when evaluating MWE treatment. However, application-oriented
techniques are needed to give a clear indication of whether the acquired MWEs
are really useful. Research on the impact of MWE handling in applications such
as parsing, generation, information extraction, machine translation, summarization
can help to answer these questions.
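As a minimal illustration of the classic measures mentioned above, the following sketch computes precision, recall, and precision-at-n for a list of extracted MWE candidates against a gold-standard list. The function names and toy data are hypothetical, introduced only for this example; they are not part of the call.

```python
def precision_recall(candidates, gold):
    """Classic precision/recall for extracted MWE candidates
    against a gold-standard list (both sequences of strings)."""
    cand_set, gold_set = set(candidates), set(gold)
    tp = len(cand_set & gold_set)  # true positives
    precision = tp / len(cand_set) if cand_set else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    return precision, recall

def precision_at_n(ranked_candidates, gold, n):
    """Precision over the top-n ranked candidates, mirroring
    manual inspection of the best-scored extractions."""
    gold_set = set(gold)
    top = ranked_candidates[:n]
    return sum(1 for c in top if c in gold_set) / n

# Hypothetical toy data: candidates ranked by an association score,
# and a small gold-standard lexicon.
ranked = ["traffic light", "kill time", "red car", "by and large"]
gold = ["traffic light", "by and large", "cable car"]

p, r = precision_recall(ranked, gold)
print(f"precision={p:.2f} recall={r:.2f}")
print(f"P@2={precision_at_n(ranked, gold, 2):.2f}")
```

Application-oriented evaluation, by contrast, would measure the downstream effect of the acquired MWEs (e.g., on translation quality) rather than list overlap alone.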

We call for papers that present research on theoretical and practical aspects
of the computational treatment of MWEs, specifically focusing on MWEs in
applications such as machine translation, information retrieval and question
answering. We also strongly encourage submissions on processing MWEs in
the language of social media and micro-blogs. The focus of the special issue,
thus, includes, but is not limited to the following topics:

* MWE treatment in applications such as the ones mentioned above;
* Lexical representation of MWEs in dictionaries and grammars;
* Corpus-based identification and extraction of MWEs;
* Application-oriented evaluation of MWE treatment;
* Type-dependent analysis of MWEs;
* Multilingual applications (e.g. machine translation, bilingual dictionaries);
* Parsing and generation of MWEs, especially, processing of MWEs in the
 language of social media and micro-blogs;
* MWEs and user interaction;
* MWEs in linguistic theories like HPSG, LFG and minimalism and their
 contribution to applications;
* Relevance of research on first and second language acquisition of MWEs for
 applications;
* Crosslinguistic studies on MWEs.

Submission Procedure

Authors should follow the ACM TSLP manuscript preparation guidelines
described on the journal web site http://tslp.acm.org and submit an
electronic copy of their complete manuscript through the journal manuscript
submission site http://mc.manuscriptcentral.com/acm/tslp. Authors are required
to specify that their submission is intended for this special issue by including
on the first page of the manuscript and in the field 'Author's Cover Letter' the
note 'Submitted for the special issue on Multiword Expressions'.

Schedule

Submission deadline: May 15, 2012
Notification of acceptance: September 15, 2012
Final manuscript due: November 30, 2012

Program Committee

*      Iñaki Alegria, University of the Basque Country (Spain)
*      Dimitra Anastasiou, University of Bremen (Germany)
*      Eleftherios Avramidis, DFKI GmbH (Germany)
*      Timothy Baldwin, University of Melbourne (Australia)
*      Francis Bond, Nanyang Technological University  (Singapore)
*      Aoife Cahill, ETS (USA)
*      Helena Caseli, Federal University of Sao Carlos (Brazil)
*      Yu Tracy Chen, DFKI GmbH (Germany)
*      Paul Cook, University of Melbourne (Australia)
*      Ann Copestake, University of Cambridge (UK)
*      Béatrice Daille, Nantes University (France)
*      Gaël Dias, University of Caen Basse-Normandie (France)
*      Stefan Evert, University of Darmstadt (Germany)
*      Roxana Girju, University of Illinois at Urbana-Champaign (USA)
*      Chikara Hashimoto, National Institute of Information and Communications Technology (Japan)
*      Kyo Kageura, University of Tokyo (Japan)
*      Martin Kay, Stanford University and Saarland University (USA & Germany)
*      Su Nam Kim, University of Melbourne (Australia)
*      Dietrich Klakow, Saarland University (Germany)
*      Philipp Koehn, University of Edinburgh (UK)
*      Ioannis Korkontzelos, University of Manchester (UK)
*      Brigitte Krenn, Austrian Research Institute for Artificial Intelligence (Austria)
*      Evita Linardaki, Hellenic Open University (Greece)
*      Takuya Matsuzaki, Tsujii Lab, University of Tokyo (Japan)
*      Yusuke Miyao, National Institute of Informatics (NII) (Japan)
*      Preslav Nakov , Qatar Foundation (Qatar)
*      Gertjan van Noord, University of Groningen (The Netherlands)
*      Diarmuid Ó Séaghdha, University of Cambridge (UK)
*      Jan Odijk, University of Utrecht (The Netherlands)
*      Pavel Pecina, Charles University (Czech Republic)
*      Scott Piao, Lancaster University (UK)
*      Thierry Poibeau, CNRS and École Normale Supérieure (France)
*      Maja Popovic,  DFKI GmbH  (Germany)
*      Ivan Sag, Stanford University (USA)
*      Agata Savary, Université François Rabelais Tours (France)
*      Violeta Seretan, University of Geneva (Switzerland)
*      Ekaterina Shutova, University of Cambridge (UK)
*      Joaquim Ferreira da Silva, New University of Lisbon (Portugal)
*      Lucia Specia, University of Wolverhampton (UK)
*      Sara Stymne, Linköping University (Sweden)
*      Stan Szpakowicz, University of Ottawa (Canada)
*      Beata Trawinski, University of Vienna (Austria)
*      Kiyoko Uchiyama, National Institute of Informatics (Japan)
*      Ruben Urizar, University of the Basque Country (Spain)
*      Tony Veale, University College Dublin (Ireland)
*      David Vilar,  DFKI GmbH  (Germany)
*      Begoña Villada Moirón, RightNow  (The Netherlands)
*      Tom Wasow, Stanford University (USA)
*      Shuly Wintner,  University of Haifa (Israel)
*      Yi Zhang, DFKI GmbH and Saarland University (Germany)


Guest Editors

*      Valia Kordoni, DFKI GmbH and Saarland University (Germany)
*      Carlos Ramisch, University of Grenoble (France) and Federal University of Rio Grande do Sul (Brazil)
*      Aline Villavicencio, Federal University of Rio Grande do Sul (Brazil) and Massachusetts Institute of Technology (USA)


Contact

For any inquiries regarding the special issue, please send an email
to mweguesteditor@gmail.com


7-2CSL on Information Extraction and Retrieval from Spoken Documents

Call for Papers
Special Issue of
Computer Speech and Language
on
Information Extraction and Retrieval from Spoken Documents

Advances in computing power, together with increased storage space and connection bandwidths, have made extremely large amounts of multimedia data available. In addition to text- and metadata-based search, content-based retrieval is emerging as a means to provide easy access to multimedia, especially for data containing speech. Furthermore, as research in speech processing has matured, the research focus has shifted from speech recognition to its applications, including retrieval of spoken content.

Over the last decade, significant advances have resulted from a combination of advances in automatic speech recognition (ASR), and information retrieval (IR). In addition, innovative methods for tighter coupling of the component technologies together with novel work in query-by-example have been crucial for robustness to errors in speech retrieval. Moving towards open vocabulary search is necessitated by the increased presence of many heterogeneous sources. This requires detection and recovery of out-of-vocabulary terms as a first step in the discovery of relations to known terms and topics in information extraction.

The main objective of this Special Issue is to bring together current advances in the field and provide a glimpse of the future of the field. Topics of interest include, but are not limited to:
        - Novel techniques for coupling speech recognition and information retrieval
        - Multilingual/cross-lingual
        - Query by example techniques
        - Relevance feedback
        - Large heterogeneous archives
        - Multimedia archives
        - Voice queries
        - Audio indexing
        - Spoken Term Discovery
        - Spoken Term Detection
        - Open vocabulary search
        - OOV detection and recovery
        - Phonetic approaches
        - Metadata
        - Information extraction

Authors should follow the Elsevier Computer Speech and Language manuscript format described at the journal site. Prospective authors should submit an electronic copy of their complete manuscript through the journal Manuscript Tracking System at http://ees.elsevier.com/csl/ according to the following timetable:
Extended Submission Deadline:          August 1, 2012
First Round of Reviews:                November 15, 2012
Final Version of Manuscripts:        January 15, 2013
Target Publication Date:                May 1, 2013
During submission, authors must select 'Special Issue: Info Extraction & Retrieval'.

Guest Editors
Murat Saraçlar, Boğaziçi University, Turkey, murat.saraclar@boun.edu.tr
Bhuvana Ramabhadran, IBM, USA, bhuvana@us.ibm.com
Ciprian Chelba, Google, USA, ciprianchelba@google.com


7-3IEEE Journal of Selected Topics in Signal Processing (JSTSP) Special Issue on 'Advances in Spoken Dialogue Systems and Mobile Interfaces: Theory and Applications'
Call For Papers : IEEE Journal of Selected Topics in Signal Processing (JSTSP) Special Issue 
on 'Advances in Spoken Dialogue Systems and Mobile Interfaces: Theory and Applications' 
http://www.signalprocessingsociety.org/uploads/special_issues_deadlines/spoken_dialogue.pdf 
Recently, there have been an array of advances in both the theory and practice of spoken dialog 
systems, especially on mobile devices. On theoretical advances, foundational models and algorithms
 (e.g., Partially Observable Markov Decision Process (POMDP), reinforcement learning, 
Gaussian process models, etc.) have advanced the state-of-the-art on a number of fronts.
 For example, techniques have been presented which improve system robustness, enabling 
systems to achieve high performance even when faced with recognition inaccuracies. 
Other methods have been proposed for learning from interactions, to improve performance 
automatically. Still other methods have shown how systems can make use of speech input 
and output incrementally and in real time, raising levels of naturalness and responsiveness. 
On applications, interesting new results on spoken dialog systems are becoming available 
from both research settings and deployments 'in the wild', for example on deployed services 
such as Apple's Siri, Google's voice actions, Bing voice search, Nuance's Dragon Go!, 
and Vlingo. Speech input is now commonplace on smart phones, and is well-established 
as a convenient alternative to keyboard input, for tasks such as control of phone functionalities, 
dictation of messages, and web search. Recently, intelligent personal assistants have 
begun to appear, via both applications and features of the operating system. Many of these 
new assistants are much more than a straightforward keyboard replacement - they are 
first-class multi-modal dialogue systems that support sustained interactions, using spoken
 language, over multiple turns. New system architectures and engineering algorithms have 
also been investigated in research labs, which have led to more forward-looking spoken dialog 
systems. This special issue seeks to draw together advances in spoken dialogue systems from 
both research and industry. Submissions covering any aspect of spoken dialog systems are 
welcome. Specific (but not exhaustive) topics of interest include all of the following in relation 
to spoken dialogue systems and mobile interfaces:
- theoretical foundations of spoken dialogue system design, learning, evaluation, and simulation
- dialog tracking, including explicit representations of uncertainty in dialog systems, such as Bayesian networks; domain representation and detection
- dialog control, including reinforcement learning, (PO)MDPs, decision theory, utility functions, and personalization for dialog systems
- foundational technologies for dialog systems, including acoustic models, language models, language understanding, text-to-speech, and language generation; incremental approaches to input and output; usage of affect
- applications, settings and practical evaluations, such as voice search, text message dictation, multi-modal interfaces, and usage while driving

Papers must be submitted online. The bulk of the issue will be original research papers, and priority will be given to papers with high novelty and originality. Papers providing a tutorial/overview/survey are also welcome, although space is available for only a limited number of papers of this type. Tutorial/overview/survey papers will be evaluated on the basis of overall impact.
- Manuscript submission: http://mc.manuscriptcentral.com/jstsp-ieee
- Information for authors: http://www.signalprocessingsociety.org/publications/periodicals/jstsp/jstsp-author-info/

Dates
- Extended submission date of papers: July 22, 2012
- First review: September 15, 2012
- Revised submission: October 10, 2012
- Second review: November 10, 2012
- Submission of final material: November 20, 2012
Guest editors:
- Kai Yu, co-lead guest editor (Shanghai Jiao Tong University)
- Jason Williams, co-lead guest editor (Microsoft Research) 
- Brahim Chaib-draa (Laval University) 
- Oliver Lemon (Heriot-Watt University) 
- Roberto Pieraccini (ICSI)
- Olivier Pietquin (SUPELEC)
- Pascal Poupart (University of Waterloo) 
- Steve Young (University of Cambridge) 

7-4Journal of Speech Sciences (JoSS)
Call for Papers

Journal of Speech Sciences (JoSS)

ISSN: 2236-9740

http://journalofspeechsciences.org

Indexed in Linguistics Abstracts and the Directory of Open Access Journals

Volume 2, number 2 (special issue)

This is the CFP for the fourth issue of the Journal of Speech Sciences (JoSS). The JoSS covers
experimental aspects that deal with scientific aspects of speech, language and linguistic 
communication processes. Coverage also includes articles dealing with pathological topics, or 
articles of an interdisciplinary nature, provided that experimental and linguistic principles 
underlie the work reported. Experimental approaches are emphasized in order to stimulate 
the development of new methodologies, of new annotated corpora, of new techniques aiming 
at fully testing current theories of speech production, perception, as well as phonetic and 
phonological theories and their interfaces. For this issue, the journal team will receive original, previously unpublished contributions on
Corpora building for experimental prosody research or related themes. The purpose of this
 Special Issue is to present new architectures, new challenges, new databases related to 
experimental prosody to contribute to the exchange of data, to the elaboration of parallel 
corpora, to the proposition of common environments to do cross-linguistic research. The contributions should be sent through the journal website (www.journalofspeechsciences.org) 
until August 15th, 2012. The primary language of the Journal is English. Contributions in 
Portuguese, in Spanish (Castilian) and in French are also accepted, provided a 1-page
(between 500-600 words) abstract in English be given. The goal of this policy is to ensure a 
wide dissemination of quality research written in these three Romance languages. 
The contributions will be reviewed by at least two independent reviewers, though the
 final decision as to publication is taken by the two editors. For preparing the manuscript, 
please follow the instructions at the JoSS webpage. If accepted, the authors must use the 
template given in the website for preparing the paper for publication. 
Important Dates

Submission deadline: August 15th, 2012*
Notification of acceptance: September, 2012
Final manuscript due: November, 2012
Publication date: December, 2012

* If received after that date, the paper will follow the schedule for the next issue.

About the JoSS

The Journal of Speech Sciences (JoSS) is an open access journal which follows the principles
of the Directory of Open Access Journals (DOAJ), meaning that its readers can freely read, 
download, copy, distribute, print, search, or link to the full texts of any article electronically 
published in the journal. It is accessible at <http://www.journalofspeechsciences.org>. The JoSS covers experimental aspects that deal with scientific aspects of speech, language
and linguistic communication processes. The JoSS is supported by the initiative of the 
Luso-Brazilian Association of Speech Sciences (LBASS), <http://www.lbass.org>.
Founded on 16 February 2007, the LBASS aims at promoting, stimulating and
disseminating research and teaching in Speech Sciences in Brazil and Portugal, as well as 
establishing a channel between sister associations abroad.

Editors

Plinio A. Barbosa (Speech Prosody Studies Group/State University of Campinas, Brazil)
Sandra Madureira (LIACC/Catholic University of São Paulo, Brazil)
E-mail: {pabarbosa, smadureira}@journalofspeechsciences.org

7-5TAL journal: Noise in the Signal: Error Handling in Natural Language Processing

SECOND CALL FOR CONTRIBUTIONS

NOISE IN THE SIGNAL: ERROR HANDLING IN NATURAL LANGUAGE PROCESSING

A SPECIAL ISSUE OF THE JOURNAL 'TRAITEMENT AUTOMATIQUE DES LANGUES' (TAL)

The language that natural language processing applications must handle bears
little resemblance to the perfectly grammatical examples found in grammar
books. In everyday use, the utterances to be processed come in imperfect
form: typed texts contain typing errors as well as spelling and grammar
mistakes; spoken utterances often correspond to incomplete sentences and
contain disfluencies; the output of OCR systems contains numerous character
confusions, and that of speech recognition systems contains inaccurate
transcriptions of what was actually said.

Noise is therefore inherent in language data, and ignoring this reality can
only harm the quality of our processing systems. For some applications, the
challenge is to develop mechanisms that are robust to these errors. For
example, a dialogue system may use confidence measures on speech recognition
hypotheses to decide whether to ask the user to repeat. Other applications
require automatic error correction techniques; an OCR system, for instance,
may post-process texts with contextual correction models to validate the
spelling of words.

This special issue aims to bring together contributions on error handling in
language processing. Many subfields of NLP need to take noise and errors into
account in the linguistic signals they consider, but researchers from these
various communities rarely have the opportunity to compare their methods and
results. Our ambition is to put work from these different areas into
perspective so as to encourage cross-fertilization of ideas.

For this special issue, we therefore consider relevant any work concerned
with the automatic processing of noisy data. The most developed subfields are
probably spelling correction and, to a lesser extent, grammar correction;
yet neither of these problems is completely solved, and the situation is even
less satisfactory when it comes to deeper errors, affecting for instance
style or discourse organization. Robust processing, which aims to extract as
much useful information as possible from potentially erroneous input, whether
written or spoken, will also be welcome; more generally, studies of
error-repair strategies, for example in dialogue systems and similar
settings, are also relevant to this issue.

We therefore invite contributions on any aspect of error handling in NLP, and
in particular (non-exclusive list):
* automatic spelling and grammar correction
* semantic and logical errors
* correction of errors in style or discourse organization
* correction of 'artificial' errors (OCR, speech recognition, etc.)
* automatic correction of search-engine queries
* acquisition, annotation and analysis of errors in real texts
* error corpora
* error handling in controlled languages
* errors in language learning
* performance errors
* normalization of non-standard writing
* robust NLP
* processing of disfluent speech
* error handling in speech recognition
* learning from noisy data
* measures of error severity
* confidence measures
* error mining and analysis
* self-assessment and error diagnosis

GUEST EDITORS
- Robert Dale (Macquarie University, Australia)
- François Yvon (LIMSI/CNRS and Univ. Paris Sud, France)

SCIENTIFIC COMMITTEE
(TBA)

IMPORTANT DATES
- submission deadline: October 15, 2012
- first notification to authors: December 15, 2012
- deadline for revised versions: February 1, 2013
- final decisions: April 15, 2013
- final versions: June 15, 2013
- publication: summer 2013

THE JOURNAL

For 40 years, TAL (Traitement Automatique des Langues) has been an
international journal published by ATALA (Association pour le Traitement
Automatique des Langues) with the support of the CNRS. For some years now it
has been an online journal, with paper versions available on demand. This in
no way affects the reviewing and selection process.

PRACTICAL INFORMATION

Articles (about 25 pages, PDF format) should be submitted via the platform
http://tal-53-3.sciencesconf.org/ Style sheets are available on the journal
website (http://www.atala.org/-Revue-TAL). The journal only publishes
original contributions, in French or in English.


7-6CfP ACM TiiS special issue on Machine Learning for Multiple Modalities in Interactive Systems and Robots
 
 
Call for Papers
Special Issue of the ACM Transactions on Interactive Intelligent Systems on MACHINE LEARNING FOR MULTIPLE MODALITIES IN INTERACTIVE SYSTEMS AND ROBOTS
Main submission deadline: February 28th, 2013
http://tiis.acm.org/special-issues.html
AIMS AND SCOPE
This special issue will highlight research that applies machine learning to robots and other systems that interact with users through more than one modality, such as speech, touch, gestures, and vision.
Interactive systems such as multimodal interfaces, robots, and virtual agents often use some combination of these modalities to communicate meaningfully. For example, a robot may coordinate its speech with its actions, taking into account visual feedback during their execution. Alternatively, a multimodal system can adapt its input and output modalities to the user's goals, workload, and surroundings. Machine learning provides interactive systems with opportunities to improve performance not only of individual components but also of the system as a whole. However, machine learning methods that encompass multiple modalities of an interactive system are still relatively hard to find. This special issue aims to help fill this gap.
The dimensions listed below indicate the range of work that is relevant to the special issue. Each article will normally represent one or more points on each of these dimensions. In case of doubt about the relevance of your topic, please contact the special issue associate editors.
TOPIC DIMENSIONS
System Types
- Interactive robots
- Embodied virtual characters
- Avatars
- Multimodal systems
Machine Learning Paradigms
- Reinforcement learning
- Active learning
- Supervised learning
- Unsupervised learning
- Any other learning paradigm
Functions to Which Machine Learning Is Applied
- Multimodal recognition and understanding in dialog with users
- Multimodal generation to present information through several channels
- Alignment of gestures with verbal output during interaction
- Adaptation of system skills through interaction with human users
- Any other functions, especially combining two or all of speech, touch, gestures, and vision
SPECIAL ISSUE ASSOCIATE EDITORS
- Heriberto Cuayahuitl, Heriot-Watt University, UK (contact: h.cuayahuitl[at]gmail[dot]com)
- Lutz Frommberger, University of Bremen, Germany
- Nina Dethlefs, Heriot-Watt University, UK
- Antoine Raux, Honda Research Institute, USA
- Matthew Marge, Carnegie Mellon University, USA
- Hendrik Zender, Nuance Communications, Germany
IMPORTANT DATES
- By February 28th, 2013: Submission of manuscripts
- By June 12th, 2013: Notification about decisions on initial submissions
- By September 10th, 2013: Submission of revised manuscripts
- By November 9th, 2013: Notification about decisions on revised manuscripts
- By December 9th, 2013: Submission of manuscripts with final minor changes
- Starting January, 2014: Publication of the special issue on the TiiS website, in the ACM Digital Library, and subsequently as a printed issue
HOW TO SUBMIT
Please see the instructions for authors on the TiiS website (tiis.acm.org).
ABOUT ACM TiiS
TiiS (pronounced 'T double-eye S'), launched in 2010, is an ACM journal for research about intelligent systems that people interact with.

7-7Foundations and Trends in Signal Processing
Foundations and Trends in Signal Processing (www.nowpublishers.com/sig) has published the following issue:   

Volume 5, Issue 3                                                                                                                                                        
Multidimensional Filter Banks and Multiscale Geometric Representations                          
By Minh N. Do (University of Illinois at Urbana-Champaign, USA) and Yue M. Lu (Harvard University, USA)                       
http://dx.doi.org/10.1561/2000000012                                                                                                        

The link will take you to the article abstract. If your library has a subscription, you will be able to download the PDF of the article.

To purchase the book version of this issue, go to the secure Order Form:  
https://www.nowpublishers.com/bookorder.aspx?doi=2000000012&product=SIG                                                                               
You will pay the SIG member discount price of US$35/Euro 35 (plus shipping) by quoting the Promotion Code: SIG20012.   
Euro prices are valid in Europe only.                                      

7-8Sciences et Voix

Hello,

    I would like to announce the opening of a research blog, 'Sciences et Voix', run by the scientific community working on the voice in France: http://voix.hypotheses.org
    There you will find up-to-date information on conferences, workshops, theses, speech-therapy dissertations and internships, article summaries, and any other news related to the voice sciences.

    You can subscribe to this blog online by going to the Cléo Open Edition page and selecting the 'Hypotheses.org' platform and the 'Sciences et Voix' blog from the list: http://search.openedition.org/indexalert.php?a=addsubscription
    This free subscription will keep you informed by email of the blog's publications.

    I also take this opportunity to announce a series of lectures on the voice given at the Collège de France by Christine Petit under the chair 'Génétique et Physiologie Cellulaire'. All information is available on the Collège de France website or on the blog: http://voix.hypotheses.org/87
    The lectures will take place on Thursday mornings (December 13, January 10, January 17 and January 24).

    Let me also mention the monthly meetings of the Atelier Sciences et Voix at GIPSA-lab, Grenoble.
    The next meeting takes place this Thursday, November 29, and will be led by Maëva Garnier on the topic of vocal effort.

    Best regards,
    Nathalie Henrich


7-9CfP Journal of Speech Sciences (JoSS)

Call for Papers

Journal of Speech Sciences (JoSS)

volume 3, number 1 (regular issue)

 

Indexed in Linguistics Abstracts and the DOAJ (Directory of Open-Access Journals)

ISSN: 2236-9740

http://www.journalofspeechsciences.org

 

This is the Call for the fifth issue of the Journal of Speech Sciences (JoSS). JoSS covers experimental aspects dealing with scientific aspects of speech, language and linguistic communication processes. Coverage also includes articles dealing with pathological topics, or articles of an interdisciplinary nature, provided that experimental and linguistic principles underlie the work reported. Experimental approaches are emphasized in order to stimulate the development of new methodologies, of new annotated corpora, of new techniques aiming at fully testing current theories of speech production, perception, as well as phonetic and phonological theories and their interfaces.

The contributions should be sent through the journal website (www.journalofspeechsciences.org) until January 30th, 2013. The primary language of the Journal is English. Contributions in Portuguese, in Spanish (Castilian) and in French are also accepted, provided a 1-page (between 500-600 words) abstract in English be given. The goal of this policy is to ensure a wide dissemination of quality research written in these three Romance languages. The contributions will be reviewed by at least two independent reviewers, though the final decision as regards publication will be taken by the editors.

For preparing the manuscript, please follow the instructions at the JoSS webpage. If accepted, the authors must use the template given in the website for preparing the paper for publication.

Important Dates

Submission deadline: January 30th, 2013*
Notification of acceptance: May 2013
Final manuscript due: June 2013
Publication date: July 2013

 

* Papers arriving after that date will follow the schedule for the next issue.

 

Fourth issue titles and authors (to appear in December 2012)

 

Regular papers

 

Formal intonation analysis: applying INTSINT to Portuguese [In Portuguese] by Letícia Celeste and César Reis from the Federal University of Minas Gerais, Brazil.

Changes in TV news reading speaking style after journalistic broadcasting training [In Portuguese] by Ana C. Constantini from the State University of Campinas, Brazil.

Analysis of the production of yes/no questions in Brazilian learners of Spanish as a foreign language [In Portuguese] by Eva C. O. Dias and Mariane A. Alves from the Federal University of Santa Catarina, Brazil.

Experimental approach to the prosodic component in the linguistic input of garden-path sentences in Brazilian Portuguese [In Portuguese] by Aline A. Fonseca from the Federal University of Minas Gerais, Brazil.

Prosodic correlation between the focus particle ozik ‘only’ and focus/GIVENness in Korean by Yong-cheol Lee from the University of Pennsylvania, United States.

 

Thematic papers

 

Extending Automatic Transcripts in a Unified Data Representation towards a Prosodic-based Metadata Annotation and Evaluation by Fernando Batista, Helena Moniz, Isabel Trancoso, Nuno Mamede and Ana Isabel Mata from INESC-Lisboa.

Spontaneous Emotional Speech in European Portuguese using the Feeltrace system – first approaches [In Portuguese] by Ana Nunes and António Teixeira from the University of Aveiro, Portugal.

Open-Source Boundary-Annotated Qur’an Corpus for Arabic and Phrase Breaks Prediction in Classical and Modern Standard Arabic Text by Majdi Shaker Sawalha, Claire Brierley and Eric Atwell from the universities of Jordan (first author) and Leeds.

 

Acceptance rate for this issue: 73%. Mean acceptance rate (over four issues): 52%.

 

About the JoSS

The Journal of Speech Sciences (JoSS) is an open access journal which follows the principles of the Directory of Open Access Journals (DOAJ), meaning that its readers can freely read, download, copy, distribute, print, search, or link to the full texts of any article electronically published in the journal. It is accessible at <http://www.journalofspeechsciences.org>.

The JoSS covers experimental research on speech, language and linguistic communication processes. The JoSS is supported by the initiative of the Luso-Brazilian Association of Speech Sciences (LBASS), <http://www.lbass.org>. Founded on 16 February 2007, the LBASS aims to promote, stimulate and disseminate research and teaching in the Speech Sciences in Brazil and Portugal, as well as to establish channels with sister associations abroad.

 

Editors

Plinio A. Barbosa (Speech Prosody Studies Group/State University of Campinas, Brazil)

Sandra Madureira (LIACC/Catholic University of São Paulo, Brazil)

 

E-mail: {pabarbosa, smadureira}@journalofspeechsciences.org

 

Back  Top

7-10CfP EURASIP Journal Special Issue on Informed Acoustic Source Separation

CALL FOR PAPERS

EURASIP Journal on Advances in Signal Processing
*Special Issue on Informed Acoustic Source Separation*

The complete call for papers is accessible at:
http://asp.eurasipjournals.com/sites/10233/pdf/H9386_DF_CFP_EURASIP_JASP_A4_3.pdf

DEADLINE: PAPER SUBMISSION: 31st May 2013

Short Description

The topic of this special issue is informed acoustic source separation. While source separation has long been a field of interest in the signal processing community, recent work increasingly points out that separation can only be reliably achieved in real-world use cases when accurate prior information is successfully incorporated. Informed separation algorithms are characterized by the fact that case-specific prior knowledge is made available to the algorithm; in this respect, they contrast with blind methods, for which no specific prior information is available.
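To make the blind/informed contrast concrete, here is a minimal NumPy sketch (not from the call itself; all names and the toy data are illustrative). It contrasts plain NMF-based separation with an "informed" variant in which case-specific prior knowledge, here a score-like binary mask indicating which source is active in which time frame, constrains the activation matrix at every update.

```python
import numpy as np

def nmf(V, W_init, H_init, mask=None, n_iter=200, eps=1e-9):
    """Multiplicative-update NMF: V ~ W @ H.
    If `mask` is given (the informed case), activations outside the
    mask are forced to zero at every iteration."""
    W, H = W_init.copy(), H_init.copy()
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        if mask is not None:
            H *= mask            # inject case-specific prior knowledge
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

rng = np.random.default_rng(0)
# Two synthetic "sources", each active in disjoint time frames.
true_W = rng.random((8, 2))
true_H = np.zeros((2, 10))
true_H[0, :5] = rng.random(5)    # source 0 active in frames 0-4
true_H[1, 5:] = rng.random(5)    # source 1 active in frames 5-9
V = true_W @ true_H

mask = (true_H > 0).astype(float)        # the "score": who plays when
W0, H0 = rng.random((8, 2)), rng.random((2, 10))
W_b, H_b = nmf(V, W0, H0)                # blind separation
W_i, H_i = nmf(V, W0, H0, mask=mask)     # informed separation

# The informed activations respect the prior exactly.
assert np.allclose(H_i[0, 5:], 0) and np.allclose(H_i[1, :5], 0)
```

Score-informed NMF of this kind is only one instance of the topics listed below; the same masking idea extends to user-guided and language-informed settings, where the prior comes from an interactive user or a language model rather than a score.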

Following the success of the special session on the same topic at EUSIPCO 2012 in Bucharest, we would like to present recent methods, discuss the trends and perspectives of this domain, and draw the attention of the signal processing community to this important problem and its potential applications. We are interested in both methodological advances and applications. Topics of interest include (but are not limited to):

.    Sparse decomposition methods
.    Subspace learning methods for sparse decomposition
.    Non-negative matrix / tensor factorization
.    Robust principal component analysis
.    Probabilistic latent component analysis
.    Independent component analysis
.    Multidimensional component analysis
.    Multimodal source separation
.    Video-assisted source separation
.    Spatial audio object coding
.    Reverberant models for source separation
.    Score-informed source separation
.    Language-informed speech separation
.    User-guided source separation
.    Source separation informed by cover version
.    Informed source separation applied to speech, music or environmental signals
.    ...

Guest Editors
Taylan Cemgil, Bogazici University, Turkey,
Tuomas Virtanen, Tampere University of Technology, Finland,
Alexey Ozerov, Technicolor, France,
Derry Fitzgerald, Dublin Institute of Technology, Ireland.

Lead Guest Editor:
Gaël Richard, Institut Mines-Télécom, Télécom ParisTech, CNRS-LTCI, France.

Back  Top



