
ISCApad #178

Wednesday, April 10, 2013 by Chris Wellekens

7 Journals
7-1CfP ACM TiiS special issue on Machine Learning for Multiple Modalities in Interactive Systems and Robots
 
 
Call for Papers
Special Issue of the ACM Transactions on Interactive Intelligent Systems on MACHINE LEARNING FOR MULTIPLE MODALITIES IN INTERACTIVE SYSTEMS AND ROBOTS
Main submission deadline: February 28th, 2013
http://tiis.acm.org/special-issues.html
AIMS AND SCOPE
This special issue will highlight research that applies machine learning to robots and other systems that interact with users through more than one modality, such as speech, touch, gestures, and vision.
Interactive systems such as multimodal interfaces, robots, and virtual agents often use some combination of these modalities to communicate meaningfully. For example, a robot may coordinate its speech with its actions, taking into account visual feedback during their execution. Alternatively, a multimodal system can adapt its input and output modalities to the user's goals, workload, and surroundings. Machine learning provides interactive systems with opportunities to improve performance not only of individual components but also of the system as a whole. However, machine learning methods that encompass multiple modalities of an interactive system are still relatively hard to find. This special issue aims to help fill this gap.
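To give a concrete, deliberately simplified flavour of the learning problems in scope, the sketch below uses a tabular, bandit-style Q-learning update to let a toy agent choose an output modality according to a coarse user-workload state, with a simulated reward standing in for user feedback. It is not taken from any system in this call; the states, actions, and reward values are invented for illustration only.

```python
import random
from collections import defaultdict

# Toy sketch: an agent learns which output modality to use for a given
# coarse user-workload state. States, actions, and rewards are invented.
STATES = ["low_workload", "high_workload"]
ACTIONS = ["speech_only", "gesture_only", "speech_plus_gesture"]

def simulated_user_reward(state, action):
    """Stand-in for real user feedback: combined output helps when the user
    is not overloaded, terse gestures help when workload is high."""
    if state == "low_workload":
        return {"speech_only": 0.5, "gesture_only": 0.2, "speech_plus_gesture": 1.0}[action]
    return {"speech_only": 0.3, "gesture_only": 1.0, "speech_plus_gesture": 0.1}[action]

def train(episodes=5000, alpha=0.1, epsilon=0.2):
    q = defaultdict(float)                      # Q[(state, action)]
    for _ in range(episodes):
        state = random.choice(STATES)           # contextual-bandit style: one step per episode
        if random.random() < epsilon:           # epsilon-greedy exploration
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        reward = simulated_user_reward(state, action)
        q[(state, action)] += alpha * (reward - q[(state, action)])
    return q

if __name__ == "__main__":
    q = train()
    for s in STATES:
        print(s, "->", max(ACTIONS, key=lambda a: q[(s, a)]))
```

In a real interactive system the reward would come from task success or user ratings, and the state would summarise multimodal context rather than a single workload flag.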
The dimensions listed below indicate the range of work that is relevant to the special issue. Each article will normally represent one or more points on each of these dimensions. In case of doubt about the relevance of your topic, please contact the special issue associate editors.
TOPIC DIMENSIONS
System Types
- Interactive robots
- Embodied virtual characters
- Avatars
- Multimodal systems

Machine Learning Paradigms
- Reinforcement learning
- Active learning
- Supervised learning
- Unsupervised learning
- Any other learning paradigm

Functions to Which Machine Learning Is Applied
- Multimodal recognition and understanding in dialog with users
- Multimodal generation to present information through several channels
- Alignment of gestures with verbal output during interaction
- Adaptation of system skills through interaction with human users
- Any other functions, especially combining two or all of speech, touch, gestures, and vision
SPECIAL ISSUE ASSOCIATE EDITORS
- Heriberto Cuayahuitl, Heriot-Watt University, UK (contact: h.cuayahuitl[at]gmail[dot]com)
- Lutz Frommberger, University of Bremen, Germany
- Nina Dethlefs, Heriot-Watt University, UK
- Antoine Raux, Honda Research Institute, USA
- Matthew Marge, Carnegie Mellon University, USA
- Hendrik Zender, Nuance Communications, Germany
IMPORTANT DATES
- By February 28th, 2013: Submission of manuscripts
- By June 12th, 2013: Notification about decisions on initial submissions
- By September 10th, 2013: Submission of revised manuscripts
- By November 9th, 2013: Notification about decisions on revised manuscripts
- By December 9th, 2013: Submission of manuscripts with final minor changes
- Starting January 2014: Publication of the special issue on the TiiS website, in the ACM Digital Library, and subsequently as a printed issue
HOW TO SUBMIT
Please see the instructions for authors on the TiiS website (tiis.acm.org).
ABOUT ACM TiiS
TiiS (pronounced 'T double-eye S'), launched in 2010, is an ACM journal for research about intelligent systems that people interact with.

7-2CfP EURASIP Journal Special Issue on Informed Acoustic Source Separation

CALL FOR PAPERS

EURASIP Journal on Advances in Signal Processing
*Special Issue on Informed Acoustic Source Separation*

The complete call for papers is available at:
http://asp.eurasipjournals.com/sites/10233/pdf/H9386_DF_CFP_EURASIP_JASP_A4_3.pdf

DEADLINE: PAPER SUBMISSION: 31st May 2013

Short Description

The topic of this special issue is informed acoustic source separation. Source separation has long been a field of interest in the signal processing community, and recent work increasingly shows that, in real-world use cases, separation can only be achieved reliably when accurate prior information is incorporated. Informed separation algorithms are characterized by the fact that case-specific prior knowledge is made available to the algorithm; in this respect, they contrast with blind methods, for which no such prior information is available.
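As a minimal illustration of the informed/blind distinction (not drawn from the call itself), the sketch below separates a toy two-source magnitude spectrogram with multiplicative-update NMF: in the blind variant both the dictionary and the activations are learned, while in the informed variant the dictionary is fixed to known per-source spectral templates, one simple way of injecting prior knowledge. All signals and templates are synthetic and invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic magnitude spectrogram built from two known spectral templates
F, T = 64, 200
W_true = np.abs(rng.normal(size=(F, 2))) + 1e-3      # one column per source
H_true = np.abs(rng.normal(size=(2, T)))
V = W_true @ H_true + 1e-6                           # observed mixture spectrogram

def nmf(V, W_init, H_init, n_iter=200, update_W=True):
    """Multiplicative-update NMF (Euclidean cost). With update_W=False the
    dictionary stays fixed, i.e. the separation is 'informed' by known templates."""
    W, H = W_init.copy(), H_init.copy()
    eps = 1e-9
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        if update_W:
            W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

K = 2
# Blind: both factors are estimated from the mixture alone
Wb, Hb = nmf(V, np.abs(rng.normal(size=(F, K))), np.abs(rng.normal(size=(K, T))))
# Informed: dictionary fixed to the known source templates, only activations learned
Wi, Hi = nmf(V, W_true, np.abs(rng.normal(size=(K, T))), update_W=False)

# Wiener-style mask for source 0 from the informed model
S0 = (Wi[:, :1] @ Hi[:1, :]) / (Wi @ Hi + 1e-9) * V
print("source-0 estimate shape:", S0.shape)
print("informed reconstruction error:", np.linalg.norm(V - Wi @ Hi) / np.linalg.norm(V))
```

Fixing the dictionary to templates derived from a score, a cover version, or user guidance is one simple route by which case-specific prior knowledge enters the factorization; a blind method must estimate everything from the mixture alone.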

Following the success of the special session on this topic at EUSIPCO 2012 in Bucharest, we would like to present recent methods, discuss the trends and perspectives of the domain, and draw the attention of the signal processing community to this important problem and its potential applications. We are interested in both methodological advances and applications. Topics of interest include (but are not limited to):

.    Sparse decomposition methods
.    Subspace learning methods for sparse decomposition
.    Non-negative matrix / tensor factorization
.    Robust principal component analysis
.    Probabilistic latent component analysis
.    Independent component analysis
.    Multidimensional component analysis
.    Multimodal source separation
.    Video-assisted source separation
.    Spatial audio object coding
.    Reverberant models for source separation
.    Score-informed source separation
.    Language-informed speech separation
.    User-guided source separation
.    Source separation informed by cover version
.    Informed source separation applied to speech, music or environmental signals
.    ...

Guest Editors
Taylan Cemgil, Bogazici University, Turkey,
Tuomas Virtanen, Tampere University of Technology, Finland,
Alexey Ozerov, Technicolor, France,
Derry Fitzgerald, Dublin Institute of Technology, Ireland.

Lead Guest Editor:
Gaël Richard, Institut Mines-Télécom, Télécom ParisTech, CNRS-LTCI, France.


7-4CfP International Journal of Computational Linguistics and Chinese Language Processing: Special Issue on Processing Lexical Tones in Natural Speech

 

Call for Papers

International Journal of Computational Linguistics and Chinese Language Processing

Special Issue on Processing Lexical Tones in Natural Speech

 

This special issue aims to address questions about how lexical tones are processed by humans and machines in the context of natural, continuous speech. Lexical tones in tone languages have been widely investigated in linguistics, psycholinguistics, computational linguistics, and language acquisition, using a wide range of theoretical, empirical, and experimental approaches. Because the phonetic realization of lexical tones produced in connected speech can differ considerably from that of tones produced in isolation, interest is steadily growing in how lexical tones are produced, perceived, and processed in realistic speech data. This special issue brings together methodologies from different research disciplines to extend our understanding of lexical tones used in real speaking situations. We welcome submissions addressing the following issues.

 

  • Modeling lexical tones: Can lexical tones produced in natural speech be more accurately described and modeled by quantitative/gradient measures or by categorical systems? Is a hybrid approach possible? In what way can lexical tones be represented and analyzed using spoken corpora? (A brief contour-fitting sketch follows this list.)

  • Human language processing: What role do lexical tones play in the mental lexicon? How are lexical tones produced and perceived by native and non-native language users?

  • Language acquisition: How are lexical tones acquired by typically developing children, hearing-impaired children, and second language learners? Do their phonological development patterns differ from one another?

  • Speech technology: What kind of information about lexical tones can be integrated into ASR and speech synthesis systems to improve system performance?

  • Other research related to lexical tones in natural speech is also welcome in this special issue.
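To make the first question above more concrete, the sketch below illustrates one simple quantitative/gradient representation of a tone: fitting a low-order polynomial to a syllable's F0 contour after log-transformation and time normalization, so that a handful of coefficients describes the contour shape. The contour here is synthetic; in practice the F0 values would come from a pitch tracker and corpus time alignments, and this is only one of many possible parameterizations.

```python
import numpy as np

def tone_contour_features(f0_hz, order=3):
    """Quantitative/gradient tone representation: fit a low-order polynomial
    to the log-F0 contour of one syllable on a normalized time axis [0, 1].
    Unvoiced frames (f0 <= 0) are dropped before fitting."""
    f0_hz = np.asarray(f0_hz, dtype=float)
    voiced = f0_hz > 0
    t = np.linspace(0.0, 1.0, len(f0_hz))[voiced]
    logf0 = np.log(f0_hz[voiced])
    # A Legendre-style design would also work; plain polyfit keeps the sketch short.
    return np.polyfit(t, logf0, order)

# Synthetic example: a falling contour (roughly a Mandarin Tone-4-like shape)
frames = 30
t = np.linspace(0, 1, frames)
falling = 220 * np.exp(-0.5 * t)          # F0 sliding down from 220 Hz
print(tone_contour_features(falling))
```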

 

Paper submission deadline (extended): February 28, 2013

Notification of acceptance: May 31, 2013

Final paper due: August 31, 2013

Tentative publication date: December, 2013

All submitted papers should present original research that has not been published elsewhere. Submitted manuscripts will be peer-reviewed by at least two independent reviewers. For detailed submission guidelines, please visit the website of the International Journal of Computational Linguistics and Chinese Language Processing at http://www.aclclp.org.tw/journal/submit.php. Please also feel free to contact the Guest Editor of this special issue, Dr. Shu-Chuan Tseng, at tsengsc@gate.sinica.edu.tw, if you need any additional information.

 

              
   


7-5CfP Special issue of the journal TAL on 'Named Entities and their relations'
FIRST CALL FOR CONTRIBUTIONS
Special issue of the journal TAL on 'Named Entities and their relations'
 
Editors:
Sophia Ananiadou (Sophia.ananiadou@manchester.ac.uk)
Nathalie Friburger (nathalie.friburger@univ-tours.fr)
Sophie Rosset (sophie.rosset@limsi.fr)

Named entities have been a very active field of research for many years. They have long been considered a central element of many applications involving notions such as understanding, semantic search, etc. The notion of Named Entities (NEs) covers not only proper names but also more complex entities such as multi-word expressions. Named entities are generally typed according to taxonomies of varying size that depend strongly on the application domain or the needs at hand. They classically cover names designating persons, places, or organizations, but may also refer to more technical notions such as diseases.

Despite many years of research, the detection of entities and of their relations remains a difficult problem. The main issues to be solved include ambiguity resolution, synonym detection, co-reference, and variability (acronyms, spelling, etc.). Several methods have been proposed and evaluated to improve the detection and classification of entities and of their relations, ranging from approaches based on explicit knowledge (rule-based approaches, lexicons, etc.) to supervised, weakly supervised, or even unsupervised learning approaches.

Evaluating entity detection systems requires at least reference (gold standard) corpora. Evaluating relations adds a level of complexity, since it is most often a compound task in which entities are detected first and then the relations between them. How should relation detection be evaluated within such a complete task? Which metric should be used to account for the errors of the preceding step (NE detection)? While simple named entities allow good or even very good results to be reached, this no longer holds when their definition is complex or when specialized domains are addressed.

We therefore invite contributions on any aspect of named entity processing, in particular (non-exclusive list):
- definition and typology of named entities, including in an extended sense
- named entity detection and document types (abstracts, articles, collaborative resources such as Wikipedia, specialized domains, social media such as Twitter, e-mails, discussion threads, speech, ...)
- span detection and structural analysis of NEs
- cross-document co-reference and entity tracking
- entity tracking across time, social and geographical groups, intra- and inter-document entity tracking, etc.
- named entity recognition in the general domain or in specialized domains
- annotation guidelines and schemes, tools, methods, and annotated corpora
- multilingual aspects, entity extraction from comparable or parallel corpora
- entity disambiguation
- applications involving entities
- evaluation, comparison, and validation of tools

LANGUAGE
Articles may be written in French or in English. Submissions in English are accepted only from non-French-speaking authors.
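One way to make the evaluation questions above concrete is entity-level scoring: the sketch below computes precision, recall, and F1 over predicted (span, type) triples, the usual starting point before any relation-level metric that must absorb upstream NE errors. The toy gold and predicted annotations are invented for illustration and not taken from the call.

```python
def entity_prf(gold, predicted):
    """Entity-level precision/recall/F1 over exact (start, end, type) matches.
    Relation-level scoring is typically built on top of such entity matches,
    which is how NE errors propagate into relation evaluation."""
    gold_set, pred_set = set(gold), set(predicted)
    tp = len(gold_set & pred_set)
    precision = tp / len(pred_set) if pred_set else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# Toy annotations: (start_token, end_token, type)
gold = [(0, 2, "PER"), (5, 6, "ORG"), (9, 10, "LOC")]
pred = [(0, 2, "PER"), (5, 6, "LOC"), (9, 10, "LOC")]   # one type error
print(entity_prf(gold, pred))   # -> (0.666..., 0.666..., 0.666...)
```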
THE JOURNAL
For 40 years, TAL (Traitement Automatique des Langues) has been an international journal published by ATALA (Association pour le Traitement Automatique des Langues) with the support of the CNRS. For several years it has been an online journal, with paper versions available on demand. This does not affect the reviewing and selection process in any way.

IMPORTANT DATES
- April 15, 2013: submission deadline
- July 2013: notification to authors
- Autumn 2013: publication

SUBMISSION FORMAT
Submitted articles must describe original, complete, and unpublished work. Each submission will be reviewed by two members of the programme committee. Articles (about 25 pages, in PDF) must be submitted via the Sciencesconf platform [address available very soon]. Please contact the editors of this issue if you wish to submit an article, providing a one-page abstract:
Sophia Ananiadou (Sophia.ananiadou@manchester.ac.uk)
Nathalie Friburger (nathalie.friburger@univ-tours.fr)
Sophie Rosset (sophie.rosset@limsi.fr)
Accepted papers will be at most 25 pages in PDF. The style sheet is available for download from the journal website (http://www.atala.org/-Revue-TAL-).

SCIENTIFIC COMMITTEE (tentative)
Maud Ehrmann, European Commission, JRC
Olivier Galibert, LNE, France
Natalia Grabar, STL, Université de Lille 1 et 3, France
Kais Haddar, University of Sfax, Tunisia
Thierry Hamon, LIM&Bio, Paris 13, France
Sanda Harabagiu, Texas, USA
Valia Kordoni, Humboldt-Universität, Berlin, Germany
Anna Korhonen, University of Cambridge, UK
Ioannis Korkontzelos, University of Manchester, UK
Anne-Laure Ligozat, LIMSI, France
Bernardo Magnini, FBK, HLT, Italy
Makoto Miwa, University of Manchester, UK
Claire Nedellec, MIG, INRA, France
Aurélie Névéol, LIMSI, France
Naoaki Okazaki, Tohoku University, Japan
Christian Raymond, IRISA, France
Fabio Rinaldi, University of Zurich, Switzerland
Patrick Ruch, University of Geneva, Switzerland
Benoit Sagot, ALPAGE, France
Satoshi Sekine, NYU, USA
Jian Su, A-STAR, Singapore
Junichi Tsujii, Microsoft Research Asia, China
Patrick Watrin, UCL, CENTAL, Belgium
Fabio Zanzotto, University of Rome Tor Vergata, Italy
 

7-6Special Issue of COMPUTER SPEECH AND LANGUAGE on Next Generation Paralinguistics
Call for Papers
Special Issue of COMPUTER SPEECH AND LANGUAGE on Next Generation Paralinguistics
__________________________________
http://www.journals.elsevier.com/computer-speech-and-language/call-for-papers/next-generation-computational-paralinguistics/

Computational Paralinguistics has recently reached a level of maturity that allows for its first real-life applications in interaction, coaching, media retrieval, robotics, surveillance, and many further domains. In particular, an increasing level of realism is now being faced by coping with speaker-independent analysis of highly naturalistic data in narrow-bandwidth, noisy, or reverberated conditions. At the same time, the range of speaker states and traits analysed computationally is steadily widening. This includes in particular the degree of subjectivity involved in tasks such as perceived speaker personality, likability, or intelligibility, to name a few. Both aspects require additional experience on the interplay of states and traits in speech, singing, and language. Further, with the integration into applications, novel aspects arise such as efficiency, reliability, self-learning, mobility, multi-cultural and multi-lingual aspects, the handling of groups of speakers or singers, standardisation, and user experience with such systems.

This Special Issue thus aims at shaping the Next Generation of Computational Paralinguistics. It will focus on technical issues for highly improved and reliable state and trait analysis in spoken, sung, and written language, and provide a forum for some of the very best experimental work on this topic. Original, previously unpublished submissions are encouraged within the following scope:
+ Analysis of States and Traits in Spoken, Sung, and Written Language
+ Subjectivity in Computational Paralinguistics (e.g., perceived states and traits)
+ Interdependence of States and Traits
+ Intelligibility of Language Varieties and Deviant Speech
+ Efficiency (low energy and memory consumption, fast adaptation, active learning, etc.)
+ Reliability (e.g., confidence measures, robustness against regulation and feigning, overlap)
+ Self-learning (unsupervised, partially supervised, reinforced, and deep learning)
+ Mobility (client/server distribution, packet loss, coding artefacts, privacy preservation, etc.)
+ Multicultural and Multilingual Issues
+ Speaker / Singer Group Characterisation
+ Standardisation (output encoding, feature encoding, etc.)
+ Application (interaction, voice and writing coaching, retrieval, robotics, surveillance, etc.)
+ User Experience of Computational Paralinguistics Systems

Important Dates
__________________________________
Submission Deadline: 1 April 2013
First Notification: 1 July 2013
Final Version of Manuscripts: 1 November 2013
Tentative Publication Date: January 2014

Guest Editors
__________________________________
Björn Schuller, Technische Universität München, Germany, schuller@IEEE.org
Stefan Steidl, FAU, Germany, stefan.steidl@fau.de
Anton Batliner, Technische Universität München, Germany, Anton.Batliner@lrz.uni-muenchen.de
Alessandro Vinciarelli, University of Glasgow / IDIAP, UK, alessandro.vinciarelli@glasgow.ac.uk
Felix Burkhardt, Deutsche Telekom AG, Germany, Felix.Burkhardt@telekom.de
Rob van Son, University of Amsterdam / Netherlands Cancer Institute, NL, r.v.son@nki.nl
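As a small illustration of one scope item above, Reliability (e.g., confidence measures), the sketch below trains a toy scikit-learn classifier on synthetic two-class features standing in for paralinguistic descriptors and rejects predictions whose posterior probability falls below a threshold. The data, features, and threshold are invented for illustration and are not part of the call.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for paralinguistic features (e.g., two acoustic descriptors)
# and a binary state/trait label; real systems would use far richer feature sets.
X0 = rng.normal(loc=-1.0, scale=1.0, size=(200, 2))
X1 = rng.normal(loc=+1.0, scale=1.0, size=(200, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)

clf = LogisticRegression().fit(X, y)

X_test = rng.normal(scale=1.5, size=(10, 2))
proba = clf.predict_proba(X_test)                 # class posteriors
confidence = proba.max(axis=1)                    # simple confidence measure
threshold = 0.8                                   # reject uncertain predictions
for pred, conf in zip(proba.argmax(axis=1), confidence):
    label = int(pred) if conf >= threshold else "reject"
    print(label, round(float(conf), 2))
```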
 
Submission Procedure
__________________________________
Prospective authors should follow the regular guidelines of the Computer Speech and Language Journal for electronic submission (http://ees.elsevier.com/csl). During submission, authors must select this Special Issue (short name 'NextGen Paralinguistics').