ISCA - International Speech
Communication Association



ISCApad #154

Friday, April 22, 2011 by Chris Wellekens

7 Journals
7-1 Special Issue on Deep Learning for Speech and Language Processing, IEEE Trans. ASLP

IEEE Transactions on Audio, Speech, and Language Processing
IEEE Signal Processing Society
Special Issue on Deep Learning for Speech and Language Processing
Over the past 25 years or so, speech recognition technology has been dominated largely by hidden Markov models (HMMs). Significant technological success has been achieved using complex and carefully engineered variants of HMMs. Next-generation technologies require solutions to the technical challenges presented by diversified deployment environments. These challenges arise from the many types of variability present in the speech signal itself. Overcoming these challenges is likely to require “deep” architectures with efficient and effective learning algorithms.

The deep learning paradigm has three main characteristics: 1) a layered architecture; 2) generative modeling at the lower layer(s); and 3), in general, unsupervised learning at the lower layer(s). For speech and language processing and related sequential pattern recognition applications, attempts have been made in the past to develop layered computational architectures that are “deeper” than conventional HMMs, such as hierarchical HMMs, hierarchical point-process models, hidden dynamic models, layered multilayer perceptrons, tandem-architecture neural-net feature extraction, multi-level detection-based architectures, deep belief networks, hierarchical conditional random fields, and deep-structured conditional random fields. While positive recognition results have been reported, there has been a conspicuous lack of systematic learning techniques and theoretical guidance to facilitate the development of these deep architectures. Recent communication between machine learning researchers and speech and language processing researchers has revealed a wealth of research results on insightful applications of deep learning to classical speech recognition and language processing problems. These results can potentially further advance the state of the art in speech and language processing.
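
As an illustrative sketch of characteristics 2) and 3) above, the following NumPy-only fragment (hypothetical; the call does not prescribe any particular algorithm) performs greedy layer-wise unsupervised pretraining with tied-weight sigmoid autoencoders, one common recipe for building such deep architectures:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_autoencoder(X, n_hidden, lr=0.1, epochs=200):
    """Train a one-layer sigmoid autoencoder on X; return the encoder (W, b)."""
    n_in = X.shape[1]
    W = rng.normal(0, 0.1, (n_in, n_hidden))
    b = np.zeros(n_hidden)
    c = np.zeros(n_in)
    for _ in range(epochs):
        H = 1 / (1 + np.exp(-(X @ W + b)))   # encode
        R = H @ W.T + c                      # decode with tied weights
        err = R - X                          # reconstruction error
        dH = (err @ W) * H * (1 - H)         # backprop through the encoder
        W -= lr * (X.T @ dH + err.T @ H) / len(X)
        b -= lr * dH.mean(axis=0)
        c -= lr * err.mean(axis=0)
    return W, b

def pretrain_stack(X, layer_sizes):
    """Greedy layer-wise unsupervised pretraining: each layer is an
    autoencoder trained on the previous layer's hidden representation."""
    params, inp = [], X
    for n_hidden in layer_sizes:
        W, b = train_autoencoder(inp, n_hidden)
        params.append((W, b))
        inp = 1 / (1 + np.exp(-(inp @ W + b)))   # feed codes upward
    return params

X = rng.random((100, 20))                  # toy unlabeled data
stack = pretrain_stack(X, [16, 8])
print([W.shape for W, _ in stack])         # [(20, 16), (16, 8)]
```

Each layer here is trained only to reconstruct its own input; a supervised model can then be fine-tuned on top of the resulting stack.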
In light of the significant research activity already taking place in this exciting space, and its importance, we invite papers describing various aspects of deep learning and related techniques/architectures as well as their successful applications to speech and language processing. Submissions must not have been previously published, with the exception that substantial extensions of conference or workshop papers will be considered.
The submissions must have specific connection to audio, speech, and/or language processing. The topics of particular interest will include, but are not limited to:
 • Generative models and discriminative statistical or neural models with deep structure
 • Supervised, semi-supervised, and unsupervised learning with deep structure
 • Representing sequential patterns in statistical or neural models
 • Robustness issues in deep learning
 • Scalability issues in deep learning
 • Optimization techniques in deep learning
 • Deep learning of relationships between the linguistic hierarchy and data-driven speech units
 • Deep learning models and techniques in applications such as (but not limited to) isolated or continuous speech recognition, phonetic recognition, music signal processing, language modeling, and language identification.
The authors are required to follow the Author’s Guide for manuscript submission to the IEEE Transactions on Audio, Speech, and Language Processing at
http://www.signalprocessingsociety.org/publications/periodicals/taslp/taslp-author-information
Submission deadline: September 15, 2010
Notification of Acceptance: March 15, 2011
Final manuscripts due: May 15, 2011
Date of publication: August 2011
For further information, please contact the guest editors:
Dong Yu (dongyu@microsoft.com)
Geoffrey Hinton (hinton@cs.toronto.edu)
Nelson Morgan (morgan@ICSI.Berkeley.edu)
Jen-Tzung Chien (jtchien@mail.ncku.edu.tw)
Shigeki Sagayama (sagayama@hil.t.u-tokyo.ac.jp)


7-2 ACM TSLP Special Issue: “Machine Learning for Robust and Adaptive Spoken Dialogue Systems”

ACM TSLP - Special Issue: Call for Papers:
“Machine Learning for Robust and Adaptive Spoken Dialogue Systems”

* Submission Deadline 1 July 2010 *
http://tslp.acm.org/specialissues.html

During the last decade, research in the field of Spoken Dialogue
Systems (SDS) has experienced increasing growth, and new applications
include interactive search, tutoring and “troubleshooting” systems,
games, and health agents. The design and optimization of such SDS
requires the development of dialogue strategies which can robustly
handle uncertainty, and which can automatically adapt to different
types of users (novice/expert, youth/senior) and noise conditions
(room/street). New statistical learning techniques are also emerging
for training and optimizing speech recognition, parsing / language
understanding, generation, and synthesis for robust and adaptive
spoken dialogue systems.

Automatic learning of adaptive, optimal dialogue strategies is
currently a leading domain of research. Among machine learning
techniques for spoken dialogue strategy optimization, reinforcement
learning using Markov Decision Processes (MDPs) and Partially
Observable MDPs (POMDPs) has become a particular focus.
One concern for such approaches is the development of appropriate
dialogue corpora for training and testing. However, the small amount
of data generally available for learning and testing dialogue
strategies does not contain enough information to explore the whole
space of dialogue states (and of strategies). Therefore, dialogue
simulation is most often required to expand existing datasets, and
stochastic modelling and simulation of man-machine spoken dialogue has
become a research field in its own right. User simulations for
different types of user are a particular new focus of interest.
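
As a toy illustration of the reinforcement-learning setting described above, the sketch below (all names, states and rewards are hypothetical; real systems use POMDPs and learned user simulations) optimizes a dialogue strategy for a two-slot form-filling task with tabular Q-learning, where a noisy "recognizer" sometimes fails to fill a slot:

```python
import random

random.seed(1)

N_SLOTS = 2
ACTIONS = ["ask", "close"]

def step(state, action, noise=0.2):
    """Toy slot-filling dialogue MDP. State = number of filled slots.
    'ask' fills a slot unless the recognizer mishears (prob `noise`);
    'close' ends the dialogue, rewarded only if all slots are filled."""
    if action == "close":
        return None, (10 if state == N_SLOTS else -10)
    filled = state + (0 if random.random() < noise else 1)
    return min(filled, N_SLOTS), -1          # each extra turn costs -1

Q = {(s, a): 0.0 for s in range(N_SLOTS + 1) for a in ACTIONS}
alpha, gamma, eps = 0.2, 0.95, 0.1           # learning rate, discount, exploration

for _ in range(5000):                        # training episodes
    s = 0
    while s is not None:
        a = random.choice(ACTIONS) if random.random() < eps \
            else max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        target = r if s2 is None else r + gamma * max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += alpha * (target - Q[(s, a)])
        s = s2

policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_SLOTS + 1)}
print(policy)
```

Under these rewards the learned strategy should be to keep asking until both slots are filled and only then close the dialogue, a robust behaviour under recognition noise.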

Specific topics of interest include, but are not limited to:

 • Robust and adaptive dialogue strategies
 • User simulation techniques for robust and adaptive strategy
learning and testing
 • Rapid adaptation methods
 • Modelling uncertainty about user goals
 • Modelling user’s goal evolution along time
 • Partially Observable MDPs in dialogue strategy optimization
 • Methods for cross-domain optimization of dialogue strategies
 • Statistical spoken language understanding in dialogue systems
 • Machine learning and context-sensitive speech recognition
 • Learning for adaptive Natural Language Generation in dialogue
 • Machine learning for adaptive speech synthesis (emphasis, prosody, etc.)
 • Corpora and annotation for machine learning approaches to SDS
 • Approaches to generalising limited corpus data to build user models
and user simulations
 • Evaluation of adaptivity and robustness in statistical approaches
to SDS and user simulation.

Submission Procedure:
Authors should follow the ACM TSLP manuscript preparation guidelines
described on the journal web site http://tslp.acm.org and submit an
electronic copy of their complete manuscript through the journal
manuscript submission site http://mc.manuscriptcentral.com/acm/tslp.
Authors are required to specify that their submission is intended for
this Special Issue by including on the first page of the manuscript
and in the field “Author’s Cover Letter” the note “Submitted for the
Special Issue of Speech and Language Processing on Machine Learning
for Robust and Adaptive Spoken Dialogue Systems”. Without this
indication, your submission cannot be considered for this Special
Issue.

Schedule:
• Submission deadline : 1 July 2010
• Notification of acceptance: 1 October 2010
• Final manuscript due: 15 November 2010

Guest Editors:
Oliver Lemon, Heriot-Watt University, Interaction Lab, School of
Mathematics and Computer Science, Edinburgh, UK.
Olivier Pietquin, Ecole Supérieure d’Électricité (Supelec), Metz, France.

http://tslp.acm.org/cfp/acmtslp-cfp2010-02.pdf

 


7-3 IEEE TRANSACTIONS ON AUDIO, SPEECH AND LANGUAGE PROCESSING Special Issue on New Frontiers in Rich Transcription
IEEE  TRANSACTIONS ON AUDIO, SPEECH AND LANGUAGE PROCESSING
Special Issue on New Frontiers in Rich Transcription

A rich transcript is a transcript of a recorded event along with
metadata to enrich the word stream with useful information such as
identifying speakers, sentence units, proper nouns, speaker locations,
etc. As the volume of online media increases and additional, layered
content extraction technologies are built, rich transcription has
become a critical foundation for delivering extracted content to
downstream applications such as spoken document retrieval,
summarization, semantic navigation, speech data mining, and others.

The special issue on 'New Frontiers in Rich Transcription' will focus
on recent research on technologies that automatically generate rich
transcriptions and on their applications. The field of
rich transcription draws on expertise from a variety of disciplines
including: (a) signal acquisition (recording room design, microphone
and camera design, sensor synchronization, etc.), (b) automatic
content extraction and supporting technologies (signal processing,
room acoustics compensation, spatial and multichannel audio
processing, robust speech recognition, speaker
recognition/diarization/tracking, spoken language understanding,
multimodal information integration from audio and video sensors,
etc.), (c) corpora infrastructure (meta-data standards, annotation
procedures, etc.), and (d) performance benchmarking (ground truthing,
evaluation metrics, etc.). In the end, rich transcriptions serve as an
enabler of a variety of spoken document applications.

Many large international projects (e.g. the NIST RT evaluations) have
been active in the area of rich transcription, engaging in efforts of
extracting useful content from a range of media such as broadcast
news, conversational telephone speech, multi-party meeting recordings,
and lecture recordings. The current special issue aims to be one of the
first in bringing together the enabling technologies that are critical
in rich transcription of media with a large variety of speaker styles,
spoken content and acoustic environments. This area has also led to
new research directions recently, such as multimodal signal processing
or automatic human behavior modeling.

The purpose of this special issue is to present overview papers and
recent advances in Rich Transcription research, as well as new ideas
about the direction of the field. We encourage submissions about the
following and other related topics:
  * Robust Automatic Speech Recognition for Rich Transcription
  * Speaker Diarization and Localization
  * Speaker-attributed-Speech-to-Text
  * Data collection and Annotation
  * Benchmarking Metrology for Rich Transcription
  * Natural language processing for Rich Transcription
  * Multimodal Processing for Rich Transcription
  * Online Methods for Rich Transcription
  * Future Trends in Rich Transcription

Submissions must not have been previously published, with the
exception that substantial extensions of conference papers will be
considered.

Submissions must be made through IEEE's Manuscript Central at
http://mc.manuscriptcentral.com/sps-ieee
selecting this special issue as the target.

Important Dates:
EXTENDED Submission deadline: 1 September 2010
Notification of acceptance: 1 January 2011
Final manuscript due:  1 July 2011

For further information, please contact the guest editors:
Gerald Friedland, fractor@icsi.berkeley.edu
Jonathan Fiscus, jfiscus@nist.gov
Thomas Hain, T.Hain@dcs.shef.ac.uk
Sadaoki Furui, furui@cs.titech.ac.jp


7-5 Special Issue on Multimedia Semantic Computing


SPECIAL ISSUE ON Multimedia Semantic Computing

International Journal of Multimedia Data Engineering and Management (IJMDEM)

Guest Editors:
Chengcui Zhang, The University of Alabama at Birmingham, USA
Hongli Luo, Indiana University - Purdue University Fort Wayne, USA
Min Chen, University of Montana, USA

INTRODUCTION:

Recent advances in storage, hardware, information technology, communication and 
networking have resulted in a large amount of multimedia data, such as images, 
audio and video. Multimedia data are used in a wide range of applications in 
entertainment, digital libraries, health care, and distance learning. Efficient 
methods to extract knowledge from multimedia content have attracted the 
interest of a large research community. The volume of multimedia data presents 
great challenges in data analysis, indexing, retrieval, distribution and 
management. The increasing popularity of mobile devices, such as smart phones 
and PDAs, has created demand for pervasive multimedia services which can adapt 
video content to match user preferences. New research trends such as semantic 
computing, multi-modality interaction and cross-mining, cooperative processing, 
new multimedia standards (e.g., MPEG-21), Quality of Service (QoS), security, 
and social networking provide both challenges and opportunities for multimedia 
research.

Semantic computing can be defined as computing with (machine-processable) 
descriptions of content and intentions. Semantic techniques can be used to 
extract the content of multimedia, texts, services, and structured data. The 
scope of semantic computing now extends to many domains and applications 
such as information retrieval, knowledge management, natural language 
processing and data mining. Semantics can enrich the methods of multimedia 
research and open new research areas in multimedia semantic computing.

Multimedia semantic computing is the application of semantic computing to 
multimedia data. Research in multimedia applications can exploit the richness 
of semantics to enhance the performance of services. Understanding 
multimedia data at the semantic level can bring new research methods to various 
domains of multimedia applications such as signal processing, content-based 
retrieval, and data mining. The application of semantic computing to multimedia 
is further motivated by human-computer interaction in multimedia 
management. The convergence of semantics and multimedia facilitates the 
management, use and understanding of data in traditional multimedia 
applications such as multimedia analysis, indexing, annotation, retrieval, 
delivery and management.

While multimedia semantic computing changes multimedia applications and 
services via semantic concepts, several issues related to semantic modeling and 
visual analysis remain challenging. One of the major research challenges 
is how to bridge the gap between the low-level features of multimedia data 
and the high-level meanings of multimedia content. Low-level features such as 
color, texture and shape can be automatically extracted from the multimedia 
data. How to model semantic concepts based on these low-level features is a 
prerequisite for multimedia analysis and indexing. The high-level semantic 
concepts of multimedia data can facilitate content-based applications, for 
example, image/video classification and the retrieval of images and of events 
of particular interest in video. Semantic features can also be integrated with 
other techniques, such as user relevance feedback, annotations, multimodalities 
and metadata management, to efficiently and effectively retrieve multimedia 
data.
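
To make the notion of automatically extracted low-level features concrete, here is a minimal, illustrative sketch (function name and parameters are hypothetical) of one such feature, a joint RGB color histogram of the kind commonly used for content-based indexing:

```python
import numpy as np

def color_histogram(image, bins=4):
    """Quantize each RGB channel into `bins` levels and count joint
    occurrences: a classic low-level color feature for indexing."""
    quantized = image.astype(np.uint32) * bins // 256         # per-channel bin 0..bins-1
    codes = (quantized[..., 0] * bins + quantized[..., 1]) * bins + quantized[..., 2]
    hist = np.bincount(codes.ravel(), minlength=bins ** 3)    # joint RGB counts
    return hist / hist.sum()                                  # normalize to sum to 1

rng = np.random.default_rng(0)
img = rng.integers(0, 256, (32, 32, 3), dtype=np.uint8)       # synthetic image
h = color_histogram(img)
print(h.shape, round(h.sum(), 6))   # (64,) 1.0
```

Bridging the semantic gap then amounts to learning a mapping from such feature vectors to high-level concepts, which this sketch deliberately leaves out.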

The emerging pervasive multimedia computing applications deliver and present 
data to mobile users. The rich semantics of multimedia data can be exploited in 
context-aware computing applications to provide personalized services. The 
provision of the data is envisioned to be adaptive to location, time, 
devices and user preferences. Challenges related to this issue include the 
representation of the context of multimedia data using semantic concepts, 
semantic user interfaces, and personalization of interfaces to multimedia data.

OBJECTIVE OF THE SPECIAL ISSUE:

The goal of this special issue is to show some of the current research in the 
area of multimedia semantic computing. To this aim, we will bring together a 
number of high-quality and relevant papers that discuss the state-of-the-art 
research on semantic-based multimedia systems, present theoretic framework and 
practical implementations, and identify challenges and open issues in 
multimedia semantic modeling and integration. The special issue calls for 
research in various domains and applications, including pervasive multimedia 
computing systems, personalized multimedia information retrieval systems which 
adapt to the user's needs, integration of semantic content and schema from 
distributed multimedia sources so that the user sees a unified view of 
heterogeneous data, clustering and classification of semantically tied 
information in different multimedia modalities, security issues in 
multimedia/hypertext systems, and so forth. We hope this special issue will 
show a broad picture of the emerging semantic multimedia research.

RECOMMENDED TOPICS:

Topics to be discussed in this special issue include (but are not limited to) 
the following:

 • Distributed multimedia systems
 • Human-centered multimedia computing
 • Image/video/audio databases
 • Mobile multimedia computing
 • Multimedia assurance and security
 • Multimedia data mining
 • Multimedia data modeling
 • Multimedia data storage
 • Multimedia data visualization
 • Multimedia retrieval (image, video, audio, etc.)
 • Multimedia streaming and networking
 • Multimodal data analysis and interaction
 • Novel applications, such as biomedical multimedia computing and multimedia forensics
 • Semantic content analysis
 • Semantic integration and meta-modeling
 • Social network analysis from a multimedia perspective

SUBMISSION PROCEDURE:
Researchers and practitioners are invited to submit papers for this special 
theme issue on Multimedia Semantic Computing on or before October 18, 2010. All 
submissions must be original and may not be under review by another 
publication. INTERESTED AUTHORS SHOULD CONSULT THE JOURNAL'S GUIDELINES FOR 
MANUSCRIPT SUBMISSIONS at 
http://www.igi-global.com/development/author_info/guidelines submission.pdf. 
All submitted papers will be reviewed on a double-blind, peer review basis. 
Papers must follow APA style for reference citations.

ABOUT International Journal of Multimedia Data Engineering and Management 
(IJMDEM):

The primary objective of the International Journal of Multimedia Data 
Engineering and Management (IJMDEM) is to promote and advance multimedia 
research from different aspects in multimedia data engineering and management. 
It provides a forum for university researchers, scientists, industry 
professionals, software engineers and graduate students who need to become 
acquainted with new theories, algorithms, and technologies in multimedia 
engineering, and to all those who wish to gain a detailed technical 
understanding of what multimedia engineering involves. Novel and fundamental 
theories, algorithms, technologies, and applications will be published to 
support this mission.

This journal is an official publication of the Information Resources Management 
Association
www.igi-global.com/ijmdem

Editor-in-Chief: Shu-Ching Chen
Published: Quarterly (both in Print and Electronic form)

PUBLISHER:
The International Journal of Multimedia Data Engineering and Management 
(IJMDEM) is published by IGI Global (formerly Idea Group Inc.), publisher of 
the Information Science Reference (formerly Idea Group Reference) and Medical 
Information Science Reference imprints. For additional information regarding 
the publisher, please visit www.igi-global.com.

All submissions should be directed to the attention of:

Chengcui Zhang
Guest Editor
International Journal of Multimedia Data Engineering and Management
E-mail: zhang@cis.uab.edu


7-6 Special Issue on Multimedia Data Annotation and Retrieval using Web 2.0

Multimedia Tools and Applications Journal
Special Issue on Multimedia Data Annotation and Retrieval using Web 2.0
-----------------------------------------------------------------------------


Introduction
-------------

Currently, more than 80% of the information exchanged on the Web carries personal data and is primarily of a multimedia nature (video, audio, images, etc.). Two main reasons are behind this phenomenon: 1) each of the major players on the Internet (individual users, companies, local and/or regional authorities, etc.) is both a data producer and a data consumer at the same time, and 2) the various tools of Web 2.0 (blogs, social networks, etc.) make it easy to share, access, and publish information on the Web. While such tools provide users with many easy-to-use functionalities, several issues remain unaddressed. For instance, how can the processes of annotating and describing a set of photos be automated using the annotations/descriptions provided by the user's friends on his/her blog/wiki/social network? How can the user be provided with more effective and expressive means of multimedia information retrieval? How can a multimedia data repository (e.g., a photo album) be protected while related information about the same content has already been published (by some of the user's friends) on the same or another blog/wiki/social network?
The general aim of this special issue is to assess current approaches and technologies, as well as to outline the major challenges and future perspectives, related to the use of Web 2.0 in providing automatic annotation and easing retrieval and access control of multimedia data. It will provide an overview of the state of the art and of future directions in this field, including a wide range of interdisciplinary contributions from various research groups.

Topics
-------

Topics of interest include but are not limited to the following:

* Semantic Web and Web 2.0
* Social Networks
* Multimedia Semantics
* Contextual Multimedia Metadata
* Annotation Enriching
* Query Rewriting
* Metadata Modeling and Contextual Ontologies for Multimedia Applications
* Management of Multimedia Metadata (Relational and XML Databases, Semantic Stores, etc.)
* Multimedia Authoring
* Multimedia-based Access Control and Authorization
* Multimedia Retrieval
* Personalizing Multimedia Content
* Cross-media Clustering
* Mobile Applications
* Multimedia Web Applications and Related Metadata Support
* Novel and Challenging Multimedia Applications

Important Dates
----------------

Submission Deadline: November 15, 2010
Notification of First Round of Review: January 30, 2011
Submission of the Revised Manuscript: March 15, 2011
Notification of Final Acceptance: April 30, 2011
Camera-ready Submission: May 30, 2011

Submission
-----------

Authors are invited to submit contributions through the journal Online Submission System (http://www.editorialmanager.com/mtap), not exceeding 8000 words (approx. 20 pages single-spaced) including diagrams and references, in at least a 10-point Times Roman-like font. All submissions will undergo a blind peer review by at least three external expert reviewers to ensure a high standard of quality. Referees will consider originality, significance, technical soundness, clarity of exposition, and relevance to the special issue topics above. Since a 'blind' paper evaluation method will be used, authors are kindly requested to provide the full paper WITHOUT any reference to any of the authors. The manuscript must contain, on its first page, the paper title, an abstract and a list of keywords, but NO NAMES OR CONTACT DETAILS WHATSOEVER are to be included in any part of this file.

Guest Editors
-------------

Richard Chbeir (Bourgogne University, France,
richard.chbeir@u-bourgogne.fr)
Vincent Oria (New Jersey Institute of Technology, USA,
oria@homer.njit.edu)


7-7 A New Journal on Speech Sciences and Call for Papers for a Special Issue on Experimental Prosody
Dear fellow prosodists,

It is with special joy that Sandra Madureira and I announce here the launch of a new electronic journal which follows the principles of the Directory of Open Access Journals (DOAJ)*. The Journal of Speech Sciences (<http://www.journalofspeechsciences.org>) is sponsored by the Luso-Brazilian Association of Speech Sciences, an organisation founded in 2007, initially to help organise Speech Prosody 2008.

This journal proposes to occupy an ecological niche not covered by the other journals in which our community can publish, especially as regards its strength in linguistic and linguistically-related aspects of speech sciences research (but also speech pathology, new methodologies and techniques, etc.). Another reason for its special place in the speech research ecosystem is the optional choice of language. Though English is the journal's main language, people wanting to disseminate their work in Portuguese or French can do so, provided that they add an extended abstract in English (a way to make their work more visible outside the luso- and francophone communities).

This journal was only made possible thanks to a great team working for it and an exceptionally good editorial board; see the journal web page: <http://www.journalofspeechsciences.org>.

For its first issue we propose a special issue on Experimental Prosody. Please, see the Call for Papers below and send your paper to us!

All the best, Plinio (State Univ. of Campinas, Brazil) and Sandra (Catholic Univ. of São Paulo, Brazil)

* Official registration with the DOAJ and attribution of an ISSN number can only take place after the first issue.
--

Call for Papers

The Journal of Speech Sciences  (JoSS) is an open access journal which follows the principles of the Directory of Open Access Journals (DOAJ), meaning that its readers can freely read, download, copy, distribute, print, search, or link to the full texts of any article electronically published in the journal. It is accessible at <http://www.journalofspeechsciences.org>.

The JoSS covers experimental research on scientific aspects of speech, language and linguistic communication processes. Coverage also includes articles dealing with pathological topics, or articles of an interdisciplinary nature, provided that experimental and linguistic principles underlie the work reported. Experimental approaches are emphasized in order to stimulate the development of new methodologies, new annotated corpora, and new techniques aimed at fully testing current theories of speech production and perception, as well as phonetic and phonological theories and their interfaces.

The JoSS is supported by the initiative of the Luso-Brazilian Association of Speech Sciences (LBASS), <http://www.lbass.org>. Founded on 16 February 2007, the LBASS aims at promoting, stimulating and disseminating research and teaching in speech sciences in Brazil and Portugal, as well as establishing a channel with sister associations abroad.

The JoSS editorial team decided to launch the journal with a special issue on Experimental Prosody. The purpose of this Special Issue is to present recent progress and significant advances in areas of speech science devoted to experimental approaches in prosody research. Submitted papers must address a topic specific to experimental prosody in one of the following research areas:

Experimental Prosodic Phonology; Acoustics of prosody; Articulatory prosody; Perception of prosody; Prosody, discourse and dialogue; Emotion and Expression; Paralinguistic and nonlinguistic cues of prosody; Prosody physiology and pathology; Prosody acquisition; Prosody and the brain (especially neuro-imagery and EEG evidence of syntax-prosody interface and prosody functions); Corpus design and annotation for prosody research; Psycholinguistics of speech prosody processing.

Original, previously unpublished contributions will be reviewed by at least two independent reviewers, though the final decision as to publication is taken by the two editors. The primary language of the Journal is English. Contributions in Portuguese and in French are also accepted, provided that a 1-page (circa 500 words) abstract in English is included. The goal of this policy is to ensure wide dissemination of quality research written in these two Romance languages.

For preparing the manuscript, please follow the instructions on the JoSS webpage and submit it to the editors with the subject “Submission: Special Issue on Experimental Prosody”.

 
Editors

Plinio A. Barbosa (Speech Prosody Studies Group/State University of Campinas, Brazil)
Sandra Madureira (LIACC/Catholic University of São Paulo, Brazil)

 E-mail: {pabarbosa, smadureira}@journalofspeechsciences.org

 Important Dates

Submission deadline:             January 30th, 2011
Notification of acceptance:    March 10th, 2011
Final manuscript due:             March 25th, 2011
Publication date:                    April, 2011


7-8Special issue Signal Processing : LATENT VARIABLE ANALYSIS AND SIGNAL SEPARATION

The journal Signal Processing published by Elsevier is issuing a call for a special issue on latent variable models and source separation. Papers dealing with multi-talker ASR and noise-robust ASR using source separation techniques are highly welcome.



                         SIGNAL PROCESSING
               http://www.elsevier.com/locate/sigpro

                          Special issue on
           LATENT VARIABLE ANALYSIS AND SIGNAL SEPARATION

                     DEADLINE: JANUARY 15, 2011


While independent component analysis and blind signal separation have become mainstream topics in signal and image processing, new approaches have emerged to solve problems involving nonlinear signal mixtures or various other types of latent variables, such as semi-blind models and matrix or tensor decompositions. All these recent topics have led to new developments and promising applications, and they were the focus of the conference LVA/ICA 2010, which took place in Saint-Malo, France, from September 27 to 30, 2010.

The aim of this special issue is to present up-to-date developments in Latent Variable Analysis and Signal Separation, including theoretical analysis, algorithms and applications. Contributions are welcome both from attendees of the above conference and from authors who did not attend it but are active in these areas of research.

Examples of topics relevant to the special issue include:
- Non-negative matrix factorization
- Joint tensor factorization
- Latent variables
- Source separation
- Nonlinear ICA
- Noisy ICA
- BSS/ICA applications: image analysis, speech and audio data, encoding of natural scenes and sound, telecommunications, data mining, medical data processing, genomic data analysis, finance,...
- Unsolved and emerging problems: causality detection, feature selection, data mining,...

SUBMISSION INSTRUCTIONS:
Manuscript submissions shall be made through the Elsevier Editorial System (EES) at
http://ees.elsevier.com/sigpro/
Once logged in, click on “Submit New Manuscript” then select “Special Issue: LVA” in the “Choose Article Type” dropdown menu.

IMPORTANT DATES:
January 15, 2011: Manuscript submission deadline
May 15, 2011: Notification to authors
September 15, 2011: Final manuscript submission
December 15, 2011: Publication

GUEST EDITORS:
Vincent Vigneron, University of Evry – Val d’Essonne, France
Remi Gribonval, INRIA, France
Emmanuel Vincent, INRIA, France
Vicente Zarzoso, University of Nice – Sophia Antipolis, France
Terrence J. Sejnowski, Salk Institute, USA


7-9ACM Trans. on Information Systems Special Issue on Searching Speech
Call for Papers
Special Issue on Searching Speech
http://tois.acm.org/announcement.html
Submission Deadline: 1 March 2011
 
ACM Transactions on Information Systems is soliciting contributions to a special issue on the topic of 'Searching Speech'. The special issue will be devoted to algorithms and systems that use speech recognition and other types of spoken audio processing techniques to retrieve information, and, in particular, to provide access to spoken audio content or multimedia content with a speech track.
 
The field of spoken content indexing and retrieval has a long history dating back to the development of the first broadcast news retrieval systems in the 1990s. More recently, however, work on searching speech has been moving towards spoken audio that is produced spontaneously and in conversational settings. In contrast to the planned speech that is typical for the broadcast news domain, spontaneous, conversational speech is characterized by high variability and the lack of inherent structure. Domains in which researchers face such challenges include: lectures, meetings, interviews, debates, conversational broadcast (e.g., talk-shows), podcasts, call center recordings, cultural heritage archives, social video on the Web, spoken natural language queries and the Spoken Web.
 
We invite the submission of papers that describe research in the following areas:
 • Integration of information retrieval algorithms with speech recognition and audio analysis techniques
 • Interfaces and techniques to improve user interaction with speech collections
 • Indexing diverse, large-scale collections
 • Search effectiveness and efficiency, including exploitation of additional information sources
 
For submission instructions, please refer to http://tois.acm.org/authors.html and add a comment that the submission is intended for the special issue on Searching Speech. ACM TOIS is a leading journal in the field of information retrieval, dedicated to publishing high-quality papers on the design and evaluation of systems that find, organize, and analyze information. Authors should note that submissions building on previous conference or workshop publications are allowed, but must contain a minimum of 50% new material.
 
Important Dates:
Paper submission deadline: Tuesday, 1 March 2011
Author Notification date: Sunday, 1 May 2011
 
Guest Editors:
Franciska de Jong (University of Twente, Netherlands)
Wessel Kraaij (Radboud University Nijmegen, Netherlands & TNO, Netherlands)
Martha Larson (Delft University of Technology, Netherlands)
Steve Renals (University of Edinburgh, UK)
 

7-10CfP IEEE Signal Processing Magazine: Special Issue on Fundamental Technologies in Modern Speech Recognition
CALL FOR PAPERS
IEEE Signal Processing Magazine
Special Issue on Fundamental Technologies in Modern Speech Recognition
		 
Guest Editors:		 
Sadaoki Furui   Tokyo Institute of Technology, Tokyo, Japan  
                (furui@cs.titech.ac.jp)
Li Deng         Microsoft Research, Redmond, USA (deng@microsoft.com)
Mark Gales      University of Cambridge, Cambridge, UK (mjfg@eng.cam.ac.uk)
Hermann Ney     RWTH Aachen University, Aachen, Germany
                (ney@cs.rwth-aachen.de)
Keiichi Tokuda  Nagoya Institute of Technology, Nagoya, Japan 
                (tokuda@nitech.ac.jp)

Recently, various statistical techniques that form the basis of the fundamental technologies underlying today’s automatic speech recognition (ASR) research and applications have attracted renewed attention. These techniques have contributed significantly to progress in ASR, including speaker recognition, and to its various applications. The purpose of this special issue is to bring together leading experts from various disciplines to explore the impact of statistical approaches on ASR. The special issue will provide a comprehensive overview of recent developments and open problems.

This Call for Papers invites researchers to contribute articles that have a broad appeal to the signal processing community. Such an article could be, for example, a tutorial on the fundamentals or a presentation of a state-of-the-art method. Examples of topics that could be addressed include, but are not limited to:
 * Supervised, unsupervised, and lightly supervised training/adaptation
 * Speaker-adaptive and noise-adaptive training
 * Discriminative training
 * Large-margin based methods
 * Model complexity optimization
 * Dynamic Bayesian networks for various levels of speech modeling and decoding
 * Deep belief networks and related deep learning techniques
 * Sparse coding for speech feature extraction and modeling
 * Feature parameter compensation/normalization
 * Acoustic factorization
 * Conditional random fields (CRF) for modeling and decoding
 * Acoustic source separation by PCA and ICA
 * De-reverberation
 * Rapid language adaptation for multilingual speech recognition
 * Weighted-finite-state-transducer (WFST) based decoding
 * Uncertainty decoding
 * Speaker recognition, especially text-independent speaker verification
 * Statistical framework for human-computer dialogue modeling
 * Automatic speech summarization and information extraction

Submission Procedure:
Prospective authors should submit their white papers to the web submission system at http://mc.manuscriptcentral.com/spmag-ieee.

Schedule:
 * White paper due:         October 1, 2011
 * Invitation notification: November 1, 2011
 * Manuscript due:          February 1, 2012
 * Acceptance notification: April 1, 2012
 * Final manuscript due:    May 15, 2012
 * Publication date:        September 15, 2012 

