
ISCApad #190

Thursday, April 10, 2014 by Chris Wellekens

7 Journals
7-1 Special Issue on Natural Language Processing 2014 (International Journal of Advanced Computer Science and Applications)

Call for papers:

Special Issue on Natural Language Processing 2014

INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS 

ISSN: 2156-5570 (Online), ISSN: 2158-107X (Print)

DOI: 10.14569/issn.2156-5570

http://thesai.org/Publications/NLP2014

 

 

Scope of this Special Issue

This special issue of IJACSA aims to bring together articles that report advances in Natural Language Processing, covering both experimental work and practical applications, related to the exploitation and distillation of textual material for information access and knowledge creation. We therefore call for contributions in the areas of Automatic Text Summarization, Adaptable Information Extraction and Knowledge Population, Knowledge Induction from Text, Text Simplification, Text Entailment and Learning by Reading, and Natural Language Processing for Social Media.

 

Paper submission

Manuscripts should be submitted via email to editorijacsa@thesai.org. Subject line should be: 'NLP_SpecialIssue: Paper Title'.

Research articles, review articles, and communications are invited. Authors may submit their manuscripts according to the following schedule:

  • Submission Deadline: February 01, 2014
  • Review Notification: February 15, 2014
  • Registration Deadline: March 01, 2014
  • Camera Ready Submission: March 15, 2014
  • Final publication: April 01, 2014

 

This issue will be submitted for indexing in various databases; for more information, please visit: http://thesai.org/Publications/Citations?code=IJACSA.

For more information on the special issue, please visit: http://thesai.org/Publications/NLP2014 or contact the Editor at editorijacsa@thesai.org.

 

Guest Editor: Dr. T. V. Prasad

 

Regards,

 


7-2 Special Issue on 'Signal Processing Techniques for Assisted Listening' of IEEE SIGNAL PROCESSING MAGAZINE
CALL FOR PAPERS
IEEE Signal Processing Society
Special Issue
IEEE SIGNAL PROCESSING MAGAZINE
Special Issue on 'Signal Processing Techniques for Assisted Listening'

 

Aims and Scope
With rapid advances in microelectronics and parallel computing, significant computational power is now readily available in ever smaller battery-operated consumer electronics devices. This has paved the way for applications such as active noise cancellation (ANC) headphones, hearing protectors, communication headsets, and 3-D glasses, to name a few. Hearing aids have likewise seen large advances in electronics and functionality. To a large extent, this rapid development can be attributed to the popularity of mobile phones: these devices are no longer used merely as communication tools but as multimedia and gaming platforms. Accordingly, they require sophisticated processing for augmented reality, in which the virtual listening world is combined conveniently with situational acoustic awareness. The same can be said for assistive listening devices (ALDs), including hearing aids, personal sound amplification devices, and related audio capture accessories. Here the challenge is to render sound as accessible as possible in order to provide hearing support in challenging acoustic situations.

All the aforementioned applications are underpinned by fundamental signal processing problems related to sound capture and sound rendering. On the capture side, problems such as sensor technology (microphones, accelerometers, etc.), acoustic scene analysis, audio signal enhancement, noise suppression with single and multiple sensors, feedback suppression, and dereverberation need to be considered. On the rendering side, relevant problems include active noise cancellation, loudspeaker equalization (for mimicking or adapting outer-ear characteristics), 3-D audio rendering, acoustic scene visualization, automatic mixing, and psycho-acoustic processing.

This special issue focuses on technical challenges of assisted listening from a signal processing perspective. Prospective authors are invited to contribute tutorial and survey articles that articulate signal processing methodologies which are critical for applying assisted listening techniques to mobile phones and other communication devices. Of particular interest is the role of signal processing in combining multimedia content, voice communication and voice pick-up in various real-world settings.

Tutorial and survey papers are solicited on advances in signal processing that particularly apply to the following applications:

  • Assistive listening devices, hearing aids and personal sound amplifiers
  • Communication devices
  • Hearing protection and active noise control
  • Navigation systems

These can include the following suggested topics as they relate to the above applications:

Signal processing for robust sound acquisition:
  • Speech enhancement/intelligibility improvement
  • Speech separation/separation of non-stationary signals
  • Reverberation reduction
  • Array signal processing and distributed sensors
  • Multi-modal acquisition methods

Signal processing for acoustic rendering:
  • Signal spatialization/3D sound/automatic mixing
  • Motion compensation (head tracking, GPS systems)
  • Environment-sensitive intelligibility improvement
  • Techniques for natural sound in headphones

Submission Process
Articles submitted to this special issue must be clearly relevant to advanced acoustic signal processing for assisted listening. All submissions will be peer reviewed according to the IEEE and Signal Processing Society guidelines. Submitted articles should not have been published or be under review elsewhere. Manuscripts should be submitted online at http://mc.manuscriptcentral.com/sps-ieee using the Manuscript Central interface. Submissions to this special issue of the IEEE SIGNAL PROCESSING MAGAZINE should have significant tutorial value. Prospective authors should consult http://www.signalprocessingsociety.org/publications/periodicals/spm/ for guidelines and information on paper submission.

Important Dates: Expected publication date for this special issue is March 2015.

Time Schedule:
  • White paper (4 pages) due: February 10, 2014
  • Invitation notification: February 24, 2014
  • Manuscript submission due: May 15, 2014
  • Acceptance notification: July 8, 2014
  • Revised manuscript due: August 20, 2014
  • Final acceptance notification: September 20, 2014
  • Final material from authors: November 8, 2014 (strict)
  • Publication date: March 2015

Guest Editors
Sven Nordholm, Lead GE, Curtin University, Perth, Western Australia (s.nordholm@curtin.edu.au)
Walter Kellermann, Friedrich-Alexander University, Erlangen-Nuremberg, Germany (wk@lnt.de)
Simon Doclo, University of Oldenburg, Oldenburg, Germany (simon.doclo@uni-oldenburg.de)
Vesa Välimäki, Aalto University, Espoo, Finland (vesa.valimaki@aalto.fi)
Shoji Makino, University of Tsukuba, Tsukuba, Japan (maki@tara.tsukuba.ac.jp)
John Hershey, Mitsubishi Electric Research Laboratories, Boston, USA (hershey@merl.com)

Back  Top

7-3 Special Issue on Spatial Audio of IEEE Journal of Selected Topics in Signal Processing
CALL FOR PAPERS
IEEE Journal of Selected Topics in Signal Processing
Special Issue on Spatial Audio

 

Spatial audio is an area that has gained popularity in recent years. Audio reproduction setups have evolved from traditional two-channel loudspeaker setups towards multi-channel loudspeaker setups. Advances in acoustic signal processing have even made it possible to create a surround sound listening experience using traditional stereo speakers and headphones. Finally, there has been increased interest in creating different sound zones in the same acoustic space (also referred to as personal audio). At the same time, the computational capacity of mobile audio playback devices has increased significantly. These developments enable new possibilities for advanced audio signal processing, such that in the future we can record, transmit and reproduce spatial audio in ways that have not been possible before. In addition, there have been fundamental advances in our understanding of 3D audio.

Due to the increasing number of different formats and reproduction systems for spatial audio, ranging from headphones to 22.2 speaker systems, it is a major challenge to ensure interoperability between formats and systems, and consistent delivery of high-quality spatial audio. Therefore, the MPEG committee is in the process of establishing new standards for 3D Audio Content Delivery.

The scope of this Special Issue on Spatial Audio is open to contributions ranging from the measurement and modeling of an acoustic space to the reproduction and perception of spatial audio. While individual submissions may focus on any of the sub-topics listed below, papers describing larger spatial audio signal processing systems will be considered as well.

We invite authors to address some of the following spatial audio aspects:

    • Capture of Spatial Sound, use of different microphone arrays to record 3D sound fields
    • Loudspeaker and Headphone Reproduction of Spatial Sound, including e.g. wave field synthesis, Ambisonics, arbitrary multi-channel loudspeaker setups, transaural and binaural systems, and personal audio systems
    • Spatial Sound Processing including e.g. downmixing, upmixing, spatial sound enhancement, and reverberation effects
    • Sound Source Localization and Room Geometry Estimation, advanced analysis of audio signals for reconstruction of the acoustic environment
    • Room Acoustics Modeling covering all different modeling techniques ranging from computationally heavy wave-based techniques and geometrical acoustics to lightweight perceptually-based models.

Prospective authors should visit http://www.signalprocessingsociety.org/publications/periodicals/jstsp/ for information on paper submission. Manuscripts should be submitted at http://mc.manuscriptcentral.com/jstsp-ieee.

Manuscript Submission: July 1, 2014
First Review Due: October 15, 2014
Revised Manuscript: December 1, 2014
Second Review Due: February 1, 2015
Final Manuscript: March 1, 2015

Guest Editors:
Lauri Savioja, Aalto University, Finland (Lauri.Savioja@aalto.fi)
Akio Ando, University of Toyama, Japan (andio@eng.u-toyama.ac.jp)
Ramani Duraiswami, University of Maryland, USA (ramani@umiacs.umd.edu)
Emanuël Habets, Int. Audio Laboratories Erlangen, Germany (emanuel.habets@audiolabs-erlangen.de)
Sascha Spors, Universität Rostock, Germany (sascha.spors@uni-rostock.de)


7-4 CFP: CSL Special Issue on Speech and Language for Interactive Robots

CFP: CSL Special Issue on Speech and Language for Interactive Robots

Aims and Scope

Speech-based communication with robots faces important challenges for application in real-world scenarios. In contrast to conventional interactive systems, a talking robot always needs to take its physical environment into account when communicating with users. This environment is typically unstructured, dynamic, and noisy, which raises important challenges. The objective of this special issue is to highlight research that applies speech and language processing to robots that interact with people through speech as the main modality of interaction. For example, a robot may need to communicate with users via distant speech recognition and understanding under constantly changing noise conditions. Alternatively, the robot may coordinate its verbal turn-taking behaviour with its non-verbal behaviour, such as generating speech and gestures at the same time. Speech and language technologies have the potential to equip robots to interact more naturally with humans, but their effectiveness remains to be demonstrated. This special issue aims to help fill this gap.

The topics listed below indicate the range of work that is relevant to this special issue, where each article will normally represent one or more topics. In case of doubt about the relevance of your topic, please contact the special issue associate editors.

Topics

  • sound source localization
  • voice activity detection
  • speech recognition and understanding
  • speech emotion recognition
  • speaker and language recognition
  • spoken dialogue management
  • turn-taking in spoken dialogue
  • spoken information retrieval
  • spoken language generation
  • affective speech synthesis
  • multimodal communication
  • evaluation of speech-based human-robot interactions

Special Issue Associate Editors

Heriberto Cuayáhuitl, Heriot-Watt University, UK (contact: hc213@hw.ac.uk)
Kazunori Komatani, Nagoya University, Japan
Gabriel Skantze, KTH Royal Institute of Technology, Sweden

Paper Submission

All manuscripts and any supplementary materials should be submitted through the Elsevier Editorial System at http://ees.elsevier.com/csl/. A detailed submission guideline is available as the “Guide to Authors” on the journal website. Please select “SI: SL4IR” as the Article Type when submitting manuscripts. For further details or a more in-depth discussion of topics or submission, please contact the Guest Editors.

Dates

23 May 2014: Submission of manuscripts
23 August 2014: Notification about decisions on initial submissions
23 October 2014: Submission of revised manuscripts
10 January 2015: Notification about decisions on revised manuscripts
01 March 2015: Submission of manuscripts with final minor changes
31 March 2015: Announcement of the special issue articles on the CSL website http://www.journals.elsevier.com/computer-speech-and-language/



7-6 CfP CSL Special Issue on Speech Production in Speech Technologies

Call for Papers: 

CSL Special Issue on Speech Production in Speech Technologies

The use of speech production knowledge and data to enhance speech recognition and other technologies is being actively pursued by a number of widely dispersed research groups using different approaches.  The types of speech production information may include continuous articulatory measurements or discrete-valued articulatory or phonological features.  These quantities might be directly measured, manually labeled, or unobserved but considered to be latent variables in a statistical model.  Applications of production-based ideas include improved speech recognition, silent speech interfaces, language training tools, and clinical models of speech disorders. 

The goal of this special issue is to highlight the current state of research efforts that use speech production data or knowledge.  The range of data, techniques, and applications currently being explored is growing, and is also benefiting from new ideas in machine learning, making this a particularly exciting time for this research. 

A recent workshop, the 2013 Interspeech satellite workshop on Speech Production in Automatic Speech Recognition (SPASR), as well as the special session on Articulatory Data Acquisition and Processing, brought together a number of researchers in this area.  This special issue will expand on the topics included in these events and beyond. 

Submissions focusing on research in this area are solicited. Topics of interest include, but are not limited to: 

- The collection, labeling, and use of speech production data 

- Acoustic-to-articulatory inversion 

- Speech production models in speech recognition, synthesis, voice conversion, and other technologies 

- Silent speech interfaces 

- Atypical speech production and pathology 

- Articulatory phonology and models of speech production 

  

Submission procedure

Prospective authors should follow the regular guidelines of the Computer Speech and Language Journal for electronic submission (http://ees.elsevier.com/csl). During submission authors must select 'SI: Speech Production in ST' as Article Type. 

 

Review procedure

All manuscripts will be submitted through the editorial submission system and will be reviewed by at least 3 experts.


Schedule:

June 1, 2014: Deadline for submissions 

August 1, 2014:  Notification of decision 

September 15, 2014: Deadline for resubmission 

November 1, 2014:  Final decision 

December 1, 2014: Deadline for camera-ready version 

February 2015:  Publication 

 
Guest Editors:


Jeff Bilmes, U. Washington, bilmes@uw.edu

Eric Fosler-Lussier, Ohio State U., fosler@cse.ohio-state.edu

Mark Hasegawa-Johnson, U. Illinois at Urbana-Champaign, jhasegaw@uiuc.edu

Karen Livescu, TTI-Chicago, klivescu@ttic.edu

Frank Rudzicz, U. Toronto, frank@cs.toronto.edu


 

7-7 Third Call for Papers - Special Issue of ACM Transactions on Accessible Computing (TACCESS) on Speech and Language Interaction for Daily Assistive Technology

Third Call for Papers - Special Issue of ACM Transactions on Accessible Computing (TACCESS) on

Speech and Language Interaction for Daily Assistive Technology

Guest Editors: François Portet, Frank Rudzicz, Jan Alexandersson, Heidi Christensen

Assistive technologies (AT) allow individuals with disabilities to do things that would otherwise be difficult or impossible. Many assistive technologies involve providing universal access, such as modifications to televisions or telephones to make them accessible to those with vision or hearing impairments. An important sub-discipline within this community is Augmentative and Alternative Communication (AAC), which focuses on communication technologies for those with impairments that interfere with some aspect of human communication, including spoken or written modalities. Another important sub-discipline is Ambient Assisted Living (AAL), which facilitates independent living; these technologies break down the barriers faced by people with physical or cognitive impairments and support their relatives and caregivers. These technologies are expected to improve users' quality of life and promote independence, accessibility, learning, and social connectivity.

Speech and natural language processing (NLP) can be used in AT/AAC in a variety of ways, including improving the intelligibility of unintelligible speech and providing communicative assistance for frail individuals or those with severe motor impairments. The range of applications and technologies in AAL that can rely on speech and NLP technologies is very large, and the number of individuals actively working within these research communities is growing, as evidenced by the successful INTERSPEECH 2013 satellite workshop on Speech and Language Processing for Assistive Technologies (SLPAT). In particular, one of the greatest challenges in AAL is to design smart spaces (e.g., at home, work, hospital) and intelligent companions that anticipate user needs, enable users to interact with and in their daily environment, and provide ways to communicate with others. This technology can benefit visually, physically, speech- or cognitively impaired persons.

Topics of interest for submission to this special issue include (but are not limited to):

  • Speech, natural language and multimodal interfaces designed for people with physical or cognitive impairments
  • Applications of speech and NLP technology (automatic speech recognition, synthesis, dialogue, natural language generation) for AT applications
  • Novel modeling and machine learning approaches for AT applications
  • Long-term adaptation of speech/NLP-based AT systems to changes in the user
  • User studies and overviews of speech/NLP technology for AT: understanding users' needs and future speech- and language-based technologies
  • Understanding, modeling and recognition of aged or disordered speech
  • Speech analysis and diagnosis: automatic recognition and detection of speech pathologies and speech capability loss
  • Speech-based distress recognition
  • Automated processing of symbol languages, sign language and nonverbal communication including translation systems.
  • Text and audio processing for improved comprehension and intelligibility, e.g., sentence simplification or text-to-speech
  • Evaluation methodology of systems and components in the lab and in the wild.
  • Resources; corpora and annotation schemes
  • Other topics in AAC, AAL, and AT

 

Submission process

Contributions must not have been previously published or be under consideration for publication elsewhere, although substantial extensions of conference or workshop papers will be considered, as long as they adhere to ACM's minimum standards regarding prior publication (http://www.acm.org/pubs/sim_submissions.html). Studies involving experiments with real target users will be appreciated. All submissions must be prepared according to the Guide for Authors published on the journal website at http://www.rit.edu/gccis/taccess/

Submissions should follow the journal's suggested writing format (http://www.gccis.rit.edu/taccess/authors.html) and should be submitted through Manuscript Central http://mc.manuscriptcentral.com/taccess , indicating that the paper is intended for the Special Issue. All papers will be subject to the peer review process and final decisions regarding publication will be based on this review.

Important dates:

  • Extended deadline for full paper submission: 28th April 2014
  • Response to authors: 30th June 2014
  • Revised submission deadline: 31st August 2014
  • Notification of acceptance: 31st October 2014
  • Final manuscripts due: 30th November 2014


7-8 Revue TAL: special issue on spoken language processing

First call for papers: special issue on spoken language processing for the journal TAL (Traitement Automatique des Langues)

Editors: Laurent Besacier, Wolfgang Minker

Deadline: 30 June 2014

Spoken communication remains the most natural way to converse and interact (with a machine or with another person). Spoken language processing (SLP) and dialogue now have many direct applications in diverse domains such as (non-exhaustive list) information retrieval, natural language interaction with mobile devices, social robotics, assistive technologies, language learning, etc. However, SLP poses specific problems linked to the very nature of the material being processed. The utterances to be handled consist of more or less spontaneous speech and contain numerous paralinguistic features. For example, the presence of speech disfluencies (repetitions, restarts, parenthetical insertions, etc.) reduces the syntactic regularity of utterances; spoken utterances are also rich in affect-related information, etc. Moreover, the automatic transcription step, often required before higher-level processing (understanding, translation, analysis, etc.), produces noisy output (containing errors), which calls for robust analyses and tight coupling between processing stages.
 

We therefore invite contributions on any aspect (theoretical, methodological, or practical) of spoken language processing and spoken communication, in particular (non-exclusive list):

  • Automatic speech recognition
  • Spoken language understanding
  • Speech translation
  • Speech synthesis
  • Spoken human-machine dialogue
  • Robust analysis of spoken language
  • Analysis of social affects or emotions in spoken utterances
  • Mining of spoken-content documents
  • Applications with a spoken component (information retrieval, interaction, robotics, etc.)
  • Tools for second-language learning
  • Multilingual aspects of spoken language processing
  • Evaluation of spoken language processing systems
  • Corpora and resources for speech
  • Spoken discourse analysis
  • Dialogue adaptive to context and user profile
  • Analysis of paralinguistic features in spoken utterances

 

GUEST EDITORS

Laurent Besacier
Wolfgang Minker

 

SCIENTIFIC COMMITTEE

A provisional (to be confirmed) list of members is available at: http://tal-55-2.sciencesconf.org/resource/page/id/2

 

LANGUAGE
Articles may be written in French or English. Submissions in English are accepted only from non-French-speaking authors.


SUBMISSION FORMAT

Papers must be submitted via the platform http://tal-55-2.sciencesconf.org/

The journal publishes only original contributions, in French or English.

Accepted papers will be at most 25 pages in PDF. The style sheet is available for download on the TAL journal website.

 

CONTACT

Laurent Besacier     (Laurent.Besacier@imag.fr)

Wolfgang Minker      (Wolfgang.Minker@uni-ulm.de)

 

7-9 Message of the editor of Speech Communication to the editorial board

Dear members of the editorial board of Speech Communication,

I would like to ask you for advice and help on identifying suitable topics and guest editors for future special issues of our journal. As you know, special issues serve several important functions, including the dissemination of key ideas in emerging research areas, and they usually receive much attention by members of our community. Overview/review papers introducing the special issue tend to be among the most widely cited papers.

Speech Communication has two special issues in 2014, one on 'Processing under-resourced languages' (published as vol. 56), and one on 'Gesture and Speech in Interaction' (published as vol. 57). Currently, there are no other special issues in the pipeline.

If you are aware of hot topics or emerging research areas that you think would be suitable for special issues, please let me know. Let us all keep our eyes and ears open at the upcoming major conferences (ICASSP, EUSIPCO, Interspeech, etc.) and their special sessions and satellite workshops. Please be proactive in talking to possible guest editors. Feel free to nominate yourself as a guest editor.

For your convenience, I append a list of recent special issues of Speech Communication.

Please send your immediate suggestions by April 22 to my email address moebius@coli.uni-saarland.de, but please consider it as a continuous action item on all of us too.


Regards,

Bernd Moebius, EiC
Speech Communication




