ISCApad #188
Sunday, February 09, 2014, by Chris Wellekens
7-1 | Multimedia Tools and Applications (Journal, Springer): Special Issue on 'Content Based Multimedia Indexing'
http://cbmi2013.mik.uni-pannon.hu/index.php/cfp
============================================================
Multimedia indexing systems aim at providing easy, fast and accurate access to large multimedia repositories. Research in Content-Based Multimedia Indexing covers a wide spectrum of topics in content analysis, content description, content adaptation and content retrieval. Various tools and techniques from different fields such as Data Indexing, Machine Learning, Pattern Recognition, and Human Computer Interaction have contributed to the success of multimedia systems. Although there has been significant progress in the field, we still face situations in which systems show limits in accuracy, generality and scalability. Hence, the goal of this special issue is to bring forward recent advancements in content-based multimedia indexing.
Topics of Interest
==================
Topics of interest for the Special Issue include, but are not limited to:
- Audio content extraction
- Audio indexing (audio, speech, music)
- Content-based search
- Identification and tracking of semantic regions
- Identification of semantic events
- Large scale multimedia database management
- Matching and similarity search
- Metadata generation, coding and transformation, multi-modal fusion
- Multimedia data mining
- Multimedia interfaces, presentation and visualization tools
- Multimedia recommendation
- Multimedia retrieval (image, audio, video, ...)
- Multi-modal and cross-modal indexing
- Personalization and content adaptation
- Summarization, browsing and organization of multimedia content
- User interaction and relevance feedback
- Visual content extraction
- Visual indexing (image, video, graphics)
Submission Details
==================
All papers should be full journal-length versions and follow the guidelines set out by Multimedia Tools and Applications: http://www.springer.com/computer/information+systems/journal/11042. Manuscripts should be submitted online at https://www.editorialmanager.com/mtap/, choosing 'Content Based Multimedia Indexing' as article type, no later than September 1st, 2013. When uploading your paper, please ensure that your manuscript is marked as being for this special issue. Information about the manuscript (title, full list of authors, corresponding author's contact, abstract, and keywords) should also be sent to the corresponding editor Klaus Schoeffmann (ks@itec.uni-klu.ac.at). All papers will be peer-reviewed following the MTAP reviewing procedures.
Important Dates
===============
Manuscript due: September 22nd, 2013 (extended)
Notification: October 22nd, 2013
Publication date: First quarter 2014
Guest Editors
=============
Klaus Schoeffmann, Klagenfurt University, Klagenfurt, Austria, ks@itec.uni-klu.ac.at
Tamás Szirányi, MTA SZTAKI, Budapest, Hungary, sziranyi@sztaki.hu
Jenny Benois-Pineau, University of Bordeaux 1, LABRI UMR 5800 Universities-Bordeaux-CNRS, France, Jenny.benois@labri.fr
Bernard Merialdo, EURECOM, Nice - Sophia Antipolis, France, Bernard.Merialdo@eurecom.fr
7-2 | EURASIP Journal: Special Issue on Atypical Speech & Voices: Corpora, Classification, Coaching & Conversion
EURASIP Journal on Audio, Speech, and Music Processing
Special Issue on Atypical Speech & Voices: Corpora, Classification, Coaching & Conversion
With speech processing technology becoming more and more present in our everyday lives, it has become increasingly important to include all types of voices, speaking situations, and styles from all parts of our society, i.e., to move beyond the 'typical'.
Examples of such less typical patterns may include speaking while eating, during physical exercise, or singing, as well as a wide range of pathological effects or speech produced by specific age groups (children, the elderly).
In fact, recent advances in the field of Computational Paralinguistics allow for automatic recognition, analysis, and synthesis of an ever-increasing range of 'atypical' phenomena. At the same time, deeper analysis methods have opened doors to new assistive technologies, such as coaching systems, serious games, and tutoring systems, as well as diagnostic aids (e.g., for early detection of autism spectrum disorders, Alzheimer's or Parkinson's diseases). Tutoring systems, for example, have opened up new opportunities for voice professionals, such as public speakers, singers, and teachers, providing them with feedback on prosodic aspects, vibrato parameters, 'presence' or quality. Further, methods of speech/voice enhancement and conversion have enabled improvements in intelligibility of spoken content, as well as socio-emotional communication skills of e.g., speakers on the autism spectrum.
In this light and given the steadily growing research activities and their importance, we openly invite papers describing various aspects of analysis and synthesis of atypical speech and voices as well as their successful applications.
Submissions must not have been previously published and must have specific connection to audio, speech, and music processing.
The topics of particular interest will include, but are not limited to:
- Automatic Recognition of Atypical Speech & Voice Patterns
- Analysis of Atypical Speech, Singing & Voices
- Robustness in Automatic Speech Recognition against Atypical Phenomena
- Synthesis of Atypical Speech, Singing & Voices
- Enhancement and Conversion for Intelligibility Improvement of Atypical Speech & Voices
- Resources of Atypical Speech, Singing or Voice Patterns
- Multimodal Integration for Atypical Speech & Voice Processing (e.g., videolaryngoscopy, videokymography, fMRI, etc.)
- Tutoring Systems for Atypical Speech & Voice
- Serious Gaming Approaches in Atypical Speech & Voices
- Relationship Between Atypical Speech & Voices and Neurological Conditions
Submission Instructions:
Before submission authors should carefully read over the Instructions for Authors, which are located at asmp.eurasipjournals.com/authors/instructions. Prospective authors should submit an electronic copy of their complete manuscript through the SpringerOpen submission system at asmp.eurasipjournals.com/manuscript according to the submission schedule. They should choose the correct Special Issue in the 'sections' box upon submitting. In addition, they should specify the manuscript as a submission to the 'Special Issue on Atypical Speech & Voices' in the cover letter. All submissions will undergo initial screening by the Guest Editors for fit to the theme of the Special Issue and prospects for successfully negotiating the review process.
Guest Editors
Björn W. Schuller, Imperial College London, London, U.K. & TUM, Munich, Germany, Email > bjoern.schuller@imperial.ac.uk
Tiago H. Falk, INRS-EMT, Montreal, Canada, Email > falk@inrs.emt.ca
Vijay Parsa, University of Western Ontario, London, Canada, Email > parsa@nca.uwo.ca
Elmar Nöth, FAU Erlangen-Nuremberg, Germany & King Abdulaziz University, Jeddah, Saudi Arabia, Email > noeth@cs.fau.de
7-3 | EURASIP Journal: Special Issue on Models of Speech - In Search of Better Representations. Manuscript due: Nov. 1, 2013
Journal: EURASIP Journal on Audio, Speech, and Music Processing Description:
This special issue originates from a special session at the international conference Interspeech,
which was held in September 2010 in Chiba, Japan. It will publish some key contributions
presented at the conference describing different aspects of models of speech, from the analysis
or representation point of view. Topics of interest include:
* Hidden Markov Models
* Kernel Methods
* Deep Neural Networks
* Linear Predictive Analysis
Lead Guest Editor:
* Hansjörg Mixdorff, Beuth University, Berlin, Germany
Guest Editor:
* Hideki Kawahara, Wakayama University, Wakayama, Japan
For more information about this special issue,
please visit: http://si.eurasip.org/issues/14/models-of-speech-in-search-of-better/
7-4 | EURASIP Journal: CfP Special Issue on Atypical Speech & Voices: Corpora, Classification, Coaching & Conversion
Manuscript due: Feb. 1, 2014
Journal: EURASIP Journal on Audio, Speech, and Music Processing
Description:
With speech processing technology becoming more and more present in our everyday lives, it has become increasingly important to include all types of voices, speaking situations, and styles from all parts of our society, i.e., to move beyond the 'typical'. Examples of such less typical patterns may include speaking while eating, during physical exercise, or singing, as well as a wide range of pathological effects or speech produced by specific age groups (children, the elderly). In fact, recent advances in the field of Computational Paralinguistics allow for automatic recognition, analysis, and synthesis of an ever-increasing range of 'atypical' phenomena. At the same time, deeper analysis methods have opened doors to new assistive technologies, such as coaching systems, serious games, and tutoring systems, as well as diagnostic aids (e.g., for early detection of autism spectrum disorders, Alzheimer's or Parkinson's diseases). Tutoring systems, for example, have opened up new opportunities for voice professionals, such as public speakers, singers, and teachers, providing them with feedback on prosodic aspects, vibrato parameters, 'presence' or quality. Further, methods of speech/voice enhancement and conversion have enabled improvements in intelligibility of spoken content, as well as socio-emotional communication skills of, e.g., speakers on the autism spectrum. In this light and given the steadily growing research activities and their importance, we openly invite papers describing various aspects of analysis and synthesis of atypical speech and voices as well as their successful applications. Submissions must not have been previously published and must have specific connection to audio, speech, and music processing.
Topics of interest include:
* Automatic Recognition of Atypical Speech & Voice Patterns
* Analysis of Atypical Speech, Singing & Voices
* Robustness in Automatic Speech Recognition against Atypical Phenomena
* Synthesis of Atypical Speech, Singing & Voices
* Enhancement and Conversion for Intelligibility Improvement of Atypical Speech & Voices
* Resources of Atypical Speech, Singing or Voice Patterns
* Multimodal Integration for Atypical Speech & Voice Processing (e.g., videolaryngoscopy, videokymography, fMRI, etc.)
* Tutoring Systems for Atypical Speech & Voice
* Serious Gaming Approaches in Atypical Speech & Voices
* Relationship Between Atypical Speech & Voices and Neurological Conditions
Lead Guest Editor:
* Björn W. Schuller, Imperial College London, London, U.K. & TUM, Munich, Germany
Guest Editors:
* Tiago H. Falk, INRS-EMT, Montreal, Canada
* Vijay Parsa, University of Western Ontario, London, Canada
* Elmar Nöth, FAU Erlangen-Nuremberg, Germany & King Abdulaziz University, Jeddah, Saudi Arabia
For more information about this special issue, please visit: http://si.eurasip.org/issues/16/atypical-speech-voices-corpora-classification/
7-5 | CfP: Special Issue of ACM Transactions on Accessible Computing (TACCESS) on Speech and Language Interaction for Daily Assistive Technology
Guest Editors: François Portet, Frank Rudzicz, Jan Alexandersson, Heidi Christensen
Assistive technologies (AT) allow individuals with disabilities to do things that would otherwise be difficult or impossible. Many assistive technologies involve providing universal access, such as modifications to televisions or telephones to make them accessible to those with vision or hearing impairments. An important sub-discipline within this community is Augmentative and Alternative Communication (AAC), which focuses on communication technologies for those with impairments that interfere with some aspect of human communication, including spoken or written modalities. Another important sub-discipline is Ambient Assisted Living (AAL), which facilitates independent living; these technologies break down the barriers faced by people with physical or cognitive impairments and support their relatives and caregivers. These technologies are expected to improve users' quality of life and to promote independence, accessibility, learning, and social connectivity.
Speech and natural language processing (NLP) can be used in AT/AAC in a variety of ways, including improving the intelligibility of unintelligible speech and providing communicative assistance for frail individuals or those with severe motor impairments. The range of applications and technologies in AAL that can rely on speech and NLP technologies is very large, and the number of individuals actively working within these research communities is growing, as evidenced by the successful INTERSPEECH 2013 satellite workshop on Speech and Language Processing for Assistive Technologies (SLPAT). In particular, one of the greatest challenges in AAL is to design smart spaces (e.g., at home, work, hospital) and intelligent companions that anticipate user needs, enable users to interact with and in their daily environment, and provide ways to communicate with others. This technology can benefit visually-, physically-, speech- or cognitively-impaired persons.
Topics of interest for submission to this special issue include (but are not limited to):
Submission process
Contributions must not have been previously published or be under consideration for publication elsewhere, although substantial extensions of conference or workshop papers will be considered, as long as they adhere to ACM's minimum standards regarding prior publication (http://www.acm.org/pubs/sim_submissions.html). Studies involving experimentation with real target users will be appreciated. All submissions have to be prepared according to the Guide for Authors as published on the journal website at http://www.rit.edu/gccis/taccess/. Submissions should follow the journal's suggested writing format (http://www.gccis.rit.edu/taccess/authors.html) and should be submitted through Manuscript Central (http://mc.manuscriptcentral.com/taccess), indicating that the paper is intended for the Special Issue. All papers will be subject to the peer-review process, and final decisions regarding publication will be based on this review.
Important dates:
◦ Full paper submission: 31st March 2014
◦ Response to authors: 30th June 2014
◦ Revised submission deadline: 31st August 2014
◦ Notification of acceptance: 31st October 2014
◦ Final manuscripts due: 30th November 2014
7-6 | IEEE SIGNAL PROCESSING MAGAZINE: Special Issue on 'Signal Processing Techniques for Assisted Listening'
IEEE Signal Processing Society
Aims and Scope
This special issue focuses on technical challenges of assisted listening from a signal processing perspective. Prospective authors are invited to contribute tutorial and survey articles that articulate signal processing methodologies which are critical for applying assisted listening techniques to mobile phones and other communication devices. Of particular interest is the role of signal processing in combining multimedia content, voice communication and voice pick-up in various real-world settings. Tutorial and survey papers are solicited on advances in signal processing that particularly apply to the following applications:
These can include the following suggested topics as they relate to the above applications:
Submission Process
Important Dates: Expected publication date for this special issue is March 2015.
Guest Editors
7-7 | CfP: Special issue of TIPA on PROMINENCES and SPOKEN LANGUAGE
CALL FOR PAPERS
TIPA: Travaux interdisciplinaires sur la parole et le langage
The 30th issue of the TIPA journal is to appear in December 2014, on the following topic:
PROMINENCES and SPOKEN LANGUAGE
Invited editor: Sophie Herment, Laboratoire Parole et Langage, Aix-Marseille Université
TIPA (Travaux interdisciplinaires sur la parole et le langage) publishes interdisciplinary works on speech and language. Articles dealing with prominences and spoken language are welcome. We invite submissions from various backgrounds and linguistic fields: prosody, phonetics, phonology, syntax, morphology, pragmatics, dialectology, diachrony. The term prominence encompasses many different meanings, which will hopefully be dealt with in the various contributions. In spoken language, prominences can include emphasis, focalisation, pitch accents, and metric entities such as strong syllables or strong feet. Articles dealing with interfaces will also be of great interest: how are syntactic prominences realized in spoken language? How is information structure related to prominence? Are certain morphological constituents more prominent than others? Papers building on different perspectives will be considered: corpus-based approaches, theoretical linguistics, psycholinguistics, neurolinguistics, automatic language processing, sociolinguistics, language teaching... These aspects are by no means exhaustive, and all related issues or approaches that can shed light on the topic will be considered.
The language of publication will be either English or French. Each article should contain a detailed two-page abstract in the other language, in order to make papers in French more accessible to English-speaking readers, and vice versa, thus ensuring a larger audience for all the articles.
Important dates:
February 29: deadline for abstract submission
March 30: notification of acceptance
June 30: deadline for submission of articles
December: publication
Submission guidelines:
Please send your proposal in two files to tipa@lpl-aix.fr:
- one in .doc format containing the title, name and affiliation of the author(s);
- the other anonymous, in .pdf format: it should not be longer than one A4 page (Times, 12 pt) and contain the title, half a page introducing the subject of the research and the theoretical/methodological framework, and half a page accounting for the main results. This one-page abstract can be followed by a short bibliography (5 or 6 titles; the author(s) of the proposal should not appear more than twice).
Instructions for authors can be found at http://www.lpl.univ-aix.fr/index.php?id=27
7-8 | Special Issue on Natural Language Processing 2014 (International Journal of Advanced Computer Science and Applications)
Call for papers: Special Issue on Natural Language Processing 2014
INTERNATIONAL JOURNAL OF ADVANCED COMPUTER SCIENCE AND APPLICATIONS
ISSN: 2156-5570 (Online), ISSN: 2158-107X (Print)
DOI: 10.14569/issn.2156-5570
http://thesai.org/Publications/NLP2014
Scope of this Special Issue
This special issue of IJACSA aims to bring together articles that report advances in Natural Language Processing, covering both experimental work and practical applications, related to the exploitation and distillation of textual material for information access and knowledge creation. We therefore call for contributions in the areas of Automatic Text Summarization, Adaptable Information Extraction and Knowledge Population, Knowledge Induction from Text, Text Simplification, Text Entailment and Learning by Reading, and Natural Language Processing for Social Media.
Paper submission
Manuscripts should be submitted via email to editorijacsa@thesai.org. The subject line should be: 'NLP_SpecialIssue: Paper Title'. Research articles, review articles, and communications are invited. Authors may submit their manuscripts as per the following schedule:
This issue will be submitted for indexing in various databases; for more information, please visit http://thesai.org/Publications/Citations?code=IJACSA. For more information on the special issue, please visit http://thesai.org/Publications/NLP2014 or contact the Editor at editorijacsa@thesai.org.
Guest Editor: Dr. T. V. Prasad
7-9 | IEEE SIGNAL PROCESSING MAGAZINE: Special Issue on 'Signal Processing Techniques for Assisted Listening'
IEEE Signal Processing Society
Aims and Scope
This special issue focuses on technical challenges of assisted listening from a signal processing perspective. Prospective authors are invited to contribute tutorial and survey articles that articulate signal processing methodologies which are critical for applying assisted listening techniques to mobile phones and other communication devices. Of particular interest is the role of signal processing in combining multimedia content, voice communication and voice pick-up in various real-world settings. Tutorial and survey papers are solicited on advances in signal processing that particularly apply to the following applications:
These can include the following suggested topics as they relate to the above applications:
Submission Process
Important Dates: Expected publication date for this special issue is March 2015.
Guest Editors
7-10 | IEEE Journal of Selected Topics in Signal Processing: Special Issue on Spatial Audio
Spatial audio is an area that has gained in popularity in recent years. Audio reproduction setups have evolved from traditional two-channel loudspeaker setups towards multi-channel loudspeaker setups. Advances in acoustic signal processing have even made it possible to create a surround-sound listening experience using traditional stereo speakers and headphones. Finally, there has been increased interest in creating different sound zones in the same acoustic space (also referred to as personal audio). At the same time, the computational capacity provided by mobile audio playback devices has increased significantly. These developments enable new possibilities for advanced audio signal processing, such that in the future we can record, transmit and reproduce spatial audio in ways that have not been possible before. In addition, there have been fundamental advances in our understanding of 3D audio.
Due to the increasing number of different formats and reproduction systems for spatial audio, ranging from headphones to 22.2 speaker systems, it is a major challenge to ensure interoperability between formats and systems, and consistent delivery of high-quality spatial audio. Therefore, the MPEG committee is in the process of establishing new standards for 3D Audio Content Delivery.
The scope of this Special Issue on Spatial Audio is open to contributions ranging from the measurement and modeling of an acoustic space to the reproduction and perception of spatial audio. While individual submissions may focus on any of the sub-topics listed below, papers describing larger spatial audio signal processing systems will be considered as well. We invite authors to address some of the following spatial audio aspects:
Prospective authors should visit http://www.signalprocessingsociety.org/publications/periodicals/jstsp/ for information on paper submission. Manuscripts should be submitted at http://mc.manuscriptcentral.com/jstsp-ieee.
Manuscript Submission: July 1, 2014
Guest Editors:
7-11 | CFP: CSL Special Issue on Speech and Language for Interactive Robots
Aims and Scope
Speech-based communication with robots faces important challenges for application in real-world scenarios. In contrast to conventional interactive systems, a talking robot always needs to take its physical environment into account when communicating with users. This environment is typically unstructured, dynamic and noisy, and raises important challenges. The objective of this special issue is to highlight research that applies speech and language processing to robots that interact with people through speech as the main modality of interaction. For example, a robot may need to communicate with users via distant speech recognition and understanding with constantly changing degrees of noise. Alternatively, the robot may coordinate its verbal turn-taking behaviour with its non-verbal behaviour, such as generating speech and gestures at the same time. Speech and language technologies have the potential to equip robots so that they can interact more naturally with humans, but their effectiveness remains to be demonstrated. This special issue aims to help fill this gap. The topics listed below indicate the range of work that is relevant to this special issue, where each article will normally represent one or more topics. In case of doubt about the relevance of your topic, please contact the special issue associate editors.
Topics
Special Issue Associate Editors
Heriberto Cuayáhuitl, Heriot-Watt University, UK (contact: hc213@hw.ac.uk)
Paper Submission
All manuscripts and any supplementary materials should be submitted through the Elsevier Editorial System at http://ees.elsevier.com/csl/. A detailed submission guideline is available in the journal's 'Guide to Authors'. Please select 'SI: SL4IR' as Article Type when submitting manuscripts. For further details or a more in-depth discussion about topics or submission, please contact the Guest Editors.
Dates
23 May 2014: Submission of manuscripts