ISCApad #172
Sunday, October 07, 2012, by Chris Wellekens
3-1-1 | (2013-01-21) Multimodality and Multilingualism: New Challenges for the Study of Oral Communication (MaMChOC), Venezia, Italy, AISV
3-1-2 | (2013-06-19) CfP: ISCA Workshop on Non-Linear Speech Processing (NOLISP 2013), June 19-21, 2013, University of Mons, Belgium, http://www.tcts.fpms.ac.be/nolisp2013/
INTRODUCTION
Dear Colleagues,
We are pleased to cordially invite you to participate in the upcoming ISCA Workshop on Non-Linear Speech Processing (NOLISP 2013), which will be held at the University of Mons, Belgium, on June 19-21, 2013. The workshop will cover all topics in digital speech processing and its applications, with an emphasis on non-linear techniques. In the spirit of the ISCA workshops, NOLISP 2013 will focus on research and results, provide information on tools, and welcome prototype demonstrations of potential future applications.
We are glad to inform you that the proceedings of the workshop will be published in the Springer Lecture Notes in Artificial Intelligence, and that the workshop will be followed by a special issue in the Computer Speech and Language journal.
We are also happy to welcome four very interesting invited speakers who will give a talk in their expertise area: Prof. Steve Renals, Prof. Christophe d’Alessandro, Prof. Björn Schuller, and Prof. Yannis Stylianou.
The workshop is being organized by Dr. Thomas Drugman and Prof. Thierry Dutoit, from the University of Mons.
We look forward to your participation!
WORKSHOP THEMES
The Non-Linear Speech Processing (NOLISP) workshop is a biennial international workshop aimed at presenting and discussing new ideas, techniques and results related to alternative approaches in speech processing. Contributions on new and innovative approaches and their applications are welcome.
Contributions are expected in (though not restricted to) the following domains:
• Non-Linear Approximation and Estimation
• Non-Linear Oscillators and Predictors
• Higher-Order Statistics
• Independent Component Analysis
• Nearest Neighbours
• Neural Networks
• Decision Trees
• Non-Parametric Models
• Dynamics of Non-Linear Systems
• Fractal Methods
• Chaos Modeling
• Non-Linear Differential Equations
All fields of speech processing are targeted by the workshop, namely:
• Speech Production
• Speech Analysis and Modeling
• Speech Coding
• Speech Synthesis
• Speech Recognition
• Speaker Identification / Verification
• Speech Enhancement / Separation
• Speech Perception
• Others
IMPORTANT DATES
Submission of Regular Papers: February 22, 2013
Notification of Paper Acceptance: March 29, 2013
Revised Paper Upload Deadline: April 10, 2013
Early Registration Deadline: May 3, 2013
Registration Deadline: May 31, 2013
INVITED SPEAKERS
The NOLISP 2013 workshop is glad to welcome the following four invited speakers, each an expert in their domain:
• Prof. Steve Renals, Centre for Speech Technology Research, University of Edinburgh, UK: Automatic Speech Recognition
• Prof. Christophe d’Alessandro, LIMSI-CNRS, Paris, France: Speech Synthesis and Voice Quality
• Prof. Björn Schuller, TUM, Munich, Germany: Emotive Speech Processing
• Prof. Yannis Stylianou, University of Crete, Heraklion, Greece: Speech Processing and Medical Applications
PROCEEDINGS AND SPECIAL ISSUE
NOLISP 2013 proceedings will be published in the Lecture Notes in Artificial Intelligence (LNCS/LNAI, Springer). We are also happy to let you know that the workshop will be followed by a special issue in the Computer Speech and Language journal.
3-1-3 | (2013-08-23) INTERSPEECH 2013, Lyon, France, 25-29 August 2013. General Chair: Frédéric Bimbot
3-1-4 | (2013-08-25) Call for Satellite Workshops during INTERSPEECH 2013
3-1-5 | (2014-09-07) INTERSPEECH 2014 Singapore
3-1-6 | (2015) INTERSPEECH 2015, Dresden, Germany. Conference Chair: Sebastian Möller, Technische Universität Berlin
3-2-1 | (2012-11-28) International Workshop on Spoken Dialog Systems (IWSDS 2012): Towards a Natural Interaction with Robots, Knowbots and Smartphones. Paris, France
3-2-2 | (2012-12-05) The 8th International Symposium on Chinese Spoken Language Processing, Hong Kong, 5-8 December 2012
The 8th International Symposium on Chinese Spoken Language Processing will be held in
Hong Kong on 5-8 December 2012. ISCSLP is a biennial conference for scientists, researchers, and practitioners to report and
discuss the latest progress in all theoretical and technological aspects of spoken language
processing. ISCSLP is also the flagship conference of ISCA Special Interest Group on Chinese
Spoken Language Processing (SIG-CSLP). While ISCSLP focuses primarily on Chinese
languages, work on other languages whose methods may be applied to Chinese speech and language
is also encouraged. The working language of ISCSLP is English. Details can be found on the conference website http://www.iscslp2012.org/
3-3-1 | (2012-10-01) Human Activity and Vision Summer School, INRIA, Sophia Antipolis, France
Human Activity and Vision Summer School - Monday 1st to Friday 5th of October 2012 - INRIA, Sophia-Antipolis/Nice on the French Riviera - website: http://www.multitel.be/events/human-activity-and-vision-summer-school
== Overview
The Human Activity and Vision Summer School will address the broad domains of human activity modeling and human behavior recognition, with an emphasis on vision sensors as the capturing modality. Courses will comprise both tutorials and presentations of state-of-the-art methods by active researchers in the field. The goal of the courses is to cover most of the human activity analysis chain, starting from the low-level processing of video and audio for detection and feature extraction, through the medium level (tracking and behavior cue extraction), to higher-level modeling and recognition using both supervised and unsupervised techniques. Applications of the different methods to action and activity recognition in domains ranging from Activities of Daily Living to surveillance (individual behavior recognition, crowd monitoring) will be considered. Presentations of real use cases, market needs, and current bottlenecks in the surveillance domain will also be addressed, with one half day devoted to presentations and panel discussions with professional and industrial presenters. See the list of topics and speakers below.
== Audience
The summer school is open to young researchers (in particular master or Ph.D. students) and researchers from both academia and industry working or interested in the human activity analysis domain or connected fields such as surveillance.
== Application/Registration
The registration fee is 300 Euros. This includes all the courses, coffee breaks and lunches. The fee does not include accommodation or dinners. A limited number of low-cost accommodations for students are available.
To apply for a position at the Summer School and find more practical information, please go to: http://www.multitel.be/events/human-activity-and-vision-summer-school
== List of topics and confirmed speakers
* Object detection and tracking
- Francois Fleuret (Idiap Research Institute)
- Alberto del Bimbo and Federico Pernici (Università di Firenze)
- Cyril Carincotte (Multitel)
- Jean-Marc Odobez (Idiap Research Institute)
* Crowd analysis and simulation
- Mubarak Shah (University of Central Florida)
- Paola Goatin (INRIA)
- Cyril Carincotte (Multitel)
* Action and behavior recognition
- Ivan Laptev (INRIA)
- Ben Krose (University of Amsterdam)
- Francois Bremond (INRIA)
* Social behavior analysis
- Elisabeth Oberzaucher (University of Vienna)
- Hayley Hung (University of Amsterdam)
* Unsupervised activity discovery and active learning
- Tao Xiang (Queen Mary University of London)
- Jean-Marc Odobez and Remi Emonet (Idiap)
* Body and head pose estimation
- Cheng Chen (Idiap Research Institute)
- Guillaume Charpiat (INRIA)
* Audio processing
- Maurizio Omologo (Fondazione Bruno Kessler)
- Bertrand Ravera (Thales Communication France)
Contact: Jean-Marc Odobez, Idiap Senior Researcher, EPFL Maitre d'Enseignement et de Recherche (MER), Idiap Research Institute (http://www.idiap.ch), Tel: +41 (0)27 721 77 26, Web: http://www.idiap.ch/~odobez
3-3-2 | (2012-10-22) CfP, participation and papers: 2nd International Audio/Visual Emotion Challenge and Workshop (AVEC 2012)
2nd International Audio/Visual Emotion Challenge and Workshop (AVEC 2012), in conjunction with ACM ICMI 2012, October 22, Santa Monica, California, USA
http://sspnet.eu/avec2012/
http://www.acm.org/icmi/2012/
Register and download data and features: http://avec-db.sspnet.eu/accounts/register/
_____________________________________________________________
Scope
The Audio/Visual Emotion Challenge and Workshop (AVEC 2012) will be the second competition event aimed at comparing multimedia processing and machine learning methods for automatic audio, visual and audiovisual emotion analysis, with all participants competing under strictly the same conditions. The goal of the Challenge is to provide a common benchmark test set for individual multimodal information processing; to bring together the audio and video emotion recognition communities; to compare the relative merits of the two approaches to emotion recognition under well-defined and strictly comparable conditions; and to establish to what extent fusion of the approaches is possible and beneficial. A second motivation is the need to advance emotion recognition systems so that they can deal with naturalistic behavior in large volumes of un-segmented, non-prototypical and non-preselected data, as this is exactly the type of data that both multimedia retrieval and human-machine/human-robot communication interfaces have to face in the real world.
We are calling for teams to participate in emotion recognition from acoustic audio analysis, linguistic audio analysis, video analysis, or any combination of these. The SEMAINE database of naturalistic video and audio of human-agent interactions, along with labels for four affect dimensions, will be used as the benchmarking database.
Emotion will have to be recognized in terms of continuous-time, continuous-valued dimensional affect in the dimensions arousal, expectation, power and valence. Two Sub-Challenges are addressed: the Word-Level Sub-Challenge requires participants to predict the level of affect at the word level, and only when the user is speaking; the Fully Continuous Sub-Challenge involves fully continuous affect recognition, where the level of affect has to be predicted for every moment of the recording.
Besides participation in the Challenge, we are calling for papers addressing the overall topics of this workshop, in particular works that address the differences between audio and video processing of emotive data, and the issues concerning combined audio-visual emotion recognition. Topics include, but are not limited to:
Audio/Visual Emotion Recognition:
. Audio-based Emotion Recognition
. Linguistics-based Emotion Recognition
. Video-based Emotion Recognition
. Social Signals in Emotion Recognition
. Multi-task Learning of Multiple Dimensions
. Novel Fusion Techniques as by Prediction
. Cross-corpus Feature Relevance
. Agglomeration of Learning Data
. Semi- and Unsupervised Learning
. Synthesized Training Material
. Context in Audio/Visual Emotion Recognition
. Multiple Rater Ambiguity
Application:
. Multimedia Coding and Retrieval
. Usability of Audio/Visual Emotion Recognition
. Real-time Issues
Important Dates
___________________________________________
Paper submission: July 31, 2012
Notification of acceptance: August 14, 2012
Camera-ready paper and final challenge result submission: August 18, 2012
Workshop: October 22, 2012
Organisers
___________________________________________
Björn Schuller (Tech. Univ. Munich, Germany)
Michel Valstar (University of Nottingham, UK)
Roddy Cowie (Queen's University Belfast, UK)
Maja Pantic (Imperial College London, UK)
Program Committee
___________________________________________
Elisabeth André, Universität Augsburg, Germany
Anton Batliner, Universität Erlangen-Nuremberg, Germany
Felix Burkhardt, Deutsche Telekom, Germany
Rama Chellappa, University of Maryland, USA
Fang Chen, NICTA, Australia
Mohamed Chetouani, Institut des Systèmes Intelligents et de Robotique (ISIR), France
Laurence Devillers, Laboratoire d'Informatique pour la Mécanique et les Sciences de l'Ingénieur (LIMSI), France
Julien Epps, University of New South Wales, Australia
Anna Esposito, International Institute for Advanced Scientific Studies, Italy
Raul Fernandez, IBM, USA
Roland Göcke, Australian National University, Australia
Hatice Gunes, Queen Mary University London, UK
Julia Hirschberg, Columbia University, USA
Aleix Martinez, Ohio State University, USA
Marc Méhu, University of Geneva, Switzerland
Marcello Mortillaro, University of Geneva, Switzerland
Matti Pietikainen, University of Oulu, Finland
Ioannis Pitas, University of Thessaloniki, Greece
Peter Robinson, University of Cambridge, UK
Stefan Steidl, Universität Erlangen-Nuremberg, Germany
Jianhua Tao, Chinese Academy of Sciences, China
Fernando de la Torre, Carnegie Mellon University, USA
Mohan Trivedi, University of California San Diego, USA
Matthew Turk, University of California Santa Barbara, USA
Alessandro Vinciarelli, University of Glasgow, UK
Stefanos Zafeiriou, Imperial College London, UK
Please regularly visit our website http://sspnet.eu/avec2012 for more information.
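The Sub-Challenges above ask systems to predict continuous-valued affect traces for four dimensions. The announcement does not reproduce the official scoring script here; a common metric for this kind of task is the Pearson correlation between the predicted and the gold trace, computed per dimension. The sketch below is illustrative only; the dimension names follow the challenge description, and the traces are made-up toy values.

```python
def pearson(pred, gold):
    """Pearson correlation coefficient between two equal-length traces."""
    n = len(pred)
    mean_p = sum(pred) / n
    mean_g = sum(gold) / n
    cov = sum((p - mean_p) * (g - mean_g) for p, g in zip(pred, gold))
    sd_p = sum((p - mean_p) ** 2 for p in pred) ** 0.5
    sd_g = sum((g - mean_g) ** 2 for g in gold) ** 0.5
    return cov / (sd_p * sd_g)

# Toy gold traces for the four AVEC affect dimensions (hypothetical values).
gold = {"arousal":     [0.1, 0.4, 0.5, 0.3],
        "expectation": [0.2, 0.2, 0.6, 0.8],
        "power":       [0.9, 0.7, 0.4, 0.2],
        "valence":     [0.0, 0.1, 0.3, 0.2]}

# A prediction that is a constant offset of the gold trace correlates perfectly.
pred = {dim: [v + 0.05 for v in trace] for dim, trace in gold.items()}
scores = {dim: pearson(pred[dim], gold[dim]) for dim in gold}
```

Note that Pearson correlation is invariant to scale and offset of the prediction, which is one reason continuous-affect evaluations often report it alongside an error measure.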
3-3-3 | (2012-10-26) CfP Interdisciplinary Workshop on Laughter and other Non-Verbal Vocalisations in Speech, Dublin Ireland Call for Papers for the Interdisciplinary Workshop on Laughter and other Non-Verbal Vocalisations in Speech
3-3-4 | (2012-10-26) ICMI-2012 Workshop on Speech and Gesture Production in Virtually and Physically Embodied Conversational Agents, Santa Monica, CA, USA
CONFERENCE: 14th ACM International Conference on Multimodal Interaction (ICMI-2012)
LOCATION: Santa Monica, California, USA
IMPORTANT DATES:
* Submission deadline: Monday, June 4, 2012
* Notification: Monday, July 30, 2012
* Camera-ready deadline: Monday, September 10, 2012
* Workshop: Friday, October 26, 2012
DESCRIPTION:
This full-day workshop aims to bring together researchers from the embodied conversational agent (ECA) and sociable robotics communities to spark discussion and collaboration between the related fields. The focus of the workshop will be on co-verbal behavior production, specifically synchronized speech and gesture, for both virtually and physically embodied platforms. It will examine aspects of both the planning and the realization of multimodal behavior production. Topics discussed will highlight common and distinguishing factors of their implementations within each respective field. The workshop will feature a panel discussion with experts from the relevant communities, and a breakout session encouraging participants to identify design and implementation principles common to both virtually and physically embodied sociable agents.
TOPICS:
Under the focus of speech-gesture-based multimodal human-agent interaction, the workshop invites submissions describing original work, either completed or still in progress, related to one or more of the following topics:
* Computational approaches to:
- Content and behavior planning, e.g., rule-based or probabilistic models
- Behavior realization for virtual agents or sociable robots
* From ECAs to physical robots: potential and challenges of cross-platform approaches
* Behavior specification languages and standards, e.g., FML, BML, MURML
* Speech-gesture synchronization, e.g., open-loop vs. closed-loop approaches
* Situatedness within social/environmental contexts
* Feedback-based user adaptation
* Cognitive modeling of gesture and speech
SUBMISSIONS:
Workshop contributions should be submitted via e-mail in the ACM publication style to icmi2012ws.speech.gesture@gmail.com in one of the following formats:
* Full paper (5-6 pages, PDF file)
* Short position paper (2-4 pages, PDF file)
* Demo video (1-3 minutes, common file formats, e.g., AVI or MP4) including an extended abstract (1-2 pages, PDF file)
If a submission exceeds 10MB, it should be made available online and a URL should be provided instead.
Submitted papers and abstracts should conform to the ACM publication style; for templates and examples, follow the link: http://www.acm.org/sigs/pubs/proceed/template.html.
Accepted papers will be included in the workshop proceedings in ACM Digital Library; video submissions and accompanying abstracts will be published on the workshop website. Contributors will be invited to give either an oral or a video presentation at the workshop.
PROGRAM COMMITTEE:
* Dan Bohus (Microsoft Research)
* Kerstin Dautenhahn (University of Hertfordshire)
* Jonathan Gratch (USC Institute for Creative Technologies)
* Alexis Heloir (German Research Center for Artificial Intelligence)
* Takayuki Kanda (ATR Intelligent Robotics and Communication Laboratories)
* Jina Lee (Sandia National Laboratories)
* Stacy Marsella (USC Institute for Creative Technologies)
* Maja Matarić (University of Southern California)
* Louis-Philippe Morency (USC Institute for Creative Technologies)
* Bilge Mutlu (University of Wisconsin-Madison)
* Victor Ng-Thow-Hing (Honda Research Institute USA)
* Catherine Pelachaud (TELECOM ParisTech)
WORKSHOP ORGANIZERS:
* Ross Mead (University of Southern California)
* Maha Salem (Bielefeld University)
CONTACT:
* Workshop Questions and Submissions (icmi2012ws.speech.gesture@gmail.com)
* Ross Mead (rossmead@usc.edu)
* Maha Salem (msalem@cor-lab.uni-bielefeld.de)
3-3-5 | (2012-10-29) Workshop on Audio and Multimedia Methods for Large-Scale Video Analysis, Nara, Japan
Audio and Multimedia Methods for Large-Scale Video Analysis, First ACM International Workshop at ACM Multimedia 2012. Extended submission deadline: July 15th 2012. Workshop submissions of 4-6 pages.
3-3-6 | (2012-11-01) AMTA Workshop on Translation and Social Media (TSM 2012)
Call for Papers - November 1st, 2012, San Diego, CA, USA - http://www.eu-bridge.eu/tsm_amta2012.php
--------------- The Workshop ---------------
During the last couple of years, user generated content on the World Wide Web has increased significantly. Users post status updates, comments, news and observations on services like Twitter; they communicate with networks of friends through web pages like Facebook; and they produce and publish audio and audio-visual content, such as comments, lectures or entertainment in the form of videos on platforms such as YouTube, and as Podcasts, e.g., via iTunes.
Nowadays, users no longer publish content mainly in English; instead they publish in a multitude of languages. This means that, due to the language barrier, many users cannot access all available content. The use of machine and speech translation technology can help bridge the language barrier in these situations. However, in order to automatically translate these new domains, several obstacles have to be overcome:
· Speech recognition and translation systems need to be able to adapt rapidly to changing topics as user-generated content shifts in focus and topic.
· Text and speech in social media will be extremely noisy and ungrammatical, and will not adhere to conventional rules, instead following their own, continuously changing conventions.
At the same time, we expect to discover new possibilities to exploit social media content for improving speech recognition and translation systems in an opportunistic way, e.g., by finding and utilizing parallel corpora in multiple languages addressing the same topics, or by utilizing additional meta-information attached to the content, such as tags, comments and key-word lists. The network structure in social media could also provide valuable information for translating its content. The goal of this workshop is to bring together researchers in the area of machine and speech translation in order to discuss the challenges brought up by the content of social media, such as Facebook, Twitter, YouTube videos and podcasts.
--------------- Call for Papers ---------------
We expect participants to submit discussion papers that argue for new research and techniques necessary for dealing with machine and speech translation in the domain outlined above, as well as papers presenting results of related and potentially preliminary research that is breaking new ground.
--------------- Important Dates ---------------
· Full Paper submission deadline: July 31st
· Acceptance/Rejection: August 25th
· Camera Ready Paper: September 1st
· Workshop: November 1st
--------------- Organizing Committee ---------------
· Chairs: Satoshi Nakamura (NAIST, Japan) and Alex Waibel (KIT, Germany)
· Program Chairs: Graham Neubig (NAIST, Japan), Sebastian Stüker (KIT, Germany), and Joy Ying Zhang (CMU-SV, USA)
· Publicity Chair: Margit Rödder (KIT, Germany)
3-3-7 | (2012-11-13) International Conference on Asian Language Processing 2012 (IALP 2012), Hanoi, Vietnam
The International Conference on Asian Language Processing (IALP) is a conference series that has developed over the years into one of the important annual events in Asian language processing. This year, IALP 2012 will be held in Hanoi (Vietnamese: Hà Noi, 'River Interior'), the capital of Vietnam. We welcome you to Vietnam to experience its nature, history, and culture.
CONFERENCE TOPICS
Paper submissions are invited on substantial, original and unpublished research, including under-resourced language studies.
PAPER SUBMISSION
All submissions must be electronic and in Portable Document Format (PDF). The official language of the conference is English. Papers may be submitted until July 1, 2012, in PDF format via the START system.
IMPORTANT DATES
Submission deadline: Jul 1, 2012
MORE INFORMATION
To get other details and the latest information about the conference, please contact Pham Thi Ngoc Yen and Deyi Xiong.
3-3-8 | (2012-11-21) Albayzin 2012 Language Recognition Evaluation, Madrid, Spain
The Albayzin 2012 Language Recognition Evaluation (Albayzin 2012 LRE) is supported by the Spanish Thematic Network on Speech Technology (RTTH) and organized by the Software Technologies Working Group (GTTS) of the University of the Basque Country, with the key collaboration of Niko Brümmer, from Agnitio Research, South Africa, for defining the evaluation criterion and coding the script used to measure system performance. The evaluation workshop will be part of IberSpeech 2012, to be held in Madrid, Spain, from 21 to 23 November 2012.
Registration Deadline: July 16th 2012
Procedure: Submit an e-mail to the organization contact, luisjavier.rodriguez@ehu.es, with copy to the Chairs of the Albayzin 2012 Evaluations (javier.gonzalez@uam.es and javier.tejedor@uam.es), providing the following information:
Data delivery
Starting from June 15th 2012, and once registration data are validated, the training dataset (108 hours of broadcast speech for 6 target languages) and the development dataset (around 2000 audio segments including 10 target languages and Out-Of-Set languages) will be released via web (only to registered participants).
Schedule
Contact Luis Javier Rodríguez Fuentes Software Technologies Working Group (GTTS) Department of Electricity and Electronics (ZTF-FCT) University of the Basque Country (UPV/EHU) Barrio Sarriena s/n 48940 Leioa - SPAIN
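The announcement says the evaluation criterion was defined by Niko Brümmer but does not state it here. Language recognition evaluations of this kind commonly score calibrated log-likelihood ratios with Cllr (the log-likelihood-ratio cost); the sketch below shows the binary, per-target-language form as an assumption, not the official Albayzin 2012 metric.

```python
import math

def cllr(target_llrs, nontarget_llrs):
    """Binary Cllr in bits: 0 for a perfect system, 1 for an uninformative one.

    target_llrs: log-likelihood ratios for trials where the target language
    was spoken; nontarget_llrs: LLRs for all other trials.
    """
    c_tar = sum(math.log2(1.0 + math.exp(-llr)) for llr in target_llrs)
    c_non = sum(math.log2(1.0 + math.exp(llr)) for llr in nontarget_llrs)
    return 0.5 * (c_tar / len(target_llrs) + c_non / len(nontarget_llrs))

# A system that always outputs llr = 0 carries no information: Cllr = 1 bit.
uninformative = cllr([0.0, 0.0, 0.0], [0.0, 0.0, 0.0])

# Confidently correct LLRs (large positive for targets, large negative for
# non-targets) drive Cllr toward 0.
confident = cllr([12.0, 9.0], [-10.0, -11.0])
```

Because Cllr penalizes over-confident wrong LLRs heavily, it rewards calibration as well as discrimination, which is why it is popular in forensic-style evaluations.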
3-3-9 | (2012-11-28) International Workshop on Spoken Dialog Systems (IWSDS 2012), Paris, France
International Workshop on Spoken Dialog Systems (IWSDS 2012): Towards a Natural Interaction with Robots, Knowbots and Smartphones. Paris, France, November 28-30, 2012
http://www.uni-ulm.de/en/in/iwsds2012
Second Announcement
Following the success of IWSDS 2009 (Irsee, Germany), IWSDS 2010 (Gotemba Kogen Resort, Japan) and IWSDS 2011 (Granada, Spain), the Fourth International Workshop on Spoken Dialog Systems (IWSDS 2012) will be held in Paris (France) on November 28-30, 2012. The IWSDS Workshop series provides an international forum for the presentation of research and applications and for lively discussions among researchers as well as industrialists, with a special interest in the practical implementation of Spoken Dialog Systems in everyday applications. Scientific achievements in language processing now result in the development of successful applications such as IBM Watson, Evi, Apple Siri or Google Assistant for access to knowledge and interaction with smartphones, while the advent of domestic robots calls for the development of powerful means of communication with their human users and fellow robots. We have therefore put this year's workshop under the theme 'Towards a Natural Interaction with Robots, Knowbots and Smartphones', which covers:
- Dialog for robot interaction (including ethics),
- Dialog for Open Domain knowledge access,
- Dialog for interacting with smartphones,
- Mediated dialog (including multilingual dialog involving Speech Translation),
- Dialog quality evaluation.
We would also like to encourage the discussion of common issues of theories, applications, evaluation, limitations, general tools and techniques, and therefore also invite the submission of original papers in any related area, including but not limited to:
- Speech recognition and semantic analysis,
- Dialog management, adaptive dialog modeling,
- Recognition of emotions from speech, gestures, facial expressions and physiological data,
- Emotional and interactional dynamic profile of the speaker during dialog, user modeling,
- Planning and reasoning capabilities for coordination and conflict description,
- Conflict resolution in complex multi-level decisions,
- Multi-modality such as graphics, gesture and speech for input and output,
- Fusion, fission and information management, learning and adaptability,
- Visual processing and recognition for advanced human-computer interaction,
- Spoken dialog databases and corpora, including methodologies and ethics,
- Objective and subjective spoken dialog evaluation methodologies, strategies and paradigms,
- Spoken dialog prototypes and products, etc.
We particularly welcome papers that can be illustrated by a demonstration, and we will organize the conference so as to best accommodate these papers, whatever their category.
*PAPER SUBMISSION*
We distinguish between the following categories of submissions:
Long Research Papers are reserved for reports on mature research results. The expected length of a long paper is in the range of 8-12 pages.
Short Research Papers should not exceed 6 pages in total. Authors may choose this category if they wish to report on smaller case studies or on ongoing but interesting and original research efforts.
Demo - System Papers: Authors who wish to demonstrate their system may choose this category and provide a description of their system and demo. System papers should not exceed 6 pages in total.
As usual, it is planned that a selection of accepted papers will be published in a book by Springer following the conference.
*IMPORTANT DATES*
Deadline for submission: July 16, 2012
Notification of acceptance: September 15, 2012
Deadline for final submission of accepted paper: October 8, 2012
Deadline for Early Bird registration: October 8, 2012
Final program available online: November 5, 2012
Workshop: November 28-30, 2012
VENUE: IWSDS 2012 will be held as a two-day residential seminar in the wonderful Castle of Ermenonville near Paris, France, where all attendees will be accommodated.
IWSDS Steering Committee: Gary Geunbae Lee (POSTECH, Pohang, Korea), Ramón López-Cózar (Univ. of Granada, Spain), Joseph Mariani (LIMSI and IMMI-CNRS, Orsay, France), Wolfgang Minker (Ulm Univ., Germany), Satoshi Nakamura (Nara Institute of Science and Technology, Japan)
IWSDS 2012 Program Committee: Joseph Mariani (LIMSI & IMMI-CNRS, Chair), Laurence Devillers (LIMSI-CNRS & Univ. Paris-Sorbonne 4), Martine Garnier-Rizet (IMMI-CNRS), Sophie Rosset (LIMSI-CNRS)
Organization Committee: Martine Garnier-Rizet (Chair), Lynn Barreteau, Joseph Mariani (IMMI-CNRS)
Supporting organizations (to be completed): IMMI-CNRS and LIMSI-CNRS (France), Postech (Korea), University of Granada (Spain), Nara Institute of Science and Technology and NICT (Japan), Ulm University (Germany)
Scientific Committee: To be announced
Sponsors: To be announced
Please contact iwsds2012@immi-labs.org or visit http://www.uni-ulm.de/en/in/iwsds2012 for more information.
3-3-10 | (2012-12-02) SLT 2012: 4th IEEE Workshop on Spoken Language Technology, Miami, Florida, December 2-5, 2012
CALL FOR PAPERS
The Fourth IEEE Workshop on Spoken Language Technology (SLT) will be held on December 2-5, 2012 in Miami, FL. The goal of this workshop is to allow the speech/language processing community to share and present recent advances in various areas of spoken language technology. SLT will include oral and poster presentations. In addition, there will be three keynote addresses by well-known experts on topics such as machine learning and speech/language processing. The workshop will also include free pre-workshop tutorials offering both introductions to, and recent advances in, spoken language technology. Submission of papers in all areas of spoken language technology is encouraged, with emphasis on the following topics:
Important Deadlines
Submission Procedure Prospective authors are invited to submit full-length, 4-6 page papers, including figures and references, to the SLT 2012 website. All papers will be handled and reviewed electronically. Please note that the submission dates for papers are strict deadlines.
3-3-11 | (2012-12-03) UNSW Forensic Speech Science Conference, Sydney, 2012
3-3-12 | (2012-12-06) 9th International Workshop on Spoken Language Translation, Hong Kong, China
The 9th International Workshop on Spoken Language Translation will take place in Hong Kong, China. Details can be found on the conference website http://iwslt2012.org/
3-3-13 | (2012-12-15) CfP 3rd Workshop on 'Cognitive Aspects of the Lexicon' (CogALex), Mumbai, India 2nd Call for Papers
3-3-14 | (2013-01-17) Tralogy II: The quest for meaning: where are our weak points and what do we need?, CNRS, Paris
Tralogy II: Human and Machine Translation. The quest for meaning: where are our weak points and what do we need?
3-3-15 | (2013-02-11) International Conference on Bio-inspired Systems and Signal Processing (BIOSIGNALS), Barcelona, Spain
CALL FOR PAPERS
International Conference on Bio-inspired Systems and Signal Processing (BIOSIGNALS)
website: http://www.biosignals.biostec.org
February 11 - 14, 2013, Barcelona, Spain
In Collaboration with: UVIC
Sponsored by: INSTICC
INSTICC is Member of: WfMC
IMPORTANT DATES: Regular Paper Submission: September 3, 2012 (deadline extended)
Authors Notification (regular papers): October 23, 2012
Final Regular Paper Submission and Registration: November 13, 2012
The conference will be sponsored by the Institute for Systems and Technologies of Information,
Control and Communication (INSTICC) and held in collaboration with the Universitat
de Vic (UVIC). INSTICC is a member of the Workflow Management Coalition (WfMC).
We would like to highlight the presence of the following keynote speakers:
- Pedro Gomez Vilda, Universidad Politecnica de Madrid, Spain
- Christian Jutten, GIPSA-lab, France
- Adam Kampff, Champalimaud Foundation, Portugal
- Richard Reilly, Trinity College Dublin, Ireland
- Vladimir Devyatkov, Bauman Moscow State Technical University, Russian Federation
Details can be found on the Keynotes webpage available at:
http://www.biostec.org/KeynoteSpeakers.aspx
Submitted papers will be subject to a double-blind review process. All accepted papers
(full, short and posters) will be published in the conference proceedings, under an ISBN
reference, on paper and on CD-ROM. A short list of presented papers
will be selected, and revised and extended versions of these papers will be published by Springer-Verlag in a CCIS Series book. The proceedings will be submitted for indexing
by the Thomson Reuters Conference Proceedings Citation Index (ISI), INSPEC, DBLP and EI (Elsevier Index). All papers presented at the conference venue will be available at the
SciTePress Digital Library (http://www.scitepress.org/DigitalLibrary/). SciTePress is a member of CrossRef (http://www.crossref.org/). We would also like to highlight the possibility to submit to the following Special Session:
- 3rd International Special Session on Multivariable Processing for
Biometric Systems - MPBS (http://www.biosignals.biostec.org/MPBS.aspx)
Please check further details at the BIOSIGNALS conference website
(http://www.biosignals.biostec.org).
3-3-16 | (2013-06-01) 2nd CHiME Speech Separation and Recognition Challenge, Vancouver, Canada
3-3-17 | (2013-06-18) Urgent Call for Participation: NTCIR-10 IR for Spoken Documents Task (SpokenDoc-2) http://www.cl.ics.tut.ac.jp/~sdpwg/index.php?ntcir10
== INTRODUCTION The growth of the internet and decreasing storage costs are leading to a rapid increase in multimedia content today. For retrieving this content, available text-based tag information is limited. Spoken Document Retrieval (SDR) is a promising technology for retrieving this content using the speech data it contains. Following the NTCIR-9 SpokenDoc task, we will continue to evaluate SDR under a realistic ASR condition, where the target documents are spontaneous speech data with high word error rates and high out-of-vocabulary rates.
== TASK OVERVIEW New speech data, the recordings of the first to sixth annual Spoken Document Processing Workshops, will be used as the target documents in SpokenDoc-2. The larger speech data set, spoken lectures in the Corpus of Spontaneous Japanese (CSJ), will also be used, as in SpokenDoc-1. The task organizers will provide reference automatic transcriptions for these speech data. These enable researchers who are interested in SDR but lack access to their own ASR system to participate in the tasks, and they allow comparisons of IR methods based on the same underlying ASR performance. Targeting these documents, two subtasks will be conducted.
Spoken Term Detection: Within spoken documents, find the occurrence positions of a queried term. Evaluation will consider both efficiency (search time) and effectiveness (precision and recall).
Spoken Content Retrieval: Among spoken documents, find the segments containing information relevant to the query, where a segment is either a document (a document retrieval task) or a passage (a passage retrieval task). This is like an ad-hoc text retrieval task, except that the target documents are speech data.
== FOR MORE DETAILS Please visit http://www.cl.ics.tut.ac.jp/~sdpwg/index.php?ntcir10 A link to the NTCIR-10 task participants registration page is now available from this page. Please note that the registration deadline is Jun 30, 2012 (for all NTCIR-10 tasks).
== ORGANIZERS Kiyoaki Aikawa (Tokyo University of Technology) Tomoyosi Akiba (Toyohashi University of Technology) Xinhui Hu (National Institute of Information and Communications Technology) Yoshiaki Itoh (Iwate Prefectural University) Tatsuya Kawahara (Kyoto University) Seiichi Nakagawa (Toyohashi University of Technology) Hiroaki Nanjo (Ryukoku University) Hiromitsu Nishizaki (University of Yamanashi) Yoichi Yamashita (Ritsumeikan University) If you have any questions, please send e-mails to the task organizers mailing list: ntcadm-spokendoc2@nlp.cs.tut.ac.jp
======================================================================
3-3-18 | (2013-07-03) CorpORA and Tools in Linguistics, Languages and Speech: Status, Uses and Misuse, Strasbourg, France. Conference organised by the Research Unit 1339 Linguistics, Languages and Speech (LiLPa), University of Strasbourg – UNISTRA, 3 – 5 July 2013, Strasbourg, France.
3-3-19 | (2013-10-23) 5th Journées de Phonétique Clinique (JPhC), Liège (Belgium), which will take place in Liège on 23-25 October 2013.
This workshop series began in Paris in 2005 (www.cavi.univ-paris3.fr/ilpga/JPC-2005/) and is held every two years: in Grenoble in 2007, in Aix-en-Provence in 2009 (aune.lpl.univ-aix.fr/~jpc3/) and in Strasbourg in 2011 (journees-phonetique-clinique.u-strasbg.fr/). The 2013 edition will be held in Liège (Belgium), organised by the Voice Logopedics unit of the University of Liège (Psychology: Cognition and Behaviour) in close collaboration with the Laboratories of Images, Signals and Telecommunication Devices of the Université Libre de Bruxelles.
Clinical phonetics mainly brings together researchers, academics, engineers, physicians and speech-language pathologists: complementary professions pursuing the same goal, namely a better understanding of the processes of acquisition, development and degeneration of language, speech and voice. This interdisciplinary approach aims to deepen fundamental knowledge of spoken communication in order to better understand, assess and treat speech and voice disorders in pathological subjects.
In this context, this series of international conferences on speech production and perception in pathological subjects offers professionals, established researchers and young researchers from different backgrounds an opportunity to present new experimental results and to exchange ideas from diverse perspectives. Presentations will address studies of pathological speech and voice in both adults and children.
We hope to see many of you at this 5th Journées de Phonétique Clinique. More information is available at: https://w3.fapse.ulg.ac.be/conferences/JPhC5/index.php
3-3-20 | (2014) Speech Prosody 2014 in Dublin.
3-3-21 | Call for Participation: MediaEval 2012 Multimedia Benchmark Evaluation
3-3-22 | Call for Proposals: 42nd IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2017), Sponsored by the IEEE Signal Processing Society
This Call for Proposals is distributed on behalf of the IEEE Signal Processing Society Conference Board for the 42nd IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), to be held in March or April of 2017. ICASSP is the world's largest and most comprehensive technical conference focused on signal processing theory and applications. The series is sponsored by the IEEE Signal Processing Society and has been held annually since 1976. The conference features world-class speakers, tutorials, exhibits, and over 120 lecture and poster sessions. ICASSP is a cooperative effort of the IEEE Signal Processing Society Technical Committees.
The conference organizing team is advised to incorporate the following items into their proposal.
Submission of Proposal: For additional guidelines for ICASSP, please contact Lisa Schwarzbek, Manager, Conference Services (l.schwarzbek@ieee.org). Proposal Presentation