ISCApad #160
Saturday, October 08, 2011, by Chris Wellekens
5-1-1 | Alain Marchal, Christian Cave, L'imagerie medicale pour l'etude de la parole
Alain Marchal, Christian Cave, Eds. Hermes Lavoisier. 99 euros • 304 pages • 16 x 24 • 2009 • ISBN: 978-2-7462-2235-9
From the laryngeal mirror to today's videofibroscopy, from static impression-taking to dynamic palatography, and from the beginnings of radiography to magnetic resonance imaging and magnetoencephalography, this book reviews the various imaging techniques used to study speech, from the standpoint of production as well as perception. The advantages, drawbacks and limits of each technique are discussed, together with the main results obtained with each of them and their prospects for further development. Written by specialists who aim to remain accessible to a broad readership, the book is intended for everyone who studies speech or deals with it professionally, such as phoniatricians, ENT specialists, speech therapists and, of course, phoneticians and linguists.
5-1-2 | Christoph Draxler, Korpusbasierte Sprachverarbeitung Author: Christoph Draxler
5-1-3 | Robert M. Gray, Linear Predictive Coding and the Internet Protocol Linear Predictive Coding and the Internet Protocol, by Robert M. Gray, a special edition hardback book from Foundations and Trends in Signal Processing (FnT SP). The book brings together two forthcoming issues of FnT SP, the first being a survey of LPC, the second a unique history of realtime digital speech on packet networks.
Volume 3, Issue 3
A Survey of Linear Predictive Coding: Part 1 of LPC and the IP
By Robert M. Gray (Stanford University)
http://www.nowpublishers.com/product.aspx?product=SIG&doi=2000000029
Volume 3, Issue 4
A History of Realtime Digital Speech on Packet Networks: Part 2 of LPC and the IP
By Robert M. Gray (Stanford University)
http://www.nowpublishers.com/product.aspx?product=SIG&doi=2000000036
The links above will take you to the article abstracts.
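For readers unfamiliar with LPC, the following is a minimal, self-contained sketch (in Python, not taken from the book) of the core analysis step the survey covers: estimating all-pole predictor coefficients for one speech frame with the autocorrelation method and the Levinson-Durbin recursion. The function name and the synthetic example frame are illustrative assumptions.

import numpy as np

def lpc_coefficients(frame, order=10):
    """Estimate the coefficients a[0..order] (with a[0] = 1) of the prediction
    error filter A(z) = 1 + a[1] z^-1 + ... + a[order] z^-order for one
    windowed speech frame, using the autocorrelation method."""
    n = len(frame)
    # Autocorrelation for lags 0..order
    r = np.array([np.dot(frame[:n - k], frame[k:]) for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    error = r[0]
    for i in range(1, order + 1):
        # Reflection coefficient for this model order
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])
        k = -acc / error
        # Order update of the predictor coefficients
        a_prev = a.copy()
        a[1:i] = a_prev[1:i] + k * a_prev[i - 1:0:-1]
        a[i] = k
        error *= (1.0 - k * k)
    return a, error

# Example: analyze a 30 ms synthetic frame sampled at 8 kHz
fs = 8000
t = np.arange(240) / fs
frame = np.hamming(240) * np.sin(2 * np.pi * 200 * t)
a, residual_energy = lpc_coefficients(frame, order=10)
print(a, residual_energy)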
5-1-4 | M. Embarki and M. Ennaji, Modern Trends in Arabic Dialectology
5-1-5 | Gokhan Tur, R. De Mori, Spoken Language Understanding: Systems for Extracting Semantic Information from Speech
Title: Spoken Language Understanding: Systems for Extracting Semantic Information from Speech
Editors: Gokhan Tur and Renato De Mori
Web: http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470688246.html
Brief description: Spoken language understanding (SLU) is an emerging field between speech and language processing that investigates human/machine and human/human communication by leveraging technologies from signal processing, pattern recognition, machine learning and artificial intelligence. SLU systems are designed to extract the meaning from speech utterances, and their applications are vast, from voice search on mobile devices to meeting summarization, attracting interest from both commercial and academic sectors. Both human/machine and human/human communications can benefit from the application of SLU, using differing tasks and approaches to better understand and utilize such communications. This book covers the state-of-the-art approaches for the most popular SLU tasks, with chapters written by well-known researchers in the respective fields. Key features:
- Presents a fully integrated view of the two distinct disciplines of speech processing and language processing for SLU tasks.
- Defines what is possible today for SLU as an enabling technology for enterprise (e.g., customer care centers or company meetings) and consumer (e.g., entertainment, mobile, car, robot, or smart environments) applications, and outlines the key research areas.
- Provides a unique source of distilled information on methods for computer modeling of semantic information in human/machine and human/human conversations.
This book can be used successfully for graduate courses in electronics engineering, computer science or computational linguistics. Moreover, technologists interested in processing spoken communications will find it a useful source of collated information on the topic, drawn from the two distinct disciplines of speech processing and language processing under the new area of SLU.
5-1-6 | Jody Kreiman, Diana Van Lancker Sidtis, Foundations of Voice Studies: An Interdisciplinary Approach to Voice Production and Perception
5-1-7 | G. Nick Clements and Rachid Ridouane, Where Do Phonological Features Come From?
Where Do Phonological Features Come From?
Edited by G. Nick Clements and Rachid Ridouane CNRS & Sorbonne-Nouvelle This volume offers a timely reconsideration of the function, content, and origin of phonological features, in a set of papers that is theoretically diverse yet thematically strongly coherent. Most of the papers were originally presented at the International Conference 'Where Do Features Come From?' held at the Sorbonne University, Paris, October 4-5, 2007. Several invited papers are included as well. The articles discuss issues concerning the mental status of distinctive features, their role in speech production and perception, the relation they bear to measurable physical properties in the articulatory and acoustic/auditory domains, and their role in language development. Multiple disciplinary perspectives are explored, including those of general linguistics, phonetic and speech sciences, and language acquisition. The larger goal was to address current issues in feature theory and to take a step towards synthesizing recent advances in order to present a current 'state of the art' of the field.
5-1-8 | Dorothea Kolossa and Reinhold Haeb-Umbach: Robust Speech Recognition of Uncertain or Missing Data
Title: Robust Speech Recognition of Uncertain or Missing Data
Editors: Dorothea Kolossa and Reinhold Haeb-Umbach
Publisher: Springer
Year: 2011
ISBN: 978-3-642-21316-8
Link: http://www.springer.com/engineering/signals/book/978-3-642-21316-8?detailsPage=authorsAndEditors
Automatic speech recognition suffers from a lack of robustness with respect to noise, reverberation and interfering speech. The growing field of speech recognition in the presence of missing or uncertain input data seeks to ameliorate these problems by using not only a preprocessed speech signal but also an estimate of its reliability, so that recognition can selectively focus on the segments and features that are most reliable. This book presents the state of the art in recognition under uncertainty, with examples that use uncertainty information for noise robustness, reverberation robustness, simultaneous recognition of multiple speech signals, and audiovisual speech recognition. The book is intended for scientists and researchers in speech recognition, who will find an overview of the state of the art in robust speech recognition; for professionals working on speech recognition, who will find strategies for improving recognition results under various conditions of mismatch; and for lecturers of advanced courses on speech processing or speech recognition, who will find a reference and a comprehensive introduction to the field. The book assumes an understanding of the fundamentals of speech recognition using hidden Markov models.
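To give a flavor of the kind of technique covered, here is a minimal sketch (in Python, not taken from the book) of the classic missing-data idea: given a binary reliability mask over the feature components of a frame, a diagonal-covariance Gaussian is scored using only the components judged reliable, with the unreliable ones marginalized out. All names and values are illustrative assumptions.

import numpy as np

def masked_log_likelihood(x, reliable, mean, var):
    """Log-likelihood of feature vector x under a diagonal Gaussian,
    evaluated only over the components marked reliable (the unreliable
    components are marginalized out, i.e. simply ignored)."""
    d = x[reliable] - mean[reliable]
    v = var[reliable]
    return -0.5 * np.sum(np.log(2.0 * np.pi * v) + d * d / v)

# Example: a 4-dimensional frame whose upper two components are judged
# unreliable (e.g. dominated by noise), scored against one Gaussian.
x = np.array([1.0, 0.5, 9.0, 12.0])              # observed features
reliable = np.array([True, True, False, False])  # reliability mask
mean = np.zeros(4)
var = np.ones(4)
print(masked_log_likelihood(x, reliable, mean, var))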
5-1-9 | Mohamed Embarki and Christelle Dodane: La coarticulation
LA COARTICULATION
Mohamed Embarki and Christelle Dodane
Speech is made up of complex articulatory gestures that overlap in space and time. These overlaps, conceptualized under the term coarticulation, spare no articulator: they can be observed in the movements of the jaw, the lips, the tongue, the velum and the vocal folds. Coarticulation is also expected by the listener, and coarticulated segments are better perceived. It plays a part in the cognitive and linguistic processes of speech encoding and decoding. Far more than a simple process, coarticulation is a structured field of research with its own concepts and models. This collective volume brings together original contributions from international researchers who approach coarticulation from the motor, acoustic, perceptual and linguistic points of view. It is the first book on this topic published in French, and the first to explore it across different languages.
Series: Langue & Parole, L'Harmattan. ISBN: 978-2-296-55503-7 • 25 € • 260 pages
Mohamed Embarki and Christelle Dodane
5-1-10 | Ben Gold, Nelson Morgan, Dan Ellis: Speech and Audio Signal Processing: Processing and Perception of Speech and Music [Digital]
Ben Gold, Nelson Morgan, Dan Ellis
http://www.amazon.com/Speech-Audio-Signal-Processing-Perception/dp/product-description/1118142888
5-2-1 | ELRA - Language Resources Catalogue - Update (2011-09)
5-2-2 | ELRA - Language Resources Catalogue - Special Offer
5-2-3 | LDC Newsletter (September 2011)
In this newsletter:
- Cataloging the communication of Asian Elephants
- New publications: 2006 NIST/USF Evaluation Resources for the VACE Program - Meeting Data Test Set Part 1

Cataloging the communication of Asian Elephants
LDC distributes a broad selection of databases, the majority of which are used for human language research and technology development. Our corpus catalog also includes the vocalizations of other animal species. We'd like to highlight the intriguing work behind one such animal communication corpus, Asian Elephant Vocalizations LDC2010S05.

New publications
(1) 2006 NIST/USF Evaluation Resources for the VACE Program - Meeting Data Test Set Part 1 was developed by researchers at the Department of Computer Science and Engineering, University of South Florida (USF), Tampa, Florida, and the Multimodal Information Group at the National Institute of Standards and Technology (NIST). It contains approximately fifteen hours of meeting room video data collected in 2005 and 2006 and annotated for the VACE (Video Analysis and Content Extraction) 2006 face and person tracking tasks.
The VACE program was established to develop novel algorithms for automatic video content extraction, multi-modal fusion, and event understanding. During VACE Phases I and II, the program made significant progress in the automated detection and tracking of moving objects, including faces, hands, people, vehicles and text, in four primary video domains: broadcast news, meetings, street surveillance, and unmanned aerial vehicle motion imagery. Initial results were also obtained on automatic analysis of human activities and understanding of video sequences. Three performance evaluations were conducted under the auspices of the VACE program between 2004 and 2007. In 2006, the VACE program and the European Union's Computers in the Human Interaction Loop (CHIL) project collaborated to hold the CLassification of Events, Activities and Relationships (CLEAR) Evaluation, an international effort to evaluate systems designed to analyze people, their identities, activities, interactions and relationships in human-human interaction scenarios, as well as related scenarios. The VACE program contributed the evaluation infrastructure (e.g., data, scoring, tools) for a specific set of tasks, and the CHIL consortium, coordinated by the Karlsruhe Institute of Technology, contributed a separate set of evaluation infrastructure.
The meeting room data used for the 2006 test set was collected in 2005 and 2006 by the following sites: Carnegie Mellon University (USA), University of Edinburgh (Scotland), IDIAP Research Institute (Switzerland), NIST (USA), Netherlands Organization for Applied Scientific Research (Netherlands) and Virginia Polytechnic Institute and State University (USA). Each site had its own independent camera setup, illumination, viewpoints, people and topics. Most of the datasets included High-Definition (HD) recordings, but those were subsequently converted to MPEG-2 for the evaluation.
(2) 2008 NIST Speaker Recognition Evaluation Training Set Part 2
The 2008 evaluation was distinguished from prior evaluations, in particular those of 2005 and 2006, by including not only conversational telephone speech data but also conversational speech data of comparable duration recorded over a microphone channel in an interview scenario. The speech data in this release was collected in 2007 by LDC at its Human Subjects Data Collection Laboratories in Philadelphia and by the International Computer Science Institute (ICSI) at the University of California, Berkeley. This collection was part of the Mixer 5 project, which was designed to support the development of robust speaker recognition technology by providing carefully collected and audited speech from a large pool of speakers recorded simultaneously across numerous microphones and in different communicative situations and/or in multiple languages. Mixer participants were native English speakers and bilingual English speakers. The telephone speech in this corpus is predominantly English; all interview segments are in English. Telephone speech represents approximately 523 hours of the data, and microphone speech represents the other 427 hours. The telephone speech segments are summed-channel excerpts of roughly 5 minutes taken from longer original conversations. The interview material consists of single-channel interview segments of at least 8 minutes taken from a longer interview session. English-language transcripts were produced using an automatic speech recognition (ASR) system.
2008 NIST Speaker Recognition Evaluation Training Set Part 2 is distributed on 7 DVD-ROM. 2011 Subscription Members will automatically receive two copies of this corpus. 2011 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for $2000.

(3) French Gigaword Third Edition is a comprehensive archive of newswire text data that has been acquired over several years by LDC. This third edition updates French Gigaword Second Edition (LDC2009T28) and adds material collected from January 1, 2009 through December 31, 2010. The two distinct international sources of French newswire in this edition, and the time spans of collection covered for each, are as follows:
- Agence France-Presse (afp_fre): May 1994 - Dec. 2010
- Associated Press French Service (apw_fre): Nov. 1994 - Dec. 2010
All text data are presented in SGML form, using a very simple, minimal markup structure; all text consists of printable ASCII, white space, and printable code points in the 'Latin1 Supplement' character table, as defined by the Unicode Standard (ISO 10646) for the 'accented' characters used in French. The Supplement/accented characters are presented in UTF-8 encoding. In the overall totals for each source, the 'Totl-MB' numbers show the amount of data when the files are uncompressed (approximately 15 gigabytes in total); the 'Gzip-MB' numbers show totals for compressed file sizes as stored on the DVD-ROM; and the 'K-wrds' numbers are simply the number of white space-separated tokens (of all types) after all SGML tags are eliminated.
French Gigaword Third Edition is distributed on 1 DVD-ROM. 2011 Subscription Members will automatically receive two copies of this corpus. 2011 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$4500.
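As an illustration of the word-count convention described above (white space-separated tokens counted after all SGML tags are removed), here is a minimal Python sketch; the sample document string is invented, not taken from the corpus.

import re

def count_tokens(sgml_text):
    """Strip SGML tags, then count whitespace-separated tokens."""
    text = re.sub(r'<[^>]+>', ' ', sgml_text)
    return len(text.split())

sample = '<DOC id="XYZ_FRE_0001" type="story"><TEXT><P>Le chat dort.</P></TEXT></DOC>'
print(count_tokens(sample))  # -> 3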
5-2-4 | Speechocean October 2011 update
Speechocean has more than 200 large language resources, and some of these databases can be used freely by our members for academic research purposes. As an ISCA member, we will also be glad to share these databases with other ISCA members.
Speechocean - Language Resource Catalogue - New Releases (2011-10)
Speechocean, as a global provider of language resources and data services, has more than 200 large-scale databases available in over 80 languages and accents, covering the fields of Text to Speech, Automatic Speech Recognition, Text, Machine Translation, Web Search, Videos, Images, etc. Speechocean is glad to announce that more speech resources have been released:
Turkish Speech Recognition Database (Desktop) --- 201 speakers
This Turkish desktop speech recognition database was collected by Speechocean's project team in Turkey. It is one of the databases in our Speech Data - Desktop (SDD) project, which currently contains collections in 30 languages. All audio files are manually transcribed and labelled. A pronunciation lexicon with a phonetic transcription in SAMPA is also included. For more information, please see the technical document at the following link: http://www.speechocean.com/en-ASR-Corpora/789.html
Turkish Speech Recognition Database (In-car) --- 316 speakers
This Turkish in-car speech recognition database was collected by Speechocean's project team in Turkey. It is one of the databases in our Speech Data - Car (SDC) project, which currently contains collections in more than 30 languages. The script was specially designed to provide material for both training and testing of many classes of speech recognizers and contains 320 utterances covering 15 categories and 35 sub-categories for each speaker. Each speaker was recorded in two environments and in three variations (parked, city driving and highway driving), under various recording conditions such as motor running, fan on/off and window up/down. A total of 320 utterances were recorded for each speaker across the two environments (160 utterances and spontaneous sentences per environment).
All audio files are manually transcribed and labelled. A pronunciation lexicon with a phonetic transcription in SAMPA is also included. For more information, please see the technical document at the following link: http://www.speechocean.com/en-ASR-Corpora/793.html
France French Speech Recognition Database (Desktop) --- 200 speakers
This France French desktop speech recognition database was collected by Speechocean's project team in France. It is one of the databases in our Speech Data - Desktop (SDD) project, which currently contains collections in 30 languages. All audio files are manually transcribed and labelled. A pronunciation lexicon with a phonetic transcription in SAMPA is also included. For more information, please see the technical document at the following link: http://www.speechocean.com/en-ASR-Corpora/796.html
Spain Spanish Speech Recognition Database (Desktop) --- 210 speakers
This Spain Spanish desktop speech recognition database was collected by Speechocean's project team in Spain. It is one of the databases in our Speech Data - Desktop (SDD) project, which currently contains collections in 30 languages.
All audio files are manually transcribed and labelled. A pronunciation lexicon with a phonetic transcription in SAMPA is also included. For more information, please see the technical document at the following link: http://www.speechocean.com/en-ASR-Corpora/795.html
UK English Speech Recognition Database (Desktop) --- 200 speakers
This UK English desktop speech recognition database was collected by Speechocean's project team in the UK. It is one of the databases in our Speech Data - Desktop (SDD) project, which currently contains collections in 30 languages.
All audio files are manually transcribed and labelled. A pronunciation lexicon with a phonetic transcription in SAMPA is also included. For more information, please see the technical document at the following link: http://www.speechocean.com/en-ASR-Corpora/792.html
Portugal Portuguese Speech Recognition Database (Desktop) --- 200 speakers
This Portugal Portuguese desktop speech recognition database was collected by Speechocean's project team in Portugal. It is one of the databases in our Speech Data - Desktop (SDD) project, which currently contains collections in 30 languages.
All audio files are manually transcribed and labelled. A pronunciation lexicon with a phonetic transcription in SAMPA is also included. For more information, please see the technical document at the following link: http://www.speechocean.com/en-ASR-Corpora/791.html
Swedish Speech Recognition Database (Desktop) --- 200 speakers
This Swedish desktop speech recognition database was collected by Speechocean's project team in Sweden. It is one of the databases in our Speech Data - Desktop (SDD) project, which currently contains collections in 30 languages.
All audio files are manually transcribed and labelled. A pronunciation lexicon with a phonetic transcription in SAMPA is also included. For more information, please see the technical document at the following link: http://www.speechocean.com/en-ASR-Corpora/790.html
Canadian French Desktop Speech Recognition Corpus (200 speakers) launched in Canada
In response to a client's urgent demand, the Canadian French desktop speech recognition database (200 speakers) was collected by Speechocean's project team in Canada. This database belongs to Speechocean's Desktop Speech Data project.
All audio files are manually transcribed and labelled. A pronunciation lexicon with a phonetic transcription in SAMPA is also included. For more information, please see the technical document at the following link: http://www.speechocean.com/en-ASR-Corpora/733.html
Chinese Mandarin In-car Speech Recognition Database Successfully Released!
The Chinese Mandarin in-car speech recognition database has been released under catalogue serial number King-ASR-122. This database was produced for tuning and testing in-car speech recognition systems and belongs to SPC's multi-language In-car Speech Data project. The script was specially designed to provide material for both training and testing of many classes of speech recognizers and contains 320 utterances covering 15 categories and 35 sub-categories for each speaker.
All audio files are manually transcribed and labelled. A pronunciation lexicon with a phonetic transcription in SAMPA is also included. For more information, please see the technical document at the following link: http://www.speechocean.com/en-ASR-Corpora/781.html
The American Spanish Mobile Speech Recognition Database Successfully Released!
The American Spanish mobile speech recognition database has been released under catalogue serial number King-ASR-119. This database was produced for tuning and testing IVR/mobile speech recognition systems and belongs to SPC's multi-language Mobile Speech Data project.
All audio files are manually transcribed and labelled. A pronunciation lexicon with a phonetic transcription in SAMPA is also included. For more information, please see the technical document at the following link: http://www.speechocean.com/en-ASR-Corpora/779.html
Visit our on-line catalogue: http://www.speechocean.com/en-Product-Catalogue/Index.html
For more information about our databases and services, please visit our website: www.speechocean.com
If you have any inquiries regarding our databases and services, please feel free to contact us:
XiangFeng Cheng: Chengxianfeng@speechocean.com
Marta Gherardi: Marta@speechocean.com
5-3-1 | Matlab toolbox for glottal analysis
I am pleased to announce that our Matlab toolbox for glottal analysis is now available on the web at:
http://tcts.fpms.ac.be/~drugman/Toolbox/
This toolbox includes the following modules:
- Pitch and voiced-unvoiced decision estimation
- Speech polarity detection
- Glottal Closure Instant determination
- Glottal flow estimation
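Since the toolbox's own function names are not listed here, the sketch below (in Python rather than Matlab, purely illustrative and not the toolbox's API) shows the idea behind the first module above: an autocorrelation-based pitch estimate with a simple voiced/unvoiced decision. The threshold and parameter values are assumptions.

import numpy as np

def pitch_and_voicing(frame, fs, fmin=60.0, fmax=400.0, threshold=0.3):
    """Return (f0_hz, voiced) for one speech frame using the normalized
    autocorrelation peak within the plausible pitch-lag range."""
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    if ac[0] <= 0:
        return 0.0, False
    ac = ac / ac[0]                       # normalize so lag 0 equals 1
    lag_min = int(fs / fmax)
    lag_max = min(int(fs / fmin), len(ac) - 1)
    lag = lag_min + int(np.argmax(ac[lag_min:lag_max]))
    voiced = ac[lag] > threshold          # strong periodicity -> voiced
    return (fs / lag if voiced else 0.0), voiced

# Example on a synthetic 100 Hz voiced frame (40 ms at 16 kHz)
fs = 16000
t = np.arange(int(0.04 * fs)) / fs
f0, voiced = pitch_and_voicing(np.sin(2 * np.pi * 100 * t), fs)
print(f0, voiced)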
I am also glad to share my PhD thesis, entitled “Glottal Analysis and its Applications”: http://tcts.fpms.ac.be/~drugman/files/DrugmanPhDThesis.pdf
where you will find applications in speech synthesis, speaker recognition, voice pathology detection, and expressive speech analysis.
I hope this will be useful to you, and I look forward to seeing you soon.
Thomas Drugman