ISCApad #164
Saturday, February 11, 2012, by Chris Wellekens
5-1-1 | Robert M. Gray, Linear Predictive Coding and the Internet Protocol
Linear Predictive Coding and the Internet Protocol, by Robert M. Gray, is a special-edition hardback book from Foundations and Trends in Signal Processing (FnT SP). The book brings together two forthcoming issues of FnT SP: the first is a survey of LPC, the second a unique history of realtime digital speech on packet networks.
Volume 3, Issue 3
A Survey of Linear Predictive Coding: Part 1 of LPC and the IP By Robert M. Gray (Stanford University) http://www.nowpublishers.com/product.aspx?product=SIG&doi=2000000029
Volume 3, Issue 4
A History of Realtime Digital Speech on Packet Networks: Part 2 of LPC and the IP By Robert M. Gray (Stanford University) http://www.nowpublishers.com/product.aspx?product=SIG&doi=2000000036
The links above will take you to the article abstracts.
5-1-2 | M. Embarki and M. Ennaji, Modern Trends in Arabic Dialectology
5-1-3 | Gokhan Tur and Renato De Mori, Spoken Language Understanding: Systems for Extracting Semantic Information from Speech
Title: Spoken Language Understanding: Systems for Extracting Semantic Information from Speech
Editors: Gokhan Tur and Renato De Mori
Web: http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470688246.html
Brief Description: Spoken language understanding (SLU) is an emerging field between speech and language processing that investigates human/machine and human/human communication by leveraging technologies from signal processing, pattern recognition, machine learning and artificial intelligence. SLU systems are designed to extract the meaning from speech utterances, and their applications are vast, from voice search on mobile devices to meeting summarization, attracting interest from both commercial and academic sectors. Both human/machine and human/human communications can benefit from the application of SLU, using differing tasks and approaches to better understand and utilize such communications. This book covers state-of-the-art approaches for the most popular SLU tasks, with chapters written by well-known researchers in the respective fields. Key features:
- Presents a fully integrated view of the two distinct disciplines of speech processing and language processing for SLU tasks.
- Defines what is possible today for SLU as an enabling technology for enterprise applications (e.g., customer care centers or company meetings) and consumer applications (e.g., entertainment, mobile, car, robot, or smart environments), and outlines the key research areas.
- Provides a unique source of distilled information on methods for computer modeling of semantic information in human/machine and human/human conversations.
This book can be successfully used for graduate courses in electronics engineering, computer science or computational linguistics.
Moreover, technologists interested in processing spoken communications will find it a useful source of collated information on the topic, drawn from the two distinct disciplines of speech processing and language processing under the new area of SLU.
5-1-4 | Jody Kreiman and Diana Van Lancker Sidtis, Foundations of Voice Studies: An Interdisciplinary Approach to Voice Production and Perception
5-1-5 | G. Nick Clements and Rachid Ridouane, Where Do Phonological Features Come From?
Edited by G. Nick Clements and Rachid Ridouane (CNRS & Sorbonne-Nouvelle)
This volume offers a timely reconsideration of the function, content, and origin of phonological features, in a set of papers that is theoretically diverse yet thematically strongly coherent. Most of the papers were originally presented at the International Conference 'Where Do Features Come From?' held at the Sorbonne University, Paris, October 4-5, 2007. Several invited papers are included as well. The articles discuss issues concerning the mental status of distinctive features, their role in speech production and perception, the relation they bear to measurable physical properties in the articulatory and acoustic/auditory domains, and their role in language development. Multiple disciplinary perspectives are explored, including those of general linguistics, phonetic and speech sciences, and language acquisition. The larger goal was to address current issues in feature theory and to take a step towards synthesizing recent advances in order to present a current 'state of the art' of the field.
5-1-6 | Dorothea Kolossa and Reinhold Haeb-Umbach, Robust Speech Recognition of Uncertain or Missing Data
Title: Robust Speech Recognition of Uncertain or Missing Data
Editors: Dorothea Kolossa and Reinhold Haeb-Umbach
Publisher: Springer
Year: 2011
ISBN: 978-3-642-21316-8
Link: http://www.springer.com/engineering/signals/book/978-3-642-21316-8?detailsPage=authorsAndEditors
Automatic speech recognition suffers from a lack of robustness with respect to noise, reverberation and interfering speech. The growing field of speech recognition in the presence of missing or uncertain input data seeks to ameliorate these problems by using not only a preprocessed speech signal but also an estimate of its reliability, to selectively focus on those segments and features that are most reliable for recognition. This book presents the state of the art in recognition in the presence of uncertainty, offering examples that utilize uncertainty information for noise robustness, reverberation robustness, simultaneous recognition of multiple speech signals, and audiovisual speech recognition. The book is appropriate for scientists and researchers in the field of speech recognition, who will find an overview of the state of the art in robust speech recognition; for professionals working in speech recognition, who will find strategies for improving recognition results in various conditions of mismatch; and for lecturers of advanced courses on speech processing or speech recognition, who will find a reference and a comprehensive introduction to the field. The book assumes an understanding of the fundamentals of speech recognition using hidden Markov models.
5-1-7 | Mohamed Embarki and Christelle Dodane, La coarticulation
LA COARTICULATION
Mohamed Embarki and Christelle Dodane
Speech is made up of complex articulatory gestures that overlap in space and time. These overlaps, conceptualized under the term coarticulation, spare no articulator: they can be observed in the movements of the jaw, the lips, the tongue, the velum and the vocal folds. Coarticulation is also expected by the listener; coarticulated segments are better perceived. It takes part in the cognitive and linguistic processes of speech encoding and decoding. Far more than a simple process, coarticulation is a structured research field with its own concepts and models. This collective volume gathers original contributions from international researchers who address coarticulation from motor, acoustic, perceptual and linguistic points of view. It is the first book on this topic published in French, and the first to explore it across different languages.
Collection: Langue & Parole, L'Harmattan. ISBN: 978-2-296-55503-7. 25 €, 260 pages.
5-1-8 | Ben Gold, Nelson Morgan and Dan Ellis, Speech and Audio Signal Processing: Processing and Perception of Speech and Music (2nd edition)
Digital copy: http://www.amazon.com/Speech-Audio-Signal-Processing-Perception/dp/product-description/1118142888
Hardcopy: http://www.amazon.com/Speech-Audio-Signal-Processing-Perception/dp/0470195363/ref=sr_1_1?s=books&ie=UTF8&qid=1319142964&sr=1-1
5-1-9 | Video Proceedings ERMITES 2011
The video proceedings of the ERMITES 2011 workshop, 'Décomposition Parcimonieuse, Contraction et Structuration pour l'Analyse de Scènes', are online at http://glotin.univ-tln.fr/ERMITES11
They include some twenty hours of lectures (in .mpg) by:
- Y. Bengio, Montréal, 'Apprentissage Non-Supervisé de Représentations Profondes', http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Y_Bengio_1sur4.mp4 ...
- S. Mallat, Paris, 'Scattering & Matching Pursuit for Acoustic Sources Separation', http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Mallat_1sur3.mp4 ...
- J.-P. Haton, Nancy, 'Analyse de Scène et Reconnaissance Stochastique de la Parole', http://lsis.univ-tln.fr/~glotin/ERMITES_2011_JP_Haton_1sur4.mp4 ...
- M. Kowalski, Paris, 'Sparsity and structure for audio signal: a *-lasso therapy', http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Kowalski_1sur5.mp4 ...
- O. Adam, Paris, 'Estimation de Densité de Population de Baleines par Analyse de leurs Chants', http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Adam.mp4
- X. Halkias, New York, 'Detection and Tracking of Dolphin Vocalizations', http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Halkias.mp4
- J. Razik, Toulon, 'Sparse coding: from speech to whales', http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Razik.mp4
- H. Glotin, Toulon, 'Suivi & reconstruction du comportement de cétacés par acoustique passive'
PS: ERMITES 2012 will focus on vision (Y. Lecun, Y. Thorpe, P. Courrieu, M. Perreira, M. Van Gerven, ...).
5-1-10 | Zeki Majeed Hassan and Barry Heselwood (Eds), Instrumental Studies in Arabic Phonetics
5-2-1 | Nominations for the Antonio Zampolli Prize (ELRA)
The ELRA Board has created a prize to honour the memory of its first President, Professor Antonio Zampolli, a pioneer and visionary scientist.
5-2-2 | ELRA - Language Resources Catalogue - Update (2012-01)
5-2-3 | LDC Newsletter (January 2012)
In this newsletter:
- LDC Celebrates its 20th Anniversary!
- 2012 LDC Survey – Be on the Lookout!
- Membership Discounts for MY 2012 Still Available
- New publications: LDC2012S01
LDC Celebrates its 20th Anniversary!
2012 marks LDC's 20th anniversary year (officially on April 15), but this is cause for a yearlong celebration! Since our founding in 1992 as a data repository and language resource distribution center, our online catalog has grown to include over 500 databases in 60 languages, licensed by over 3000 organizations from 80 different nations. This data has been made available through donations, funded projects at LDC and elsewhere, community initiatives, and LDC's own resources, an indication of the collective strength of this consortium. LDC has also evolved from an organization that shares language resources into one at the forefront of language technology research, including the development of new data resources, software tools, and standards and best practices.
2012 LDC Survey – Be on the Lookout!
It's been four years since our last survey of LDC members and data licensees, and we would like to again ask you to share your views on LDC and its language resources, as well as your thoughts on data distribution in general and on the impact of social media on language-related research and technology development. These topics are particularly timely as LDC enters its 20th anniversary year. The 2012 LDC Survey will be sent to every person and organization that licensed LDC data and/or joined LDC as a member during the period from 2009 through 2011. Those who complete the survey on or before February 7, 2012 will make their organization eligible for a $500 benefit to be applied to any corpus or membership purchase in 2012. LDC will conduct a blind drawing and one lucky winner will be chosen from the pool of respondents. Many thanks for your continued support and for your participation in the 2012 Survey!

Membership Discounts for MY 2012 Still Available
If you are considering joining for Membership Year 2012 (MY2012), there is still time to save on membership fees. Any organization which joins or renews its membership for 2012 through Thursday, March 1, 2012 is entitled to a 5% discount on membership fees. Organizations which held membership for MY2011 can receive a 10% discount on fees, provided they renew prior to March 1, 2012. For further information on pricing, please consult our Announcements page or contact LDC.

New Publications
(1) 2006 NIST Speaker Recognition Evaluation Test Set Part 2 was developed by LDC and the National Institute of Standards and Technology (NIST). It contains 568 hours of conversational telephone and microphone speech in English, Arabic, Bengali, Chinese, Farsi, Hindi, Korean, Russian, Spanish, Thai and Urdu, with associated English transcripts, used as test data in the NIST-sponsored 2006 Speaker Recognition Evaluation (SRE). The task of the 2006 SRE was speaker detection, that is, to determine whether a specified speaker is speaking during a given segment of conversational telephone speech. The task was divided into 15 distinct tests involving one of five training conditions and one of four test conditions. Further information about the test conditions and additional documentation is available at the NIST web site for the 2006 SRE and within the 2006 SRE Evaluation Plan. LDC has previously published the 2006 NIST Speaker Recognition Evaluation Training Set and the 2006 NIST Speaker Recognition Evaluation Test Set Part 1. The speech data in this release was collected by LDC as part of the Mixer project, in particular Mixer Phases 1, 2 and 3. The Mixer project supports the development of robust speaker recognition technology by providing carefully collected and audited speech from a large pool of speakers recorded simultaneously across numerous microphones and in different communicative situations and/or in multiple languages. The data is mostly English speech, but includes some speech in Arabic, Bengali, Chinese, Farsi, Hindi, Korean, Russian, Spanish, Thai and Urdu. The telephone speech segments are multi-channel data collected simultaneously from a number of auxiliary microphones.
The files are organized into four types: two-channel excerpts of approximately 10 seconds; two-channel conversations of approximately 5 minutes; summed-channel conversations, also of approximately 5 minutes; and two-channel conversations in which the usual telephone speech is replaced by auxiliary microphone data in the putative target speaker channel, also approximately five minutes in length. English-language transcripts in .ctm format were produced using an automatic speech recognition (ASR) system. 2006 NIST Speaker Recognition Evaluation Test Set Part 2 is distributed on seven DVD-ROMs. 2012 Subscription Members will automatically receive two copies of this corpus. 2012 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$2000.

(2) TORGO Database of Dysarthric Articulation was developed by the University of Toronto's departments of Computer Science and Speech-Language Pathology in collaboration with the Holland Bloorview Kids Rehabilitation Hospital in Toronto, Canada. It contains approximately 23 hours of English speech data, accompanying transcripts and documentation from 8 speakers (5 males, 3 females) with cerebral palsy (CP) or amyotrophic lateral sclerosis (ALS) and from 7 speakers (4 males, 3 females) in a non-dysarthric control group. CP and ALS are two causes of dysarthria, which results from disruptions in the neuro-motor interface that distort motor commands to the vocal articulators, producing atypical and relatively unintelligible speech in most cases. The TORGO database is primarily a resource for developing automatic speech recognition (ASR) models suited to the needs of people with dysarthria, but it is also applicable to non-dysarthric speech.
The inability of modern ASR to effectively understand dysarthric speech is a problem, since the more general physical disabilities often associated with the condition can make other forms of computer input, such as keyboards or touch screens, difficult to use. The data consists of aligned acoustics and measured 3D articulatory features, recorded using the 3D AG500 electro-magnetic articulograph (EMA) system (Carstens Medizinelektronik GmbH, Lenglern, Germany) with fully automated calibration. This system allows 3D recordings of articulatory movements inside and outside the vocal tract, providing a detailed window on the nature and direction of speech-related activity. All subjects read text consisting of non-words, short words and restricted sentences from a 19-inch LCD screen. The restricted sentences included 162 sentences from the sentence intelligibility section of the Assessment of Intelligibility of Dysarthric Speech (Yorkston & Beukelman, 1981) and 460 sentences derived from the TIMIT database. The unrestricted sentences were elicited by asking participants to spontaneously describe 30 images of interesting situations taken randomly from Webber Photo Cards - Story Starters (Webber, 2005), designed to prompt students to tell or write a story. Data is organized by speaker and by the session in which each speaker recorded data. Each speaker's directory contains 'Session' directories, which encapsulate the data recorded in the corresponding visit, and occasionally a 'Notes' directory, which can include Frenchay assessments (a test for the measurement, description and diagnosis of dysarthria), notes about sessions (e.g., sensor errors), and other relevant notes. TORGO Database of Dysarthric Articulation is distributed on 4 DVD-ROMs. 2012 Subscription Members will automatically receive two copies of this corpus. 2012 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$1200.
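The .ctm transcripts mentioned in the SRE entry above are plain whitespace-delimited text, one recognized word per line: waveform id, channel, start time, duration, word and an optional confidence score (the layout used by NIST's sclite scoring tools). A minimal reader, written here as an illustration rather than as code shipped with the corpus, might look like this:

```python
def parse_ctm(lines):
    """Parse CTM lines into (file, channel, start, duration, word, confidence) tuples.

    Lines starting with ';;' are comments; the confidence field is optional.
    """
    tokens = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith(';;'):
            continue  # skip blanks and comment lines
        parts = line.split()
        fname, channel = parts[0], parts[1]
        start, dur = float(parts[2]), float(parts[3])
        word = parts[4]
        conf = float(parts[5]) if len(parts) > 5 else None
        tokens.append((fname, channel, start, dur, word, conf))
    return tokens
```

Because each token carries its own start time and duration, such transcripts can be aligned back to the audio segments without any further bookkeeping.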
5-2-4 | Speechocean January 2012 update
Speechocean - Language Resource Catalogue - New Releases (01-2012)
Speechocean, a global provider of language resources and data services, has more than 200 large-scale databases available in 80+ languages and accents, covering the fields of text-to-speech, automatic speech recognition, text, machine translation, web search, videos, images, etc.
Speechocean is glad to announce that more speech resources have been released:
Chinese and English Mixing Speech Synthesis Database (Female)
This Chinese Mandarin TTS speech corpus contains the read speech of a native Chinese female professional broadcaster, recorded in a studio with high SNR (>35 dB) over two channels (an AKG C4000B microphone and an electroglottography (EGG) sensor). All speech data are segmented and labelled at the phone level. A pronunciation lexicon and pitch extracted from the EGG signal can also be provided on demand.
France French Speech Recognition Corpus (desktop) – 50 speakers
This France French desktop speech recognition database was collected by SpeechOcean in France. It is one of the databases in our Speech Data Desktop (SDD) project, which currently covers 30 languages. It contains the voices of 50 different native speakers, balanced by age (mainly 16-30, 31-45, 46-60), gender (28 males, 22 females) and regional accent. The script was specially designed to provide material for both training and testing of many classes of speech recognition applications. Each speaker recorded 500 utterances in a quiet office environment through two professional microphones. Each utterance is stored in 44.1 kHz, 16-bit uncompressed PCM format and accompanied by an ASCII SAM label file containing the relevant descriptive information. A pronunciation lexicon with a phonemic transcription in SAMPA is also included.
UK English Speech Recognition Corpus (desktop) – 50 speakers
This UK English desktop speech recognition database was collected by SpeechOcean in England. It is one of the databases in our Speech Data Desktop (SDD) project, which currently covers 30 languages. It contains the voices of 50 different native speakers, balanced by age (mainly 16-30, 31-45, 46-60), gender (28 males, 22 females) and regional accent. The script was specially designed to provide material for both training and testing of many classes of speech recognition applications. Each speaker recorded 500 utterances in a quiet office environment through two professional microphones. Each utterance is stored in 44.1 kHz, 16-bit uncompressed PCM format and accompanied by an ASCII SAM label file containing the relevant descriptive information. A pronunciation lexicon with a phonemic transcription in SAMPA is also included.
US English Speech Recognition Corpus (desktop) – 50 speakers
This US English desktop speech recognition database was collected by SpeechOcean in America. It is one of the databases in our Speech Data Desktop (SDD) project, which currently covers 30 languages. It contains the voices of 50 different native speakers, balanced by age (mainly 16-30, 31-45, 46-60), gender (25 males, 25 females) and regional accent. The script was specially designed to provide material for both training and testing of many classes of speech recognition applications. Each speaker recorded 500 utterances in a quiet office environment through two professional microphones. Each utterance is stored in 44.1 kHz, 16-bit uncompressed PCM format and accompanied by an ASCII SAM label file containing the relevant descriptive information. A pronunciation lexicon with a phonemic transcription in SAMPA is also included.
Italian Speech Recognition Corpus (desktop) – 50 speakers
This Italian desktop speech recognition database was collected by SpeechOcean in Italy. It is one of the databases in our Speech Data Desktop (SDD) project, which currently covers 30 languages. It contains the voices of 50 different native speakers, balanced by age (mainly 16-30, 31-45, 46-60), gender (23 males, 27 females) and regional accent. The script was specially designed to provide material for both training and testing of many classes of speech recognition applications. Each speaker recorded 500 utterances in a quiet office environment through two professional microphones. Each utterance is stored in 44.1 kHz, 16-bit uncompressed PCM format and accompanied by an ASCII SAM label file containing the relevant descriptive information. A pronunciation lexicon with a phonemic transcription in SAMPA is also included.
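The catalogue entries above describe each utterance as 44.1 kHz, 16-bit uncompressed PCM. Assuming headerless, little-endian samples (the entries state neither a header format nor a byte order, so both are assumptions to verify against the actual files), decoding in Python is a short wrapper around struct:

```python
import struct

SAMPLE_RATE = 44100  # 44.1 kHz, per the catalogue description

def read_raw_pcm16(data, little_endian=True):
    """Decode headerless 16-bit PCM bytes into a list of integer samples."""
    fmt = '<' if little_endian else '>'
    n = len(data) // 2  # two bytes per 16-bit sample
    return list(struct.unpack(fmt + str(n) + 'h', data[:n * 2]))

def duration_seconds(samples, rate=SAMPLE_RATE):
    """Utterance length in seconds for a mono sample list."""
    return len(samples) / rate
```

If the files turn out to carry a WAV/RIFF header instead, the standard-library wave module should be used rather than raw decoding.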
For more information about our databases and services, please visit our website www.speechocean.com or our on-line catalogue at http://www.speechocean.com/en-Product-Catalogue/Index.html. If you have any inquiry regarding our databases and services, please feel free to contact us:
Xianfeng Cheng: Chengxianfeng@speechocean.com
Marta Gherardi: Marta@speechocean.com
5-2-5 | ELDA Distribution Campaign 2011
5-3-1 | Matlab toolbox for glottal analysis
I am pleased to announce that we have made a Matlab toolbox for glottal analysis available on the web at:
http://tcts.fpms.ac.be/~drugman/Toolbox/
This toolbox includes the following modules:
- Pitch and voiced-unvoiced decision estimation
- Speech polarity detection
- Glottal closure instant determination
- Glottal flow estimation
I am also glad to share my PhD thesis, entitled 'Glottal Analysis and its Applications': http://tcts.fpms.ac.be/~drugman/files/DrugmanPhDThesis.pdf
where you will find applications in speech synthesis, speaker recognition, voice pathology detection, and expressive speech analysis.
I hope this will be useful to you, and look forward to seeing you soon,
Thomas Drugman
| |||||
5-3-2 | ROCme!: a free tool for audio corpora recording and management
ROCme!: new free software for recording and managing audio corpora.