
ISCApad #165

Saturday, March 10, 2012 by Chris Wellekens

5 Resources
5-1 Books
5-1-1 Robert M. Gray, Linear Predictive Coding and the Internet Protocol

Linear Predictive Coding and the Internet Protocol, by Robert M. Gray, is a special edition hardback book from Foundations and Trends in Signal Processing (FnT SP). The book brings together two forthcoming issues of FnT SP, the first being a survey of LPC, the second a unique history of realtime digital speech on packet networks.

 

Volume 3, Issue 3                                                                                                                                                                                                 

A Survey of Linear Predictive Coding: Part 1 of LPC and the IP                                                                                                                                  

By Robert M. Gray (Stanford University)                                                                                                                                                                  

http://www.nowpublishers.com/product.aspx?product=SIG&doi=2000000029                                                                                                             

 

Volume 3, Issue  4

 

A History of Realtime Digital Speech on Packet Networks: Part 2 of LPC and the IP                                                                                                     

By Robert M. Gray (Stanford University)                                                                                                                                                                  

http://www.nowpublishers.com/product.aspx?product=SIG&doi=2000000036                                                                                                            

 

The links above will take you to the article abstracts.
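As a pointer for readers new to the topic: the survey's central object is the short-term linear predictor, which models each speech sample as a linear combination of the previous p samples. The sketch below is not taken from the book; it is a generic textbook construction in Python (assuming numpy) that computes order-p LPC coefficients for one frame via the autocorrelation method and the Levinson-Durbin recursion.

```python
import numpy as np

def lpc(frame, order):
    """LPC via the autocorrelation method and Levinson-Durbin recursion.
    Returns the predictor polynomial a (a[0] = 1, so that
    s_hat[n] = -sum_{k=1..order} a[k] * s[n-k]) and the residual energy."""
    # Autocorrelation r[0..order]
    r = np.array([np.dot(frame[:len(frame) - k], frame[k:])
                  for k in range(order + 1)])
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1:0:-1])   # forward prediction of r[i]
        k = -acc / err                               # reflection coefficient
        a[1:i] = a[1:i] + k * a[i - 1:0:-1]          # update a[1..i-1]
        a[i] = k
        err *= (1.0 - k * k)                         # shrink residual energy
    return a, err

# Toy usage: a 25 ms frame of a synthetic two-component "voiced" signal
fs = 8000
t = np.arange(int(0.025 * fs)) / fs
frame = np.sin(2 * np.pi * 120 * t) + 0.3 * np.sin(2 * np.pi * 720 * t)
frame *= np.hamming(len(frame))
a, err = lpc(frame, order=10)
print(np.round(a, 3), err)
```

Practical coders quantize transformed versions of these coefficients (for example reflection coefficients or line spectral parameters) rather than the a vector directly.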


5-1-2 M. Embarki and M. Ennaji, Modern Trends in Arabic Dialectology

Modern Trends in Arabic Dialectology,
M. Embarki & M. Ennaji (eds.), Trenton (USA): The Red Sea Press.

Contents
Introduction
Mohamed Embarki and Moha Ennaji p. vii

Part I: Theoretical and Historical Perspectives and Methods in Arabic Dialectology
Chapter 1: Arabic Dialects: A Discussion
Janet C. E. Watson p. 3
Chapter 2: The Emergence of Western Arabic: A Likely Consequence of Creolization
Federico Corriente p. 39
Chapter 3: Acoustic Cues for the Classification of Arabic Dialects
Mohamed Embarki p. 47
Chapter 4: Variation and Attitudes: A Sociolinguistic Analysis of the Qaaf
Maher Bahloul p. 69

Part II: Eastern Arabic Dialects
Chapter 5: Arabic Bedouin Dialects and their Classification
Judith Rosenhouse p. 97
Chapter 6: Evolution of Expressive Structures in Egyptian Arabic
Amr Helmy Ibrahim p. 121
Chapter 7: Ḥaḍramī Arabic Lexicon
Abdullah Hassan Al-Saqqaf p. 139

Part III: Western Arabic Dialects
Chapter 8: Dialectal Variation in Moroccan Arabic
Moha Ennaji p. 171
Chapter 9: Formation and Evolution of Andalusi Arabic and its Imprint on Modern Northern Morocco
Ángeles Vicente p. 185
Chapter 10: The Phonetic Implementation of Falling Pitch Accents in Dialectal Maltese: A Preliminary Study of the Intonation of Gozitan Żebbuġi
Alexandra Vella p. 211
Index p. 239




5-1-3 Gokhan Tur, Renato De Mori, Spoken Language Understanding: Systems for Extracting Semantic Information from Speech

Title: Spoken Language Understanding: Systems for Extracting Semantic Information from Speech

Editors: Gokhan Tur and Renato De Mori

Web: http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470688246.html

Brief Description:

Spoken language understanding (SLU) is an emerging field between speech and language processing, investigating human/machine and human/human communication by leveraging technologies from signal processing, pattern recognition, machine learning and artificial intelligence. SLU systems are designed to extract the meaning from speech utterances, and their applications are vast, from voice search in mobile devices to meeting summarization, attracting interest from both commercial and academic sectors.

Both human/machine and human/human communications can benefit from the application of SLU, using differing tasks and approaches to better understand and utilize such communications. This book covers the state-of-the-art approaches for the most popular SLU tasks with chapters written by well-known researchers in the respective fields. Key features include:

Presents a fully integrated view of the two distinct disciplines of speech processing and language processing for SLU tasks.

Defines what is possible today for SLU as an enabling technology for enterprise (e.g., customer care centers or company meetings), and consumer (e.g., entertainment, mobile, car, robot, or smart environments) applications and outlines the key research areas.

Provides a unique source of distilled information on methods for computer modeling of semantic information in human/machine and human/human conversations.

This book can be successfully used for graduate courses in electronics engineering, computer science or computational linguistics. Moreover, technologists interested in processing spoken communications will find it a useful source of collated information on the topic, drawn from the two distinct disciplines of speech processing and language processing under the new area of SLU.
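To make the notion of 'extracting semantic information' concrete, here is a deliberately tiny, rule-based illustration of one SLU step (intent detection and slot filling) applied to an ASR hypothesis. It is not taken from the book; the intents, patterns and utterances are invented for the sketch, and real systems use the statistical methods the chapters describe.

```python
import re

# Toy grammar: intent name -> regular expression with named slot groups.
# Both the intents and the patterns are invented for this illustration.
GRAMMAR = {
    "set_alarm": re.compile(
        r"\b(?:set|wake me up).*?\b(?P<time>\d{1,2}(?::\d{2})?\s?(?:am|pm))\b"),
    "find_place": re.compile(
        r"\bfind (?:a |the )?(?P<place>[a-z ]+?) near (?P<location>[a-z ]+)\b"),
}

def understand(asr_hypothesis: str) -> dict:
    """Map a lower-cased ASR hypothesis to an intent label and slot values."""
    text = asr_hypothesis.lower().strip()
    for intent, pattern in GRAMMAR.items():
        match = pattern.search(text)
        if match:
            return {"intent": intent, "slots": match.groupdict()}
    return {"intent": "unknown", "slots": {}}

print(understand("wake me up at 7:30 am"))
# {'intent': 'set_alarm', 'slots': {'time': '7:30 am'}}
print(understand("find a coffee shop near the station"))
# {'intent': 'find_place', 'slots': {'place': 'coffee shop', 'location': 'the station'}}
```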


5-1-4 Jody Kreiman, Diana Van Lancker Sidtis, Foundations of Voice Studies: An Interdisciplinary Approach to Voice Production and Perception

Foundations of Voice Studies: An Interdisciplinary Approach to Voice Production and Perception
Jody Kreiman, Diana Van Lancker Sidtis
ISBN: 978-0-631-22297-2
Hardcover
512 pages
May 2011, Wiley-Blackwell

Foundations of Voice Studies provides a comprehensive description and analysis of the multifaceted role that voice quality plays in human existence.

•Offers a unique interdisciplinary perspective on all facets of voice perception, illustrating why listeners hear what they do and how they reach conclusions based on voice quality
•Integrates voice literature from a multitude of sources and disciplines
•Supplemented with practical and approachable examples, including a companion website with sound files, available on publication at www.wiley.com/go/voicestudies
•Explores the choice of various voices in advertising and broadcasting, and voice perception in singing voices and forensic applications
•Provides a straightforward and thorough overview of vocal physiology and control



5-1-5 G. Nick Clements and Rachid Ridouane, Where Do Phonological Features Come From?

 

Where Do Phonological Features Come From?

Edited by G. Nick Clements and Rachid Ridouane

CNRS & Sorbonne-Nouvelle

This volume offers a timely reconsideration of the function, content, and origin of phonological features, in a set of papers that is theoretically diverse yet thematically strongly coherent. Most of the papers were originally presented at the International Conference 'Where Do Features Come From?' held at the Sorbonne University, Paris, October 4-5, 2007. Several invited papers are included as well. The articles discuss issues concerning the mental status of distinctive features, their role in speech production and perception, the relation they bear to measurable physical properties in the articulatory and acoustic/auditory domains, and their role in language development. Multiple disciplinary perspectives are explored, including those of general linguistics, phonetic and speech sciences, and language acquisition. The larger goal was to address current issues in feature theory and to take a step towards synthesizing recent advances in order to present a current 'state of the art' of the field.

 

 


5-1-6 Dorothea Kolossa and Reinhold Haeb-Umbach: Robust Speech Recognition of Uncertain or Missing Data
Title: Robust Speech Recognition of Uncertain or Missing Data
Editors: Dorothea Kolossa and Reinhold Haeb-Umbach
Publisher: Springer
Year: 2011
ISBN 978-3-642-21316-8
Link:
http://www.springer.com/engineering/signals/book/978-3-642-21316-8?detailsPage=authorsAndEditors

Automatic speech recognition suffers from a lack of robustness with
respect to noise, reverberation and interfering speech. The growing
field of speech recognition in the presence of missing or uncertain
input data seeks to ameliorate those problems by using not only a
preprocessed speech signal but also an estimate of its reliability to
selectively focus on those segments and features that are most reliable
for recognition. This book presents the state of the art in recognition
in the presence of uncertainty, offering examples that utilize
uncertainty information for noise robustness, reverberation robustness,
simultaneous recognition of multiple speech signals, and audiovisual
speech recognition.
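As a minimal illustration of the reliability idea (this is not code from the book): with diagonal-covariance Gaussian acoustic models, a binary reliability mask lets the recognizer marginalize out unreliable feature dimensions simply by dropping their terms from the log-likelihood. The Python sketch below, assuming numpy, shows this hard-mask variant; full uncertainty decoding instead integrates over a posterior distribution of the clean features.

```python
import numpy as np

def masked_gaussian_loglik(x, mask, mean, var):
    """Log-likelihood of observation x under a diagonal Gaussian,
    marginalizing over dimensions flagged unreliable (mask == 0).
    With a diagonal covariance, marginalization amounts to dropping
    those dimensions from the sum."""
    x, mask, mean, var = map(np.asarray, (x, mask, mean, var))
    reliable = mask.astype(bool)
    d = reliable.sum()
    diff = x[reliable] - mean[reliable]
    return -0.5 * (d * np.log(2.0 * np.pi)
                   + np.sum(np.log(var[reliable]))
                   + np.sum(diff ** 2 / var[reliable]))

# Toy example: 4-dimensional feature, dimension 2 judged unreliable
x    = np.array([1.2, -0.3, 9.0, 0.1])   # 9.0 is a corrupted value
mask = np.array([1, 1, 0, 1])            # reliability estimate from a front end
mean = np.zeros(4)
var  = np.ones(4)
print(masked_gaussian_loglik(x, mask, mean, var))
```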

The book is appropriate for scientists and researchers in the field of
speech recognition who will find an overview of the state of the art in
robust speech recognition, professionals working in speech recognition
who will find strategies for improving recognition results in various
conditions of mismatch, and lecturers of advanced courses on speech
processing or speech recognition who will find a reference and a
comprehensive introduction to the field. The book assumes an
understanding of the fundamentals of speech recognition using Hidden
Markov Models.

5-1-7 Mohamed Embarki and Christelle Dodane: La coarticulation

LA COARTICULATION

 

Mohamed Embarki et Christelle Dodane

Des indices à la représentation

Speech is made up of complex articulatory gestures that overlap in space and time. These overlaps, conceptualized under the term coarticulation, spare no articulator: they can be traced in the movements of the jaw, the lips, the tongue, the velum and the vocal folds. Coarticulation is also expected by the listener, and coarticulated segments are better perceived. It plays a part in the cognitive and linguistic processes of speech encoding and decoding. Much more than a simple process, coarticulation is a structured research field with its own concepts and models. This collective volume brings together original contributions by international researchers who address coarticulation from motor, acoustic, perceptual and linguistic points of view. It is the first book on the subject published in French and the first to explore it across different languages.

 

 

Collection : Langue & Parole, L'Harmattan

ISBN : 978-2-296-55503-7 • 25 € • 260 pages

 

 

Mohamed Embarki

is an associate professor (maître de conférences-HDR) in phonetics at the University of Franche-Comté (Besançon) and a member of the Laseldi laboratory (E.A. 2281). His work focuses on the (co)articulatory and acoustic aspects of modern Arabic dialects and on their sociophonetic motivations.

Christelle Dodane

is an associate professor (maître de conférences) in phonetics at Paul-Valéry University (Montpellier 3) and is affiliated with the DIPRALANG laboratory (E.A. 739). Her research focuses on language communication in young children (12-36 months), in particular on the role of prosody in the transition from the pre-linguistic to the linguistic level, in the construction of early syntax, and in child-directed speech.


5-1-8 Ben Gold, Nelson Morgan, Dan Ellis: Speech and Audio Signal Processing: Processing and Perception of Speech and Music [Digital]

Speech and Audio Signal Processing: Processing and Perception of Speech and Music [2nd edition], by Ben Gold, Nelson Morgan, and Dan Ellis

Digital copy:  http://www.amazon.com/Speech-Audio-Signal-Processing-Perception/dp/product-description/1118142888

Hardcopy available: http://www.amazon.com/Speech-Audio-Signal-Processing-Perception/dp/0470195363/ref=sr_1_1?s=books&ie=UTF8&qid=1319142964&sr=1-1


5-1-9 Video Proceedings ERMITES 2011
The video proceedings of the ERMITES 2011 workshop, 'Décomposition Parcimonieuse, Contraction et Structuration pour l'Analyse de Scènes', are online at: http://glotin.univ-tln.fr/ERMITES11

They include (in .mpg format) some twenty hours of lectures by:

Y. Bengio, Montréal
    «Apprentissage Non-Supervisé de Représentations Profondes »
     http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Y_Bengio_1sur4.mp4 ...

S. Mallat, Paris
    « Scattering & Matching Pursuit for Acoustic Sources Separation »
     http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Mallat_1sur3.mp4 ...

J.-P. Haton, Nancy
    « Analyse de Scène et Reconnaissance Stochastique de la Parole »
     http://lsis.univ-tln.fr/~glotin/ERMITES_2011_JP_Haton_1sur4.mp4 ...

M. Kowalski, Paris
    « Sparsity and structure for audio signal: a *-lasso therapy »
     http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Kowalski_1sur5.mp4 ...

O. Adam, Paris
    « Estimation de Densité de Population de Baleines par Analyse de
leurs Chants »
     http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Adam.mp4

X. Halkias, New York
    « Detection and Tracking of Dolphin Vocalizations »
     http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Halkias.mp4

J. Razik, Toulon
    « Sparse coding : from speech to whales »
     http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Razik.mp4

H. Glotin, Toulon
   « Suivi & reconstruction du comportement de cétacés par acoustique passive »

P.S.: ERMITES 2012 will focus on vision (Y. LeCun, Y. Thorpe, P. Courrieu, M. Perreira, M. Van Gerven, ...)

5-1-10 Zeki Majeed Hassan and Barry Heselwood (Eds): Instrumental Studies in Arabic Phonetics

Instrumental Studies in Arabic Phonetics
Edited by Zeki Majeed Hassan and Barry Heselwood
University of Gothenburg / University of Leeds
[Current Issues in Linguistic Theory, 319] 2011. xii, 365 pp.
Publishing status: Available
Hardbound – Available
ISBN 978 90 272 4837 4 | EUR 110.00 | USD 165.00
e-Book – Forthcoming
ISBN 978 90 272 8322 1 | EUR 110.00 | USD 165.00
Brought together in this volume are fourteen studies using a range of modern instrumental methods – acoustic and articulatory – to investigate the phonetics of several North African and Middle Eastern varieties of Arabic. Topics covered include syllable structure, quantity, assimilation, guttural and emphatic consonants and their pharyngeal and laryngeal mechanisms, intonation, and language acquisition. In addition to presenting new data and new descriptions and interpretations, a key aim of the volume is to demonstrate the depth of objective analysis that instrumental methods can enable researchers to achieve. A special feature of many chapters is the use of more than one type of instrumentation to give different perspectives on phonetic properties of Arabic speech which have fascinated scholars since medieval times. The volume will be of interest to phoneticians, phonologists and Arabic dialectologists, and provides a link between traditional qualitative accounts of spoken Arabic and modern quantitative methods of instrumental phonetic analysis.

Acknowledgements  vii – viii
List of contributors  ix – x
Transliteration and transcription symbols for Arabic  xi – xii
Introduction
Barry Heselwood and Zeki Majeed Hassan 1 – 26
Part I. Issues in syntagmatic structure
Preliminary study of Moroccan Arabic word-initial consonant clusters and syllabification using electromagnetic articulography
Adamantios I. Gafos, Philip Hoole and Chakir Zeroual 27 – 46
An acoustic phonetic study of quantity and quantity complementarity in Swedish and Iraqi Arabic
Zeki Majeed Hassan 47 – 62
Assimilation of /l/ to /r/ in Syrian Arabic: An electropalatographic and acoustic study
Barry Heselwood, Sara Howard and Rawya Ranjous 63 – 98
Part II. Guttural consonants
A study of the laryngeal and pharyngeal consonants in Jordanian Arabic using nasoendoscopy, videofluoroscopy and spectrography
Barry Heselwood and Feda Al-Tamimi 99
A phonetic study of guttural laryngeals in Palestinian Arabic using laryngoscopic and acoustic analysis
Kimary N. Shahin 129 – 140
Airflow and acoustic modelling of pharyngeal and uvular consonants in Moroccan Arabic
Mohamed Yeou and Shinji Maeda 141 – 162
Part III. Emphasis and coronal consonants
Nasoendoscopic, videofluoroscopic and acoustic study of plain and emphatic coronals in Jordanian Arabic
Feda Al-Tamimi and Barry Heselwood 163 – 192
Acoustic and electromagnetic articulographic study of pharyngealisation: Coarticulatory effects as an index of stylistic and regional variation in Arabic
Mohamed Embarki, Slim Ouni, Mohamed Yeou, M. Christian Guilleminot and Sallal Al-Maqtari 193 – 216
Investigating the emphatic feature in Iraqi Arabic: Acoustic and articulatory evidence of coarticulation
Zeki Majeed Hassan and John H. Esling 217 – 234
Glottalisation and neutralisation in Yemeni Arabic and Mehri: An acoustic study
Janet C.E. Watson and Alex Bellem 235 – 256
The phonetics of localising uvularisation in Ammani-Jordanian Arabic: An acoustic study
Bushra Adnan Zawaydeh and Kenneth de Jong 257 – 276
EMA, endoscopic, ultrasound and acoustic study of two secondary articulations in Moroccan Arabic: Labial-velarisation vs. emphasis
Chakir Zeroual, John H. Esling and Philip Hoole 277 – 298
Part IV. Intonation and acquisition
Acoustic cues to focus and givenness in Egyptian Arabic
Sam Hellmuth 299 – 324
Acquisition of Lebanese Arabic and Yorkshire English /l/ by bilingual and monolingual children: A comparative spectrographic study
Ghada Khattab 325 – 354
Appendix: Phonetic instrumentation used in the studies  355 – 358


5-2 Database
5-2-1 ELRA - Language Resources Catalogue - Update (2012-01)

*****************************************************************
ELRA - Language Resources Catalogue - Update
*****************************************************************

ELRA is happy to announce that 14 new Speech Resources are now available in its catalogue.

ELRA-S0324 Catalan-SpeechDat For the Fixed Telephone Network Database
This speech database contains the recordings of 2000 Catalan speakers who called from fixed telephones and who were recorded over the fixed PSTN using an ISDN-BRI interface. Each speaker uttered around 50 read and spontaneous items. The speech database follows the specifications made within the SpeechDat (II) project. The database was validated by UVIGO. The Catalan-SpeechDat for the Fixed Telephone Network Database was funded by the Catalan Government.
For more information, see: http://catalog.elra.info/product_info.php?cPath=37_38&products_id=1146

ELRA-S0325 Catalan-SpeechDat for the Mobile Telephone Network Database
This speech database contains the recordings of 2000 Catalan speakers who called from GSM telephones and who were recorded over the fixed PSTN using an ISDN-BRI interface. Each speaker uttered around 50 read and spontaneous items. The speech database follows the specifications made within the SpeechDat (II) project. The database was validated by UVIGO. The Catalan-SpeechDat for the Mobile Telephone Network Database was funded by the Catalan Government.
For more information, see: http://catalog.elra.info/product_info.php?cPath=37_38&products_id=1147

ELRA-S0326 Catalan SpeechDat-Car database
The Catalan SpeechDat-Car database contains the in-car recordings of 300 speakers who each uttered around 120 read and spontaneous items. Each speaker recorded two sessions. Recordings were made through 4 different channels via in-car microphones (1 close-talk microphone, 3 far-talk microphones). The 300 Catalan speakers were selected from 5 different dialectal regions and are balanced in gender and age groups. The database was validated by UVIGO. The Catalan SpeechDat-Car Database was funded by the Catalan Government.
For more information, see: http://catalog.elra.info/product_info.php?cPath=37_39&products_id=1148

ELRA-S0327 Catalan Speecon database
The Catalan Speecon database comprises the recordings of 550 adult Catalan speakers who uttered over 290 items (read and spontaneous). The data were recorded over 4 microphone channels in 4 recording environments (office, entertainment, car, public place). The speech database follows the specifications made within the EU-funded Speecon project. The database was validated by UVIGO. The Catalan Speecon Database was funded by the Catalan Government.
For more information, see: http://catalog.elra.info/product_info.php?cPath=37_39&products_id=1149

ELRA-S0328 Spanish EUROM.1
EUROM1 is a multilingual European speech database. It contains over 60 speakers per language who pronounced numbers, sentences, isolated words, etc., using a close-talking microphone in an anechoic room. Equivalent corpora exist for each of the European languages, with the same number of speakers selected in the same way and recorded in the same conditions with common file formats.
For more information, see: http://catalog.elra.info/product_info.php?cPath=37_39&products_id=1150

ELRA-S0329 Emotional speech synthesis database
This database contains the recordings of one male and one female Spanish professional speaker recorded in a noise-reduced room. It consists of recordings and annotations of read text material in neutral style plus six MPEG expressions, all in fast, slow, soft and loud speech styles. The text material is composed of 184 items including phonetically balanced sentences, digits and isolated words. The text material was the same for all the modes and styles, giving a total of 3h 59min of recorded speech for the male speaker and 3h 53min for the female speaker. The Emotional speech synthesis database was created within the scope of the EU-funded Interface project.
For more information, see: http://catalog.elra.info/product_info.php?cPath=37_39&products_id=1151

ELRA-S0330 FESTCAT Catalan TTS baseline male speech database
This database contains the recordings of one male Catalan professional speaker recorded in a noise-reduced room simultaneously through a close-talk microphone, a mid-distance microphone and a laryngograph channel. It consists of the recordings and annotations of read text material of approximately 10 hours of speech for baseline applications (Text-to-Speech systems). The FESTCAT Catalan TTS Baseline Male Speech Database was created within the scope of the FESTCAT project, funded by the Catalan Government.
For more information, see: http://catalog.elra.info/product_info.php?cPath=37_39&products_id=1152

ELRA-S0331 FESTCAT Catalan TTS baseline female speech database
This database contains the recordings of one female Catalan professional speaker recorded in a noise-reduced room simultaneously through a close-talk microphone, a mid-distance microphone and a laryngograph channel. It consists of the recordings and annotations of read text material of approximately 10 hours of speech for baseline applications (Text-to-Speech systems). The FESTCAT Catalan TTS Baseline Female Speech Database was created within the scope of the FESTCAT project, funded by the Catalan Government.
For more information, see: http://catalog.elra.info/product_info.php?cPath=37_39&products_id=1153

ELRA-S0332 FESTCAT Catalan TTS baseline speech database - 8 speakers
This database contains the recordings of four female and four male Catalan professional speakers recorded in a noise-reduced room simultaneously through a close talk microphone, a mid distance microphone and a laryngograph signal. It consists of the recordings and annotations of read text material of approximately 1 hour of speech per speaker for baseline applications (Text-to-Speech systems). The FESTCAT Catalan TTS baseline speech database - 8 speakers was created within the scope of the FESTCAT project funded by the Catalan Government.
For more information, see: http://catalog.elra.info/product_info.php?cPath=37_39&products_id=1154

ELRA-S0333 Spanish Festival HTS models - male speech
This database contains the Festival HTS models trained with 10h of speech from the TC-STAR Spanish Baseline Male Speech Database (ELRA-S0310).
For more information, see: http://catalog.elra.info/product_info.php?cPath=37_39&products_id=1155

ELRA-S0334 Spanish Festival HTS models - female speech
This database contains the Festival HTS models trained with 10h of speech from the TC-STAR Spanish Baseline Female Speech Database (ELRA-S0309).
For more information, see: http://catalog.elra.info/product_info.php?cPath=37_39&products_id=1156

ELRA-S0335 Bilingual (Spanish-English) Speech synthesis HTS models
This database contains Bilingual (English and Spanish) Festival HTS models. Models were trained with 9h of speech from 2 female bilingual speakers and 2 male bilingual speakers. Each speaker recorded 2h 15 min per language. The speech data can be found in the TC-STAR Bilingual Voice-Conversion Spanish Speech Database (ELRA-S0311) and in the TC-STAR Bilingual Expressive Spanish Speech Database (ELRA-S0313).
For more information, see: http://catalog.elra.info/product_info.php?cPath=37_39&products_id=1157

ELRA-S0336 Spanish Festival voice male
This database contains the recordings of one male Spanish speaker recorded in a noise-reduced room simultaneously through a close-talk microphone, a mid-distance microphone and a laryngograph channel. It comprises read text material of approximately 10 hours of speech for baseline applications (Text-to-Speech systems). The database includes Festival-compatible annotations. The recordings can also be found under TC-STAR Spanish Baseline Male Speech Database (ELRA-S0310).
For more information, see: http://catalog.elra.info/product_info.php?cPath=37_39&products_id=1158

ELRA-S0337 Spanish Festival voice female
This database contains the recordings of one female Spanish speaker recorded in a noise-reduced room simultaneously through a close-talk microphone, a mid-distance microphone and a laryngograph channel. It comprises read text material of approximately 10 hours of speech for baseline applications (Text-to-Speech systems). The database includes Festival-compatible annotations. The recordings can also be found under TC-STAR Spanish Baseline Female Speech Database (ELRA-S0309).
For more information, see: http://catalog.elra.info/product_info.php?cPath=37_39&products_id=1159
 

For more information on the catalogue, please contact Valérie Mapelli mailto:mapelli@elda.org

Visit our On-line Catalogue: http://catalog.elra.info
Visit the Universal Catalogue: http://universal.elra.info
Archives of ELRA Language Resources Catalogue Updates: http://www.elra.info/LRs-Announcements.html


5-2-2 LDC Newsletter (February 2012)

In this newsletter:

- Spring 2012 LDC Data Scholarship Recipients!
- Membership Fee Savings and Publications Pipeline for MY2012

New publications:

- LDC2012S03: Digital Archive of Southern Speech (DASS)
- LDC2012T01: ModeS TimeBank 1.0



Spring 2012 LDC Data Scholarship Recipients!

LDC is pleased to announce the student recipients of the Spring 2012 LDC Data Scholarship program!  This program provides university students with access to LDC data at no cost. Students were asked to complete an application consisting of a proposal describing their intended use of the data, as well as a letter of support from their thesis adviser. We received many solid applications and have chosen six proposals to support.  The following students will receive no-cost copies of LDC data:

Zainab Ali Khalaf  – University of Science, Malaysia (Malaysia), graduate student, Computer Science. Zainab has been awarded a copy of 1996 English Broadcast News Transcripts (HUB4) (LDC97T22) for her work in spoken document retrieval. 

Daniel Jettka – Trinity College Dublin (Ireland), graduate student, Centre for Language & Communication Studies.  Daniel has been awarded  copies of Penn Discourse Treebank Version 2.0 (LDC2008T05) and RST Discourse Treebank (LDC2002T07) for his work in anaphora resolution.

Olga Nickolaevna Ladoshko - National Technical University of Ukraine “KPI” (Ukraine), graduate student, Acoustics and Acoustoelectronics. Olga has been awarded copies of NTIMIT (LDC93S2) and STC-TIMIT 1.0 (LDC2008S03) for her research in automatic speech recognition for Ukrainian.

Ming Yang, Xiaoxiao Ma, and Jiajia Huang – Wuhan University (China), graduate students, Computer Science.  Ming, Xiaoxiao, and Jiajia have been awarded  copies of ACE Time Normalization (TERN) 2004 English Training Data v 1.0 (LDC2005T07) and GALE Phase 1 Chinese Broadcast News Parallel Text – Part 1 (LDC2007T23) for their work in summarization and data mining.

Daria Vazhenina – University of Aizu (Japan), graduate student, Human Interface Lab.  Daria has been awarded a copy of 2005 Spring NIST Rich Transcription (RT-05S) Evaluation Set (LDC2011S06) for her work in speaker diarization.

Tanina Zappone - University of Rome “La Sapienza” (Italy), graduate student, Oriental Studies.  Tanina has been awarded a copy of Chinese Treebank 7.0 (LDC2010T07) for her work in China’s political communications.

Please join us in congratulating our student recipients!   The next LDC Data Scholarship program is scheduled for the Fall 2012 semester.

 

Membership Fee Savings and Publications Pipeline for MY2012

Time is quickly running out to save on membership fees for MY2012! Any organization which joins or renews membership for 2012 through Thursday, March 1, 2012, is entitled to a 5% discount on membership fees.  Organizations which held membership for MY2011 can receive a 10% discount on fees provided they renew prior to March 1, 2012.

Many publications for MY2012 are still in development. The planned publications for the upcoming months include:

ARRAU (Anaphor Resolution and Underspecification) ~ data annotated for anaphoric relations, with information about agreement and explicit representation of multiple antecedents for ambiguous anaphoric expressions and discourse antecedents for expressions which refer to abstract entities such as events, actions and plans. The corpus contains texts from various genres: task-oriented dialogues from the TRAINS project, narratives from the English Pear Stories , and newspaper articles from the Wall Street Journal portion of the Penn Treebank.

MALACH English ~ over 300 hours of English audio recordings of interviews conducted under the auspices of the USC Shoah Foundation Institute for Visual History and Education and associated transcripts produced as part of the Multilingual Access to Large Spoken ArCHives (MALACH) project. 

Malto Speech and Transcripts ~ speech files of Malto narratives recorded by Masato Kobayashi and Bablu Tirkey with associated transcripts. Malto is a Dravidian language spoken in northeastern India and Bangladesh.

NIST/USF Evaluation Resources for the VACE Program – Broadcast News ~ English broadcast news video annotated for the VACE (Video Analysis and Content Extraction) 2005 face, text and text word detection and tracking tasks.

OntoNotes 5.0 ~ multiple genres of English, Chinese, and Arabic text annotated for syntax, predicate argument structure and shallow semantics.

2012 Subscription Members are automatically sent all MY2012 data as it is released.  2012 Standard Members are entitled to request 16 corpora for free from MY2012.   Non-members may license most data for research use.

New publications

(1) Digital Archive of Southern Speech (DASS) was developed by the University of Georgia. It is a subset of the Linguistic Atlas of the Gulf States (LAGS), which is in turn part of the Linguistic Atlas Project (LAP). DASS contains approximately 370 hours of English speech data from 30 female speakers and 34 male speakers in .wav format and in .mp3 format, along with associated metadata about the speakers and the recordings and maps in .jpeg format relating to the recording locations.

LAP consists of a set of survey research projects about the words and pronunciation of everyday American English, the largest project of its kind in the United States. Interviews with thousands of native speakers across the country have been carried out since 1929. LAGS surveyed the everyday speech of Georgia, Tennessee, Florida, Alabama, Mississippi, Arkansas, Louisiana, and Texas in a series of 914 audio-taped interviews conducted from 1968-1983. Interviews average approximately six hours in length; the systematic LAGS tape archive amounts to 5500 hours of sound recordings. DASS is a collection of 64 interviews from LAGS selected to cover a range of speech across the region and to represent multiple education levels and ethnic backgrounds.

Also included in this release is a version of the LICHEN software developed at the University of Oulu, Finland. LICHEN allows users to browse and search through the audio data in a more advanced fashion using a graphical interface.

Digital Archive of Southern Speech (DASS) is distributed on one hard disc drive.

2012 Subscription Not-for-Profit/US Government Members will automatically receive one copy of this data.  2012 For-Profit Members will receive a copy provided that they have submitted a completed copy of the User License Agreement for Digital Archive of Southern Speech (LDC2012S03).  2012 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for US$250.


(2) ModeS TimeBank 1.0 was developed by researchers at Technical University of Madrid and Barcelona Media and is a corpus of Modern Spanish (17th and 18th centuries) annotated with temporal and event information according to TimeML mark-up and annotated with spatial information following the SpatialML scheme.

TimeML (Pustejovsky et al., 2005) is a specification language for annotating eventualities and time expressions in natural language as well as the temporal relations among them, thus facilitating the task of extraction, representation and exchange of temporal information. SpatialML (Mani et al., 2008) is a specification language for annotating and normalizing spatial expressions by means of geographic coordinates.

ModeS TimeBank 1.0 contains 102 documents reporting a sea-crossing cruise by a ship called La Princesa, which took place from December 1768 to April 1769. There exist copious logbooks from that period that not only provide information about shipping routes, but also contain valuable data concerning information flows, commercial agents and social networks.

All text is encoded in UTF-8. The data in ModeS TimeBank 1.0 has been tokenized, POS-tagged, and annotated with space, time and event information according to the TimeML and SpatialML specification schemes.
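To give a flavour of the mark-up (the sentence below is invented for illustration and is not taken from the corpus), here is a minimal Python sketch that reads inline TimeML EVENT and TIMEX3 tags with the standard library:

```python
import xml.etree.ElementTree as ET

# Invented Modern-Spanish-style sentence with TimeML-like inline mark-up
sample = (
    '<s>El navío <EVENT eid="e1" class="OCCURRENCE">zarpó</EVENT> '
    'el <TIMEX3 tid="t1" type="DATE" value="1768-12-18">dieciocho de '
    'diciembre de 1768</TIMEX3>.</s>'
)

root = ET.fromstring(sample)
for event in root.iter("EVENT"):
    print("EVENT ", event.get("eid"), event.get("class"), "->", event.text)
for timex in root.iter("TIMEX3"):
    print("TIMEX3", timex.get("tid"), timex.get("type"), timex.get("value"), "->", timex.text)
```

A full TimeML document additionally records the temporal relations between these elements (TLINK tags), which is where much of the extraction value lies.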

ModeS TimeBank 1.0 is distributed via web download. 

2012 Subscription Members will automatically receive two copies of this corpus on disc.  2012 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may request this data by completing a copy of the LDC User Agreement for Non-Members.  The agreement can be faxed to +1 215 573 2175 or scanned and emailed to this address.  This data is available at no charge.


5-2-3 Speechocean January 2012 update

Speechocean - Language Resource Catalogue - New Released (01- 2012)

Speechocean, as a global provider of language resources and data services, has more than 200 large-scale databases available in 80+ languages and accents covering the fields of Text to Speech, Automatic Speech Recognition, Text, Machine Translation, Web Search, Videos, Images etc.

 

Speechocean is glad to announce that more speech resources have been released:

 

Chinese and English Mixing Speech Synthesis Database (Female)

The Chinese Mandarin TTS Speech Corpus contains the read speech of a native Chinese female professional broadcaster recorded in a studio with high SNR (>35 dB) over two channels (an AKG C4000B microphone and an electroglottography (EGG) sensor).
The Corpus includes the following categories:
1. Basic Mandarin sub-corpus: 5,000 utterances carefully designed to cover a wide range of linguistic phenomena. All sentences are declarative and were extracted from the news channels of People's Daily, China Daily, etc. Prompts containing negative words were excluded, and only sentences of suitable length were accepted (7-20 words, 14 words on average). This sub-corpus can be used for R&D of HMM-based TTS, limited-domain TTS and small-scale concatenative TTS;
2. Complementary Mandarin sub-corpus: 10,000 utterances carefully designed to cover a wide range of linguistic phenomena. All sentences are declarative and were extracted from the news channels of People's Daily, China Daily, etc. Prompts containing negative words were excluded, and only sentences of suitable length were accepted (7-20 words, 14 words on average). This sub-corpus complements the Basic Mandarin sub-corpus and can be used for R&D of large-scale concatenative TTS;
3. Mandarin Neutral sub-corpus: 380 Chinese bi-syllable words embedded in carrier sentences;
4. Mandarin ERHUA sub-corpus: 290 Chinese Erhua syllables embedded in carrier sentences;
5. Mandarin Digit-String sub-corpus: 1,250 three-digit utterances covering the different pronunciations of 1, i.e. “yi1” and “yao1”;
6. Mandarin Question sub-corpus: 300 question sentences with commonly used question particles, for example “吗”, “么”, “呢”, etc.;
7. Mandarin Exclamatory sub-corpus: 200 exclamatory sentences with commonly used exclamatory particles, for example “呀”, “啊”, “吧”, “啦”, etc.;
8. Chinese English sentence sub-corpus: 1,000 sentences carefully designed for bi-phone coverage. All sentences were extracted from the news channels of Voice of America (VOA), etc. Prompts containing negative words were excluded, only sentences of suitable length were accepted (7-20 words, 12 words on average), and the sentences are phonetically annotated in SAMPA. This sub-corpus can be used for R&D of HMM-based TTS, limited-domain TTS and small-scale concatenative TTS;
9. Chinese English words sub-corpus: about 6,000 commonly used English words embedded in carrier sentences;
10. Chinese English Abbreviation sub-corpus: about 200 utterances covering not only the alphabet but also combinations of letters and digits, such as “MP4”;
11. Chinese English Letter sub-corpus: 26 carrier utterances with each letter embedded at the beginning, middle and end;
12. Chinese Greek Letter sub-corpus: 24 carrier utterances with each Greek letter embedded at the beginning, middle and end.

All speech data are segmented and labeled at the phone level. A pronunciation lexicon and pitch extracted from the EGG signal can also be provided on demand.

 

France French Speech Recognition Corpus (desktop) – 50 speakers

This France French desktop speech recognition database was collected by SpeechOcean in France. It is part of our Speech Data Desktop Project (SDD), which currently contains database collections for 30 languages.

It contains the voices of 50 different native speakers with a balanced distribution of age (mainly 16-30, 31-45, 46-60), gender (28 males, 22 females) and regional accents. The script was specially designed to provide material for both training and testing of many classes of speech recognition applications. Each speaker recorded 500 utterances in a quiet office environment through two professional microphones. Each utterance is stored in 44.1 kHz, 16-bit uncompressed PCM format and is accompanied by an ASCII SAM label file containing the relevant descriptive information.

A pronunciation lexicon with a phonemic transcription in SAMPA is also included.

 

UK English Speech Recognition Corpus (desktop) – 50 speakers

This UK English desktop speech recognition database was collected by SpeechOcean in England. It is part of our Speech Data Desktop Project (SDD), which currently contains database collections for 30 languages.

It contains the voices of 50 different native speakers with a balanced distribution of age (mainly 16-30, 31-45, 46-60), gender (28 males, 22 females) and regional accents. The script was specially designed to provide material for both training and testing of many classes of speech recognition applications. Each speaker recorded 500 utterances in a quiet office environment through two professional microphones. Each utterance is stored in 44.1 kHz, 16-bit uncompressed PCM format and is accompanied by an ASCII SAM label file containing the relevant descriptive information.

A pronunciation lexicon with a phonemic transcription in SAMPA is also included.

 

US English Speech Recognition Corpus (desktop) – 50 speakers

This US English desktop speech recognition database was collected by SpeechOcean in America. It is part of our Speech Data Desktop Project (SDD), which currently contains database collections for 30 languages.

It contains the voices of 50 different native speakers with a balanced distribution of age (mainly 16-30, 31-45, 46-60), gender (25 males, 25 females) and regional accents. The script was specially designed to provide material for both training and testing of many classes of speech recognition applications. Each speaker recorded 500 utterances in a quiet office environment through two professional microphones. Each utterance is stored in 44.1 kHz, 16-bit uncompressed PCM format and is accompanied by an ASCII SAM label file containing the relevant descriptive information.

A pronunciation lexicon with a phonemic transcription in SAMPA is also included.

 

Italian Speech Recognition Corpus (desktop) – 50 speakers

This Italian desktop speech recognition database was collected by SpeechOcean in Italy. It is part of our Speech Data Desktop Project (SDD), which currently contains database collections for 30 languages.

It contains the voices of 50 different native speakers with a balanced distribution of age (mainly 16-30, 31-45, 46-60), gender (23 males, 27 females) and regional accents. The script was specially designed to provide material for both training and testing of many classes of speech recognition applications. Each speaker recorded 500 utterances in a quiet office environment through two professional microphones. Each utterance is stored in 44.1 kHz, 16-bit uncompressed PCM format and is accompanied by an ASCII SAM label file containing the relevant descriptive information.

A pronunciation lexicon with a phonemic transcription in SAMPA is also included.

 

For more information about our databases and services, please visit our website www.speechocean.com or our on-line catalogue at http://www.speechocean.com/en-Product-Catalogue/Index.html

If you have any inquiry regarding our databases and service please feel free to contact us:

Xianfeng Cheng mailto: Chengxianfeng@speechocean.com

Marta Gherardi mailto: Marta@speechocean.com

 

 


5-2-4 ELDA Distribution Campaign 2011

*****************************************************************

ELDA Distribution Campaign 2011

*****************************************************************

ELDA is launching a special distribution campaign offering very favorable conditions, including discounts on public prices, for the acquisition of language resources from the ELRA Catalogue of Language Resources (see http://catalog.elra.info).

This offer will be open until the end of December 2011.
 
For more information on this offer, please contact Valérie Mapelli (mapelli@elda.org)

Visit our On-line Catalogue: http://catalog.elra.info
Visit the Universal Catalogue: http://universal.elra.info
Archives of ELRA Language Resources Catalogue Updates: http://www.elra.info/LRs-Announcements.html


5-3 Software
5-3-1 Matlab toolbox for glottal analysis

I am pleased to announce that our Matlab toolbox for glottal analysis is now available on the web at:

 

http://tcts.fpms.ac.be/~drugman/Toolbox/

 

This toolbox includes the following modules:

 

- Pitch and voiced-unvoiced decision estimation

- Speech polarity detection

- Glottal Closure Instant determination

- Glottal flow estimation
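For readers unfamiliar with the first of these modules, the Python sketch below (assuming numpy) shows a generic autocorrelation-based pitch estimate with a crude voiced/unvoiced decision. It is an illustration only, not the toolbox's Matlab implementation, and the threshold is arbitrary.

```python
import numpy as np

def pitch_and_voicing(frame, fs, fmin=60.0, fmax=400.0, vthresh=0.3):
    """Crude per-frame F0 estimate via the normalized autocorrelation,
    with a voiced/unvoiced decision based on the peak height.
    Generic illustration only, not the toolbox's algorithm."""
    frame = frame - np.mean(frame)
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    if ac[0] <= 0:
        return 0.0, False
    ac = ac / ac[0]                        # normalize so ac[0] == 1
    lo, hi = int(fs / fmax), int(fs / fmin)
    lag = lo + int(np.argmax(ac[lo:hi]))   # best lag in the plausible F0 range
    voiced = ac[lag] > vthresh
    return (fs / lag if voiced else 0.0), voiced

# Toy usage on a synthetic 150 Hz "voiced" frame
fs = 16000
t = np.arange(int(0.04 * fs)) / fs
frame = np.sin(2 * np.pi * 150 * t)
print(pitch_and_voicing(frame, fs))   # approximately (150, True)
```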

 

By the way, I am also glad to share my PhD thesis, entitled “Glottal Analysis and its Applications”:

http://tcts.fpms.ac.be/~drugman/files/DrugmanPhDThesis.pdf

 

where you will find applications in speech synthesis, speaker recognition, voice pathology detection, and expressive speech analysis.

 

Hoping that this might be useful to you, and to see you soon,

 

Thomas Drugman


5-3-2 ROCme!: a free tool for audio corpora recording and management

ROCme!: a new free tool for recording and managing audio corpora.

The ROCme! software allows streamlined, autonomous and paperless management of read-speech corpus recording.

Key features:
- free
- Windows and Mac compatible
- configurable interface for collecting speaker metadata
- speakers scroll through the sentences on screen and record themselves autonomously
- configurable audio format

Available for download at:
www.ddl.ish-lyon.cnrs.fr/rocme

 


