ISCA - International Speech
Communication Association



ISCApad #167

Sunday, May 13, 2012 by Chris Wellekens

5 Resources
5-1 Books
5-1-1 Gokhan Tur, Renato De Mori: Spoken Language Understanding: Systems for Extracting Semantic Information from Speech

Title: Spoken Language Understanding: Systems for Extracting Semantic Information from Speech

Editors: Gokhan Tur and Renato De Mori

Web: http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470688246.html

Brief Description:

Spoken language understanding (SLU) is an emerging field at the intersection of speech and language processing, investigating human/machine and human/human communication by leveraging technologies from signal processing, pattern recognition, machine learning and artificial intelligence. SLU systems are designed to extract meaning from speech utterances, and their applications are vast, from voice search on mobile devices to meeting summarization, attracting interest from both commercial and academic sectors.

Both human/machine and human/human communications can benefit from the application of SLU, using differing tasks and approaches to better understand and utilize such communications. This book covers the state-of-the-art approaches for the most popular SLU tasks with chapters written by well-known researchers in the respective fields. Key features include:

Presents a fully integrated view of the two distinct disciplines of speech processing and language processing for SLU tasks.

Defines what is possible today for SLU as an enabling technology for enterprise (e.g., customer care centers or company meetings), and consumer (e.g., entertainment, mobile, car, robot, or smart environments) applications and outlines the key research areas.

Provides a unique source of distilled information on methods for computer modeling of semantic information in human/machine and human/human conversations.

This book can be successfully used for graduate courses in electronics engineering, computer science or computational linguistics. Moreover, technologists interested in processing spoken communications will find it a useful source of collated information of the topic drawn from the two distinct disciplines of speech processing and language processing under the new area of SLU.


5-1-2 Jody Kreiman, Diana Van Lancker Sidtis: Foundations of Voice Studies: An Interdisciplinary Approach to Voice Production and Perception

Foundations of Voice Studies: An Interdisciplinary Approach to Voice Production and Perception
Jody Kreiman, Diana Van Lancker Sidtis
ISBN: 978-0-631-22297-2
Hardcover
512 pages
May 2011, Wiley-Blackwell

Foundations of Voice Studies provides a comprehensive description and analysis of the multifaceted role that voice quality plays in human existence.

•Offers a unique interdisciplinary perspective on all facets of voice perception, illustrating why listeners hear what they do and how they reach conclusions based on voice quality
•Integrates voice literature from a multitude of sources and disciplines
•Supplemented with practical and approachable examples, including a companion website with sound files, available on publication at www.wiley.com/go/voicestudies
•Explores the choice of various voices in advertising and broadcasting, and voice perception in singing voices and forensic applications
•Provides a straightforward and thorough overview of vocal physiology and control



5-1-3 G. Nick Clements and Rachid Ridouane: Where Do Phonological Features Come From?

 

Where Do Phonological Features Come From?

Edited by G. Nick Clements and Rachid Ridouane

CNRS & Sorbonne-Nouvelle

This volume offers a timely reconsideration of the function, content, and origin of phonological features, in a set of papers that is theoretically diverse yet thematically strongly coherent. Most of the papers were originally presented at the International Conference 'Where Do Features Come From?' held at the Sorbonne University, Paris, October 4-5, 2007. Several invited papers are included as well. The articles discuss issues concerning the mental status of distinctive features, their role in speech production and perception, the relation they bear to measurable physical properties in the articulatory and acoustic/auditory domains, and their role in language development. Multiple disciplinary perspectives are explored, including those of general linguistics, phonetic and speech sciences, and language acquisition. The larger goal was to address current issues in feature theory and to take a step towards synthesizing recent advances in order to present a current 'state of the art' of the field.

 

 


5-1-4 Dorothea Kolossa and Reinhold Haeb-Umbach: Robust Speech Recognition of Uncertain or Missing Data
Title: Robust Speech Recognition of Uncertain or Missing Data
Editors: Dorothea Kolossa and Reinhold Haeb-Umbach
Publisher: Springer
Year: 2011
ISBN 978-3-642-21316-8
Link:
http://www.springer.com/engineering/signals/book/978-3-642-21316-8?detailsPage=authorsAndEditors

Automatic speech recognition suffers from a lack of robustness with
respect to noise, reverberation and interfering speech. The growing
field of speech recognition in the presence of missing or uncertain
input data seeks to ameliorate those problems by using not only a
preprocessed speech signal but also an estimate of its reliability to
selectively focus on those segments and features that are most reliable
for recognition. This book presents the state of the art in recognition
in the presence of uncertainty, offering examples that utilize
uncertainty information for noise robustness, reverberation robustness,
simultaneous recognition of multiple speech signals, and audiovisual
speech recognition.
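The reliability-weighting idea described above can be illustrated with a toy missing-feature score (a minimal sketch under assumed diagonal-Gaussian state models, not code from the book):

```python
import math

# Toy missing-feature scoring: unreliable feature dimensions are
# marginalised out, so only the components flagged as reliable
# contribute to the Gaussian log-likelihood.
def reliable_log_likelihood(x, mean, var, reliable):
    ll = 0.0
    for xi, mu, v, ok in zip(x, mean, var, reliable):
        if ok:
            ll += -0.5 * (math.log(2 * math.pi * v) + (xi - mu) ** 2 / v)
    return ll

# With every dimension reliable this is an ordinary Gaussian
# log-likelihood; masking a corrupted dimension removes its penalty.
full = reliable_log_likelihood([0.0, 9.0], [0.0, 0.0], [1.0, 1.0], [True, True])
masked = reliable_log_likelihood([0.0, 9.0], [0.0, 0.0], [1.0, 1.0], [True, False])
print(masked > full)  # True: the corrupted dimension no longer dominates the score
```

Real systems combine such reliability estimates with HMM state likelihoods, but the marginalisation step looks essentially like this.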

The book is appropriate for scientists and researchers in the field of speech recognition, who will find an overview of the state of the art in robust speech recognition; for professionals working in speech recognition, who will find strategies for improving recognition results under various conditions of mismatch; and for lecturers of advanced courses on speech processing or speech recognition, who will find both a reference and a comprehensive introduction to the field. The book assumes an understanding of the fundamentals of speech recognition using Hidden Markov Models.

5-1-5 Mohamed Embarki and Christelle Dodane: La coarticulation

LA COARTICULATION

 

Mohamed Embarki and Christelle Dodane

Des indices à la représentation (From cues to representation)

Speech is made up of complex articulatory gestures that overlap in space and time. These overlaps, conceptualised under the term coarticulation, spare no articulator: they can be observed in the movements of the jaw, the lips, the tongue, the velum and the vocal folds. Coarticulation is also expected by the listener, and coarticulated segments are better perceived. It plays a part in the cognitive and linguistic processes of encoding and decoding speech. Much more than a simple process, coarticulation is a structured research field with its own concepts and models. This collective volume brings together original contributions from international researchers, approaching coarticulation from motor, acoustic, perceptual and linguistic points of view. It is the first book on this question published in French, and the first to explore it across different languages.

 

 

Series: Langue & Parole, L'Harmattan

ISBN: 978-2-296-55503-7 • 25 € • 260 pages

 

 

Mohamed Embarki

is an associate professor (maître de conférences-HDR) in phonetics at the University of Franche-Comté (Besançon) and a member of the Laseldi laboratory (E.A. 2281). His work focuses on the (co)articulatory and acoustic aspects of modern Arabic dialects, as well as on their sociophonetic motivations.

Christelle Dodane

is an associate professor (maître de conférences) in phonetics at Université Paul-Valéry (Montpellier 3) and is affiliated with the DIPRALANG laboratory (E.A. 739). Her research focuses on language communication in young children (12-36 months), in particular the role of prosody in the transition from the pre-linguistic to the linguistic level, in the construction of early syntax, and in child-directed speech.


5-1-6 Ben Gold, Nelson Morgan, Dan Ellis: Speech and Audio Signal Processing: Processing and Perception of Speech and Music [Digital]

Speech and Audio Signal Processing: Processing and Perception of Speech and Music [2nd edition], Ben Gold, Nelson Morgan, Dan Ellis

Digital copy:  http://www.amazon.com/Speech-Audio-Signal-Processing-Perception/dp/product-description/1118142888

Hardcopy available: http://www.amazon.com/Speech-Audio-Signal-Processing-Perception/dp/0470195363/ref=sr_1_1?s=books&ie=UTF8&qid=1319142964&sr=1-1


5-1-7 Video Proceedings ERMITES 2011
The video proceedings of the ERMITES 2011 workshop, 'Décomposition Parcimonieuse, Contraction et Structuration pour l'Analyse de Scènes' (Sparse Decomposition, Contraction and Structuring for Scene Analysis), are online at: http://glotin.univ-tln.fr/ERMITES11

They include some twenty hours of lectures (in .mpg format) by:

Y. Bengio, Montréal
    «Apprentissage Non-Supervisé de Représentations Profondes »
     http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Y_Bengio_1sur4.mp4 ...

S. Mallat, Paris
    « Scattering & Matching Pursuit for Acoustic Sources Separation »
     http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Mallat_1sur3.mp4 ...

J.-P. Haton, Nancy
    « Analyse de Scène et Reconnaissance Stochastique de la Parole »
     http://lsis.univ-tln.fr/~glotin/ERMITES_2011_JP_Haton_1sur4.mp4 ...

M. Kowalski, Paris
    « Sparsity and structure for audio signal: a *-lasso therapy »
     http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Kowalski_1sur5.mp4 ...

O. Adam, Paris
    « Estimation de Densité de Population de Baleines par Analyse de leurs Chants »
     http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Adam.mp4

X. Halkias, New York
    « Detection and Tracking of Dolphin Vocalizations »
     http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Halkias.mp4

J. Razik, Toulon
    « Sparse coding : from speech to whales »
     http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Razik.mp4

H. Glotin, Toulon
   « Suivi & reconstruction du comportement de cétacés par acoustique passive »

P.S.: ERMITES 2012 will focus on vision (Y. LeCun, Y. Thorpe, P. Courrieu, M. Perreira, M. Van Gerven, ...)

5-1-8 Zeki Majeed Hassan and Barry Heselwood (eds): Instrumental Studies in Arabic Phonetics

Instrumental Studies in Arabic Phonetics
Edited by Zeki Majeed Hassan and Barry Heselwood
University of Gothenburg / University of Leeds
[Current Issues in Linguistic Theory, 319] 2011. xii, 365 pp.
Publishing status: Available
Hardbound – Available
ISBN 978 90 272 4837 4 | EUR 110.00 | USD 165.00
e-Book – Forthcoming
ISBN 978 90 272 8322 1 | EUR 110.00 | USD 165.00
Brought together in this volume are fourteen studies using a range of modern instrumental methods – acoustic and articulatory – to investigate the phonetics of several North African and Middle Eastern varieties of Arabic. Topics covered include syllable structure, quantity, assimilation, guttural and emphatic consonants and their pharyngeal and laryngeal mechanisms, intonation, and language acquisition. In addition to presenting new data and new descriptions and interpretations, a key aim of the volume is to demonstrate the depth of objective analysis that instrumental methods can enable researchers to achieve. A special feature of many chapters is the use of more than one type of instrumentation to give different perspectives on phonetic properties of Arabic speech which have fascinated scholars since medieval times. The volume will be of interest to phoneticians, phonologists and Arabic dialectologists, and provides a link between traditional qualitative accounts of spoken Arabic and modern quantitative methods of instrumental phonetic analysis.

Acknowledgements  vii – viii
List of contributors  ix – x
Transliteration and transcription symbols for Arabic  xi – xii
Introduction
Barry Heselwood and Zeki Majeed Hassan 1 – 26
Part I. Issues in syntagmatic structure
Preliminary study of Moroccan Arabic word-initial consonant clusters and syllabification using electromagnetic articulography
Adamantios I. Gafos, Philip Hoole and Chakir Zeroual 27 – 46
An acoustic phonetic study of quantity and quantity complementarity in Swedish and Iraqi Arabic
Zeki Majeed Hassan 47 – 62
Assimilation of /l/ to /r/ in Syrian Arabic: An electropalatographic and acoustic study
Barry Heselwood, Sara Howard and Rawya Ranjous 63 – 98
Part II. Guttural consonants
A study of the laryngeal and pharyngeal consonants in Jordanian Arabic using nasoendoscopy, videofluoroscopy and spectrography
Barry Heselwood and Feda Al-Tamimi 99
A phonetic study of guttural laryngeals in Palestinian Arabic using laryngoscopic and acoustic analysis
Kimary N. Shahin 129 – 140
Airflow and acoustic modelling of pharyngeal and uvular consonants in Moroccan Arabic
Mohamed Yeou and Shinji Maeda 141 – 162
Part III. Emphasis and coronal consonants
Nasoendoscopic, videofluoroscopic and acoustic study of plain and emphatic coronals in Jordanian Arabic
Feda Al-Tamimi and Barry Heselwood 163 – 192
Acoustic and electromagnetic articulographic study of pharyngealisation: Coarticulatory effects as an index of stylistic and regional variation in Arabic
Mohamed Embarki, Slim Ouni, Mohamed Yeou, M. Christian Guilleminot and Sallal Al-Maqtari 193 – 216
Investigating the emphatic feature in Iraqi Arabic: Acoustic and articulatory evidence of coarticulation
Zeki Majeed Hassan and John H. Esling 217 – 234
Glottalisation and neutralisation in Yemeni Arabic and Mehri: An acoustic study
Janet C.E. Watson and Alex Bellem 235 – 256
The phonetics of localising uvularisation in Ammani-Jordanian Arabic: An acoustic study
Bushra Adnan Zawaydeh and Kenneth de Jong 257 – 276
EMA, endoscopic, ultrasound and acoustic study of two secondary articulations in Moroccan Arabic: Labial-velarisation vs. emphasis
Chakir Zeroual, John H. Esling and Philip Hoole 277 – 298
Part IV. Intonation and acquisition
Acoustic cues to focus and givenness in Egyptian Arabic
Sam Hellmuth 299 – 324
Acquisition of Lebanese Arabic and Yorkshire English /l/ by bilingual and monolingual children: A comparative spectrographic study
Ghada Khattab 325 – 354
Appendix: Phonetic instrumentation used in the studies  355 – 358


5-1-9 G. Bailly, P. Perrier & E. Vatikiotis-Bateson (eds): Audiovisual Speech Processing

'Audiovisual Speech Processing', edited by G. Bailly, P. Perrier & E. Vatikiotis-Bateson, published by Cambridge University Press.

'When we speak, we configure the vocal tract which shapes the visible motions of the face
and the patterning of the audible speech acoustics. Similarly, we use these visible and
audible behaviors to perceive speech. This book showcases a broad range of research
investigating how these two types of signals are used in spoken communication, how they
interact, and how they can be used to enhance the realistic synthesis and recognition of
audible and visible speech. The volume begins by addressing two important questions about
human audiovisual performance: how auditory and visual signals combine to access the
mental lexicon and where in the brain this and related processes take place. It then
turns to the production and perception of multimodal speech and how structures are
coordinated within and across the two modalities. Finally, the book presents overviews
and recent developments in machine-based speech recognition and synthesis of AV speech. '



5-2 Database
5-2-1 ELRA - Language Resources Catalogue - Update (2012-03)

*****************************************************************
ELRA - Language Resources Catalogue - Update
*****************************************************************

ELRA is happy to announce that 1 new Monolingual Lexicon, 3 new Speech Resources and 3 new Evaluation Packages are now available in its catalogue.

Moreover, updated versions of the ESTER Corpus, ESTER Evaluation Package and Bulgarian WordNet have also been released. 

1) New Language Resources:

ELRA-L0088 Arabic Morphological Dictionary
The Arabic Morphological Dictionary contains 7,912,551 entries, including 6,247,291 nouns, 1,537,499 verbs, 127,563 adjectives and 198 grammatical words. All files are provided as plain text in UTF-8 character encoding, representing about 154 MB of data.

For more information, see: http://catalog.elra.info/product_info.php?products_id=1163

ELRA-S0338 ESTER 2 Corpus
The ESTER 2 Corpus, produced within the ESTER 2 evaluation campaign, consists of a manually transcribed radio broadcast news corpus amounting to about 100 hours, plus quick transcriptions of African radio broadcasts amounting to about 6 hours. An annotation of named entities is provided with the development data (about 6 hours).

For more information, see: http://catalog.elra.info/product_info.php?cPath=37_46&products_id=1167

ELRA-S0339 Acoustic database for Polish unit selection speech synthesis
This database contains parliamentary statements and newspaper reviews read by a semi-professional male speaker. It consists of a selection of 2,150 sentences, annotated and manually verified, including 100 words containing rare phonemes. The total duration of the recordings is 3.45 hours. The database is phonetically annotated and manually corrected, and includes a lexicon of 11,761 words with phonetic transcription.

For more information, see: http://catalog.elra.info/product_info.php?cPath=37_39&products_id=1164

ELRA-S0342 Acoustic database for Polish concatenative speech synthesis
This database consists of 1443 nonsense words including all the diphones for the Polish language. The database includes information such as: the name of the diphone, context of the diphone, phonetic transcription in SAMPA, identifier of the wave file where it is placed, and three numbers: the beginning, the middle and the end of the diphone.

For more information, see: http://catalog.elra.info/product_info.php?cPath=37_39&products_id=1168
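As an illustration only (the field names below are hypothetical and do not reflect ELRA's actual file format), one record of such a diphone database might be modelled as:

```python
from dataclasses import dataclass

# Hypothetical record layout mirroring the fields listed in the
# catalogue description: diphone name, context word, SAMPA
# transcription, wave-file identifier, and the three time points.
@dataclass
class DiphoneEntry:
    name: str      # diphone label, e.g. "a-t"
    context: str   # carrier (nonsense) word containing the diphone
    sampa: str     # phonetic transcription in SAMPA
    wav_id: str    # identifier of the wave file where it is placed
    start: float   # beginning of the diphone (seconds)
    middle: float  # middle of the diphone (the phone boundary)
    end: float     # end of the diphone

    def duration(self) -> float:
        return self.end - self.start

entry = DiphoneEntry("a-t", "batab", "a-t", "w0042", 0.310, 0.365, 0.420)
print(round(entry.duration(), 3))  # 0.11
```

A concatenative synthesizer would look up such records by diphone name and splice the corresponding waveform segments at the stored boundaries.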

ELRA-E0035 DEFT'08 Evaluation Package
DEFT (DEfi Fouille de Texte – Text Mining Challenge) organizes evaluation campaigns in the field of text mining. The topic of the DEFT 2008 edition was the classification of texts by topic and genre. The DEFT'08 Evaluation Package makes it possible to compare two corpora of different genres (a newspaper article corpus extracted from Le Monde and a corpus of encyclopaedic articles extracted from the free online encyclopaedia Wikipedia) on the basis of the same set of pre-defined categories.

For more information, see: http://catalog.elra.info/product_info.php?products_id=1165

ELRA-E0039 CLEF QAST (2007-2009) – Evaluation Package
The CLEF QAST (2007-2009) contains the data used for the Question Answering on Speech Transcripts tracks of the CLEF campaigns carried out from 2007 to 2009. These tracks tested the performance of monolingual Question Answering systems on collections of audio transcriptions.

For more information, see: http://catalog.elra.info/product_info.php?products_id=1162

ELRA-E0040 MEDAR Evaluation Package
The MEDAR Evaluation Package was produced within the project MEDAR (MEDiterranean ARabic language and speech technology), supported by the European Commission's ICT programme. It aims to enable the evaluation of SLT/MT (Machine Translation) systems on translation tasks in the English-to-Arabic direction.

For more information, see: http://catalog.elra.info/product_info.php?cPath=42_43&products_id=1166

2) Updated Language Resources:

 
ELRA-S0241 ESTER Corpus
This new release contains 100 hours of orthographically transcribed news broadcast (instead of 60 hours for the previous release).

The ESTER Corpus is a subset of the ESTER Evaluation Package (catalogue ref. ELRA-E0021), which was produced within the French national project ESTER (Evaluation of Broadcast News enriched transcription systems), as part of the Technolangue programme funded by the French Ministry of Research and New Technologies (MRNT). The ESTER project made it possible to carry out a campaign for the evaluation of Broadcast News enriched transcription systems for French.
For more information, see: http://catalog.elra.info/product_info.php?products_id=999

ELRA-E0021 ESTER Evaluation Package
This new release contains 100 hours of orthographically transcribed news broadcast (instead of 60 hours for the previous release).
The ESTER Evaluation Package was produced within the French national project ESTER (Evaluation of Broadcast News enriched transcription systems), as part of the Technolangue programme funded by the French Ministry of Research and New Technologies (MRNT). The ESTER project made it possible to carry out a campaign for the evaluation of Broadcast News enriched transcription systems for French.

This package includes the material that was used for the ESTER evaluation campaign. It includes resources, protocols, scoring tools, results of the campaign, etc., that were used or produced during the campaign. The aim of these evaluation packages is to enable external players to evaluate their own system and compare their results with those obtained during the campaign itself.
The campaign covered three tasks: orthographic transcription, segmentation and information extraction (named entity tracking).

For more information, see: http://catalog.elra.info/product_info.php?products_id=995

ELRA-M0041 Bulgarian WordNet
This new release contains 38,209 synsets (instead of 23,715 synsets in the previous release).
The Bulgarian WordNet is a network of lexical-semantic relations, an electronic thesaurus with a structure modelled on that of the Princeton WordNet and those constructed in the EuroWordNet and BalkaNet projects. The Bulgarian WordNet describes the meaning of a lexical unit by placing it within a network of semantic relations, such as hypernymy, meronymy, antonymy, etc. It contains 38,209 synsets, 83,493 literals and 89,242 relations (including 58,095 semantic relations and 4,172 extralinguistic relations).

For more information, see: http://catalog.elra.info/product_info.php?cPath=42_45&products_id=802
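To make the structure concrete, a wordnet of this kind can be sketched as a graph of synsets (sets of literals) linked by typed semantic relations. This is a minimal illustration assuming nothing about ELRA's actual file format; the synset identifiers are invented:

```python
from collections import defaultdict

class MiniWordNet:
    """Tiny wordnet-style graph: synsets hold literals, and typed
    relations (hypernymy, meronymy, ...) link synsets together."""

    def __init__(self):
        self.literals = defaultdict(set)    # synset id -> literals
        self.relations = defaultdict(list)  # (synset id, relation) -> targets

    def add_synset(self, sid, literals):
        self.literals[sid].update(literals)

    def add_relation(self, src, rel, dst):
        self.relations[(src, rel)].append(dst)

    def hypernyms(self, sid):
        """Walk the hypernymy chain from a synset up to the root."""
        chain = []
        while (sid, "hypernym") in self.relations:
            sid = self.relations[(sid, "hypernym")][0]
            chain.append(sid)
        return chain

wn = MiniWordNet()
wn.add_synset("dog.n.01", {"dog"})
wn.add_synset("canine.n.01", {"canine"})
wn.add_synset("animal.n.01", {"animal"})
wn.add_relation("dog.n.01", "hypernym", "canine.n.01")
wn.add_relation("canine.n.01", "hypernym", "animal.n.01")
print(wn.hypernyms("dog.n.01"))  # ['canine.n.01', 'animal.n.01']
```

Placing a lexical unit "within a network of semantic relations", as the description puts it, amounts to queries like this hypernymy walk.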


For more information on the catalogue, please contact Valérie Mapelli
mailto:mapelli@elda.org

Visit our On-line Catalogue:
http://catalog.elra.info
Visit the Universal Catalogue: http://universal.elra.info
Archives of ELRA Language Resources Catalogue Updates: http://www.elra.info/LRs-Announcements.htm


5-2-2 LDC News (April 2012)

In this newsletter:

-  LDC Timeline – Two Decades of Milestones  -

 

New publications:

LDC2012V01
-  2005 NIST/USF Evaluation Resources for the VACE Program - Broadcast News  -

LDC2012T03
-  2009 CoNLL Shared Task Part 1  -

LDC2012T04
-  2009 CoNLL Shared Task Part 2  -

LDC2012S05
-  USC-SFI MALACH Interviews and Transcripts English  -

 

 

LDC Timeline – Two Decades of Milestones

April 15 marks the 'official' 20th anniversary of LDC’s founding. We’ll be featuring highlights from the last two decades in upcoming newsletters, on the web and elsewhere.  For a start, here’s a brief timeline of significant milestones.

1992: The University of Pennsylvania is chosen as the host site for LDC in response to a call for proposals issued by DARPA; the mission of the new consortium is to operate as a specialized data publisher and archive guaranteeing widespread, long-term availability of language resources. DARPA provides seed money with the stipulation that LDC become self-sustaining within five years.  Mark Liberman assumes duties as LDC’s Director with a staff that grows to four, including Jack Godfrey, the Consortium’s first Executive Director.

1993: LDC’s catalog debuts. Early releases include benchmark data sets such as TIMIT, TIPSTER, CSR and Switchboard, shortly followed by the Penn Treebank. 

1994: LDC and NIST (the National Institute of Standards and Technology) enter into a Cooperative R&D Agreement that provides the framework for the continued collaboration between the two organizations.

1995: Collection of conversational telephone speech and broadcast programming and transcription commences. LDC begins its long and continued support for NIST common task evaluations by providing custom data sets for participants. Membership and data license fees prove sufficient to support LDC operations, satisfying the requirement that the Consortium be self-sustaining.

1996: The Lexicon Development project, under the direction of Dr. Cynthia McLemore, begins releasing pronouncing lexicons in Mandarin, German, Egyptian Colloquial Arabic, Spanish, Japanese, and American English. By 1997, all six are published.

1997: LDC announces LDC Online, a searchable index of newswire and speech data with associated tools to compute n-gram models, mutual information and other analyses.

1998: LDC adds annotation to its task portfolio. Christopher Cieri joins LDC as Executive Director and develops the annotation operation.

1999: Steven Bird joins LDC; the organization begins to develop tools and best practices for general use. The Annotation Graph Toolkit results from this effort.

2000: LDC expands its support of common task evaluations from providing corpora to coordinating language resources across the program. Early examples include the DARPA TIDES, EARS and GALE programs.

2001: The Arabic treebank project begins.

2002: LDC moves to its current facilities at 3600 Market Street, Philadelphia with a full-time staff of approximately 40 persons.

2004: LDC introduces the Standard and Subscription membership options, allowing members to choose whether to receive all or a subset of the data sets released in a membership year.

2005: LDC makes task specifications and guidelines available through its projects web pages.

2008: LDC introduces programs that provide discounts for continuing members and those who renew early in the year.

2010: LDC inaugurates the Data Scholarship program for students with a demonstrable need for data.

2012: LDC’s full-time staff of 50 and 196 part-time staff support ongoing projects and operations which include collecting, developing and archiving data, data annotation, tool development, sponsored-project support and multiple collaborations with various partners.  The general catalog contains over 500 holdings in more than 50 languages. Over 85,000 copies of more than 1300 titles have been distributed to 3200 organizations in 70 countries. 

 

New Publications

(1) 2005 NIST/USF Evaluation Resources for the VACE Program - Broadcast News was developed by researchers at the Department of Computer Science and Engineering, University of South Florida (USF), Tampa, Florida and the Multimodal Information Group at the National Institute of Standards and Technology (NIST). It contains approximately 60 hours of English broadcast news video data collected by LDC in 1998 and annotated for the 2005 VACE (Video Analysis and Content Extraction) tasks. The tasks covered by the broadcast news domain were human face (FDT) tracking, text strings (TDT) (glyphs rendered within the video image for the text object detection and tracking task) and word level text strings (TDT_Word_Level) (videotext OCR task).

The VACE program was established to develop novel algorithms for automatic video content extraction, multi-modal fusion, and event understanding. During VACE Phases I and II, the program made significant progress in the automated detection and tracking of moving objects including faces, hands, people, vehicles and text in four primary video domains: broadcast news, meetings, street surveillance, and unmanned aerial vehicle motion imagery. Initial results were also obtained on automatic analysis of human activities and understanding of video sequences.

Three performance evaluations were conducted under the auspices of the VACE program between 2004 and 2007. The 2005 evaluation was administered by USF in collaboration with NIST and guided by an advisory forum including the evaluation participants.

The broadcast news recordings were collected by LDC in 1998 from CNN Headline News (CNN-HDL) and ABC World News Tonight (ABC-WNT). CNN HDL is a 24-hour/day cable-TV broadcast which presents top news stories continuously throughout the day. ABC-WNT is a daily 30-minute news broadcast that typically covers about a dozen different news items. Each daily ABC-WNT broadcast and up to four 30-minute sections of CNN-HDL were recorded each day. The CNN segments were drawn from that portion of the daily schedule that happened to include closed captioning.

2005 NIST/USF Evaluation Resources for the VACE Program - Broadcast News is distributed on one hard drive.

2012 Subscription Members will automatically receive one copy of this data.  2012 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for US$6000.

*

(2) 2009 CoNLL Shared Task Part 1 contains the Catalan, Czech, German and Spanish trial corpora, training corpora, and development and test data for the 2009 CoNLL (Conference on Computational Natural Language Learning) Shared Task Evaluation. The 2009 Shared Task developed syntactic dependency annotations, including semantic dependencies that model the roles of both verbal and nominal predicates.

The Conference on Computational Natural Language Learning (CoNLL) is accompanied every year by a shared task intended to promote natural language processing applications and evaluate them in a standard setting. In 2008, the shared task focused on English and employed a unified dependency-based formalism, merging the task of syntactic dependency parsing with the task of identifying semantic arguments and labeling them with semantic roles; that data has been released by LDC as 2008 CoNLL Shared Task Data (LDC2009T12). The 2009 task extended the 2008 task to several languages (English plus Catalan, Chinese, Czech, German, Japanese and Spanish). Among the new features were a comparison of time and space complexity based on participants' input, and a learning curve comparison for languages with large datasets.

The 2009 shared task was divided into two subtasks:

(1) parsing syntactic dependencies

(2) identification of arguments and assignment of semantic roles for each predicate

The materials in this release consist of excerpts from the following corpora:

 

Ancora (Spanish + Catalan): 500,000 words each of annotated news text developed by the University of Barcelona, Polytechnic University of Catalonia, the University of Alicante and the University of the Basque Country

 

Prague Dependency Treebank 2.0 (Czech): approximately 2 million words of annotated news, journal and magazine text developed by Charles University; also available through LDC (LDC2006T01)

 

TIGER Treebank + SALSA Corpus (German): approximately 900,000 words of annotated news text and FrameNet annotation developed by the University of Potsdam, Saarland University and the University of Stuttgart

2009 CoNLL Shared Task Part 1 is distributed on one DVD-ROM.

2012 Subscription Members will automatically receive two copies of this data.  2012 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for US$200. 

*

(3) 2009 CoNLL Shared Task Part 2 (LDC2012T04, http://www.ldc.upenn.edu/Catalog/CatalogEntry.jsp?catalogId=LDC2012T04) contains the Chinese and English trial corpora, training corpora, and development and test data for the 2009 CoNLL (Conference on Computational Natural Language Learning) Shared Task Evaluation (http://ufal.mff.cuni.cz/conll2009-st/). The 2009 shared task developed syntactic dependency annotations, including semantic dependencies that model the roles of both verbal and nominal predicates.

The materials in this release consist of excerpts from the following corpora:

 

Penn Treebank II (LDC95T7) (English): over one million words of annotated English newswire and other text developed by the University of Pennsylvania

PropBank (LDC2004T14) (English): semantic annotation of newswire text from Treebank-2 developed by the University of Pennsylvania

NomBank (LDC2008T23) (English): argument structure for instances of common nouns in Treebank-2 and Treebank-3 (LDC99T42) texts developed by New York University

Chinese Treebank 6.0 (LDC2007T36) (Chinese): 780,000 words (over 1.28 million characters) of annotated Chinese newswire, magazine and administrative texts and transcripts from various broadcast news programs developed by the University of Pennsylvania and the University of Colorado

Chinese Proposition Bank 2.0 (LDC2008T07) (Chinese): predicate-argument annotation on 500,000 words from Chinese Treebank 6.0 developed by the University of Pennsylvania and the University of Colorado

2009 CoNLL Shared Task Part 2 is distributed on one CD-ROM.

2012 Subscription Members will automatically receive two copies of this data.  2012 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for US$850.

*

(4) USC-SFI MALACH Interviews and Transcripts English was developed by the University of Southern California's Shoah Foundation Institute (USC-SFI), the University of Maryland, IBM and Johns Hopkins University as part of the MALACH (Multilingual Access to Large Spoken ArCHives, http://malach.umiacs.umd.edu/) Project. It contains approximately 375 hours of interviews from 784 interviewees, along with transcripts and other documentation.

Inspired by his experience making Schindler's List, Steven Spielberg established the Survivors of the Shoah Visual History Foundation in 1994 to gather video testimonies from survivors and other witnesses of the Holocaust. While most of those who gave testimony were Jewish survivors, the Foundation also interviewed homosexual survivors, Jehovah's Witness survivors, liberators and liberation witnesses, political prisoners, rescuers and aid providers, Roma and Sinti (Gypsy) survivors, survivors of eugenics policies, and war crimes trials participants.  In 2006, the Foundation became part of the Dana and David Dornsife College of Letters, Arts and Sciences at the University of Southern California in Los Angeles and was renamed as the USC Shoah Foundation Institute for Visual History and Education.

The goal of the MALACH project was to develop methods for improved access to large multinational spoken archives; the focus was advancing the state of the art of automatic speech recognition (ASR) and information retrieval. The characteristics of the USC-SFI collection -- unconstrained, natural speech filled with disfluencies, heavy accents, age-related co-articulations, un-cued speaker and language switching and emotional speech -- were considered well-suited for that task. The work centered on five languages: English, Czech, Russian, Polish and Slovak. USC-SFI MALACH Interviews and Transcripts English was developed for the English speech recognition experiments.

The speech data in this release was collected beginning in 1994 under a wide variety of conditions, ranging from quiet to noisy (e.g., airplane over-flights, wind noise, background conversations and highway noise). Approximately 25,000 of the interviews collected by USC-SFI are in English, averaging approximately 2.5 hours each. Each of the 784 interviews included in this release is a 30-minute section of the corresponding longer interview. The interviews cover a wide range of accents (e.g., Hungarian, Italian, Yiddish, German and Polish).

This release includes transcripts of the first 15 minutes of each interview. The transcripts were created using Transcriber 1.5.1 and later modified.

USC-SFI MALACH Interviews and Transcripts English is distributed on five DVD-ROMs.

2012 Subscription Members will automatically receive two copies of this data provided that they have submitted a completed copy of the User License Agreement for USC-SFI MALACH Interviews and Transcripts English (LDC2012S05).  2012 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for US$2000.


5-2-3Speechocean January 2012 update

Speechocean - Language Resource Catalogue - New Released (01- 2012)

Speechocean, a global provider of language resources and data services, has more than 200 large-scale databases available in 80+ languages and accents, covering text-to-speech, automatic speech recognition, text, machine translation, web search, videos, images, etc.

 

Speechocean is glad to announce that more speech resources have been released:

 

Chinese and English Mixing Speech Synthesis Database (Female)

The Chinese Mandarin TTS speech corpus contains the read speech of a native Chinese female professional broadcaster, recorded in a studio with high SNR (>35 dB) over two channels (an AKG C4000B microphone and an electroglottography (EGG) sensor).
The corpus includes the following categories:
1.    Basic Mandarin sub-corpus: 5,000 utterances carefully designed to cover a wide range of linguistic phenomena. All sentences are declarative and were extracted from the news channels of People's Daily, China Daily, etc. Prompts containing negative words were excluded, and only sentences of suitable length (7-20 words, 14 on average) were accepted. This sub-corpus can be used for R&D of HMM-based TTS, limited-domain TTS and small-scale concatenative TTS.
2.    Complementary Mandarin sub-corpus: 10,000 utterances designed on the same criteria as the basic sub-corpus. It complements the basic Mandarin sub-corpus and can be used for R&D of large-scale concatenative TTS.
3.    Mandarin neutral sub-corpus: 380 Chinese bi-syllable words embedded in carrier sentences.
4.    Mandarin Erhua sub-corpus: 290 Chinese Erhua syllables embedded in carrier sentences.
5.    Mandarin digit-string sub-corpus: 1,250 three-digit utterances covering the two pronunciations of 1, i.e. “yi1” and “yao1”.
6.    Mandarin question sub-corpus: 300 question sentences with commonly used question particles, for example “吗”, “么” and “呢”.
7.    Mandarin exclamatory sub-corpus: 200 exclamatory sentences with commonly used exclamatory particles, for example “呀”, “啊”, “吧” and “啦”.
8.    Chinese English sentence sub-corpus: 1,000 sentences carefully designed for bi-phone coverage, extracted from news channels such as Voice of America (VOA). Prompts with negative words were excluded, only sentences of suitable length (7-20 words, 12 on average) were accepted, and all sentences are phonetically annotated in SAMPA. This sub-corpus can be used for R&D of HMM-based TTS, limited-domain TTS and small-scale concatenative TTS.
9.    Chinese English word sub-corpus: about 6,000 commonly used English words embedded in carrier sentences.
10.    Chinese English abbreviation sub-corpus: about 200 utterances covering not only the alphabet but also combinations of letters and digits, such as “MP4”.
11.    Chinese English letter sub-corpus: 26 carrier utterances with each letter embedded at the beginning, middle and end.
12.    Chinese Greek letter sub-corpus: 24 carrier utterances with each Greek letter embedded at the beginning, middle and end.

All speech data are segmented and labeled at the phone level. A pronunciation lexicon and pitch extracted from the EGG signal can also be provided on demand.

 

France French Speech Recognition Corpus (desktop) – 50 speakers

This France French desktop speech recognition database was collected by SpeechOcean in France. It is part of our Speech Data Desktop (SDD) project, which currently covers 30 languages.

It contains the voices of 50 native speakers, balanced by age (mainly 16-30, 31-45, 46-60), gender (28 males, 22 females) and regional accent. The script was specially designed to provide material for both training and testing of many classes of speech recognition applications. Each speaker recorded 500 utterances in a quiet office environment through two professional microphones. Each utterance is stored as 44.1 kHz, 16-bit uncompressed PCM and accompanied by an ASCII SAM label file containing the relevant descriptive information.

A pronunciation lexicon with a phonemic transcription in SAMPA is also included.
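Because the audio is headerless PCM rather than WAV, sample count and duration follow directly from the stated format. A minimal Python sketch, assuming little-endian mono files and a hypothetical filename `utt001.pcm` (here we first synthesize a short tone so the example is self-contained):

```python
import array
import math

RATE = 44100  # 44.1 kHz, as stated in the catalogue entry

# Write a 0.1 s, 440 Hz tone as raw 16-bit PCM, standing in for an utterance.
samples = array.array("h", (int(20000 * math.sin(2 * math.pi * 440 * n / RATE))
                            for n in range(RATE // 10)))
with open("utt001.pcm", "wb") as f:
    samples.tofile(f)

# Read it back: with no container header, basic facts derive from the format.
with open("utt001.pcm", "rb") as f:
    data = array.array("h")          # "h" = signed 16-bit samples
    data.frombytes(f.read())

duration_s = len(data) / RATE
print(len(data), round(duration_s, 2))  # 4410 0.1
```

The accompanying ASCII SAM label file would carry the descriptive metadata (speaker, prompt, recording conditions) that the raw PCM itself cannot.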

 

UK English Speech Recognition Corpus (desktop) – 50 speakers

This UK English desktop speech recognition database was collected by SpeechOcean in England. It is part of our Speech Data Desktop (SDD) project, which currently covers 30 languages.

It contains the voices of 50 native speakers, balanced by age (mainly 16-30, 31-45, 46-60), gender (28 males, 22 females) and regional accent. The script was specially designed to provide material for both training and testing of many classes of speech recognition applications. Each speaker recorded 500 utterances in a quiet office environment through two professional microphones. Each utterance is stored as 44.1 kHz, 16-bit uncompressed PCM and accompanied by an ASCII SAM label file containing the relevant descriptive information.

A pronunciation lexicon with a phonemic transcription in SAMPA is also included.

 

US English Speech Recognition Corpus (desktop) – 50 speakers

This US English desktop speech recognition database was collected by SpeechOcean in the United States. It is part of our Speech Data Desktop (SDD) project, which currently covers 30 languages.

It contains the voices of 50 native speakers, balanced by age (mainly 16-30, 31-45, 46-60), gender (25 males, 25 females) and regional accent. The script was specially designed to provide material for both training and testing of many classes of speech recognition applications. Each speaker recorded 500 utterances in a quiet office environment through two professional microphones. Each utterance is stored as 44.1 kHz, 16-bit uncompressed PCM and accompanied by an ASCII SAM label file containing the relevant descriptive information.

A pronunciation lexicon with a phonemic transcription in SAMPA is also included.

 

Italian Speech Recognition Corpus (desktop) – 50 speakers

This Italian desktop speech recognition database was collected by SpeechOcean in Italy. It is part of our Speech Data Desktop (SDD) project, which currently covers 30 languages.

It contains the voices of 50 native speakers, balanced by age (mainly 16-30, 31-45, 46-60), gender (23 males, 27 females) and regional accent. The script was specially designed to provide material for both training and testing of many classes of speech recognition applications. Each speaker recorded 500 utterances in a quiet office environment through two professional microphones. Each utterance is stored as 44.1 kHz, 16-bit uncompressed PCM and accompanied by an ASCII SAM label file containing the relevant descriptive information.

A pronunciation lexicon with a phonemic transcription in SAMPA is also included.

 

For more information about our databases and services, please visit our website www.speechocean.com or our online catalogue at http://www.speechocean.com/en-Product-Catalogue/Index.html

If you have any inquiries regarding our databases and services, please feel free to contact us:

Xianfeng Cheng: Chengxianfeng@speechocean.com

Marta Gherardi: Marta@speechocean.com

 

 

Back  Top

5-2-4Appen ButlerHill

 

Appen ButlerHill 

A global leader in linguistic technology solutions

RECENT CATALOG ADDITIONS—MARCH 2012

1. Speech Databases

1.1 Telephony


Language                  Database Type    Catalogue Code   Speakers   Status
Bahasa Indonesia          Conversational   BAH_ASR001       1,002      Available
Bengali                   Conversational   BEN_ASR001       1,000      Available
Bulgarian                 Conversational   BUL_ASR001       217        Available shortly
Croatian                  Conversational   CRO_ASR001       200        Available shortly
Dari                      Conversational   DAR_ASR001       500        Available
Dutch                     Conversational   NLD_ASR001       200        Available
Eastern Algerian Arabic   Conversational   EAR_ASR001       496        Available
English (UK)              Conversational   UKE_ASR001       1,150      Available
Farsi/Persian             Scripted         FAR_ASR001       789        Available
Farsi/Persian             Conversational   FAR_ASR002       1,000      Available
French (EU)               Conversational   FRF_ASR001       563        Available
French (EU)               Voicemail        FRF_ASR002       550        Available
German                    Voicemail        DEU_ASR002       890        Available
Hebrew                    Conversational   HEB_ASR001       200        Available shortly
Italian                   Conversational   ITA_ASR003       200        Available shortly
Italian                   Voicemail        ITA_ASR004       550        Available
Kannada                   Conversational   KAN_ASR001       1,000      In development
Pashto                    Conversational   PAS_ASR001       967        Available
Portuguese (EU)           Conversational   PTP_ASR001       200        Available shortly
Romanian                  Conversational   ROM_ASR001       200        Available shortly
Russian                   Conversational   RUS_ASR001       200        Available
Somali                    Conversational   SOM_ASR001       1,000      Available
Spanish (EU)              Voicemail        ESO_ASR002       500        Available
Turkish                   Conversational   TUR_ASR001       200        Available
Urdu                      Conversational   URD_ASR001       1,000      Available

1.2 Wideband

Language            Database Type   Catalogue Code   Speakers   Status
English (US)        Studio          USE_ASR001       200        Available
French (Canadian)   Home/Office     FRC_ASR002       120        Available
German              Studio          DEU_ASR001       127        Available
Thai                Home/Office     THA_ASR001       100        Available
Korean              Home/Office     KOR_ASR001       100        Available

2. Pronunciation Lexica

Appen Butler Hill has considerable experience in providing a variety of lexicon types. These include:

Pronunciation Lexica providing phonemic representation, syllabification, and stress (primary and secondary as appropriate)

Part-of-speech tagged Lexica providing grammatical and semantic labels

Other reference text-based materials including spelling/misspelling lists, spell-check dictionaries, mappings of colloquial language to standard forms, and orthographic normalization lists.

Over a period of 15 years, Appen Butler Hill has generated a significant volume of licensable material for a wide range of languages. For holdings information in a given language or to discuss any customized development efforts, please contact: sales@appenbutlerhill.com

3. Named Entity Corpora

Language        Catalogue Code   Words
Arabic          ARB_NER001       500,000
English         ENI_NER001       500,000
Farsi/Persian   FAR_NER001       500,000
Korean          KOR_NER001       500,000
Japanese        JPY_NER001       500,000
Russian         RUS_NER001       500,000
Mandarin        MAN_NER001       500,000
Urdu            URD_NER001       500,000

These NER corpora contain text material from a variety of sources and are tagged for the following named entities: Person, Organization, Location, Nationality, Religion, Facility, Geo-Political Entity, Titles, Quantities.
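The catalogue does not specify the tag format these corpora use; a common convention for this kind of named-entity annotation is BIO tagging, sketched here in Python on an invented English example (the tag names and tokens are assumptions for illustration):

```python
# BIO tagging: B- opens an entity span, I- continues it, O is outside.
tagged = [("Barack", "B-Person"), ("Obama", "I-Person"),
          ("visited", "O"), ("Paris", "B-Location")]

def extract_entities(tokens):
    """Collect (text, type) entity spans from BIO-tagged tokens."""
    spans, cur, cur_type = [], [], None
    for tok, tag in tokens + [("", "O")]:      # sentinel flushes last span
        if tag.startswith("B-"):
            if cur:
                spans.append((" ".join(cur), cur_type))
            cur, cur_type = [tok], tag[2:]
        elif tag.startswith("I-") and cur:
            cur.append(tok)
        else:
            if cur:
                spans.append((" ".join(cur), cur_type))
            cur, cur_type = [], None
    return spans

entities = extract_entities(tagged)
print(entities)  # [('Barack Obama', 'Person'), ('Paris', 'Location')]
```

Whatever the actual delivery format, the tagged entity types listed above (Person, Organization, Location, and so on) map naturally onto span labels of this kind.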


4. Other Language Resources

Morphological Analyzers – Farsi/Persian & Urdu

Arabic Thesaurus

Language Analysis Documentation – multiple languages

 

For additional information on these resources, please contact: sales@appenbutlerhill.com

5. Customized Requests and Package Configurations

Appen Butler Hill is committed to providing a low-risk, high-quality, reliable solution and has worked in 130+ languages to date, supporting both large global corporations and government organizations.

We would be glad to discuss any customized requests or package configurations and to prepare a customized proposal to meet your needs.

6. Contact Information

Prithivi Pradeep

Business Development Manager

ppradeep@appenbutlerhill.com

+61 2 9468 6370

Tom Dibert

Vice President, Business Development, North America

tdibert@appenbutlerhill.com

+1-315-339-6165

                                                         www.appenbutlerhill.com


5-3 Software
5-3-1Matlab toolbox for glottal analysis

I am pleased to announce that a Matlab toolbox for glottal analysis is now available on the web at:

 

http://tcts.fpms.ac.be/~drugman/Toolbox/

 

This toolbox includes the following modules:

 

- Pitch and voiced-unvoiced decision estimation

- Speech polarity detection

- Glottal Closure Instant determination

- Glottal flow estimation
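As a rough illustration of what the first module does, here is a toy autocorrelation-based pitch and voicing estimator in Python. This is a naive sketch for intuition only, not the algorithm the Matlab toolbox actually implements:

```python
import math

def estimate_f0(frame, rate, fmin=60.0, fmax=400.0):
    """Return (f0_hz, voiced) for one frame via normalized autocorrelation."""
    n = len(frame)
    energy = sum(x * x for x in frame)
    if energy == 0:
        return 0.0, False
    best_lag, best_r = 0, 0.0
    # Search lags corresponding to the plausible pitch range.
    for lag in range(int(rate / fmax), int(rate / fmin) + 1):
        r = sum(frame[i] * frame[i - lag] for i in range(lag, n)) / energy
        if r > best_r:
            best_lag, best_r = lag, r
    voiced = best_r > 0.5        # crude voiced/unvoiced decision threshold
    f0 = rate / best_lag if voiced and best_lag else 0.0
    return f0, voiced

# A 100 ms frame of a 100 Hz sine at 8 kHz should be detected as voiced.
rate = 8000
frame = [math.sin(2 * math.pi * 100 * t / rate) for t in range(800)]
f0, voiced = estimate_f0(frame, rate)
print(round(f0), voiced)  # 100 True
```

The toolbox's remaining modules (polarity detection, glottal closure instants, glottal flow estimation) build on exactly this kind of frame-level analysis of the voice source.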

 

I am also glad to share my PhD thesis, entitled “Glottal Analysis and its Applications”:

http://tcts.fpms.ac.be/~drugman/files/DrugmanPhDThesis.pdf

 

where you will find applications in speech synthesis, speaker recognition, voice pathology detection, and expressive speech analysis.

 

Hoping that this will be useful to you, and looking forward to seeing you soon,

 

Thomas Drugman


5-3-2ROCme!: a free tool for audio corpora recording and management

ROCme!: new free software for recording and managing audio corpora.

The ROCme! software enables streamlined, self-service, paperless recording of read-speech corpora.

Key features:
- free
- compatible with Windows and Mac
- configurable interface for collecting speaker metadata
- speakers scroll through the sentences on screen and record themselves autonomously
- configurable audio format

Download it at:
www.ddl.ish-lyon.cnrs.fr/rocme

 

5-3-3VocalTractLab 2.0 : A tool for articulatory speech synthesis

VocalTractLab 2.0 : A tool for articulatory speech synthesis

It is my pleasure to announce the release of the new major version 2.0 of VocalTractLab. VocalTractLab is an articulatory speech synthesizer and a tool to visualize and explore the mechanism of speech production with regard to articulation, acoustics, and control. It is available from http://www.vocaltractlab.de/index.php?page=vocaltractlab-download .
Compared to version 1.0, the new version brings many improvements in terms of the implemented models of the vocal tract, the vocal folds, the acoustic simulation, and articulatory control, as well as in terms of the user interface. Most importantly, the new version comes together with a manual.

If you like, give it a try. Reports on bugs and any other feedback are welcome.

Peter Birkholz



