ISCApad #159
Wednesday, September 14, 2011, by Chris Wellekens
5-2-1 | ELRA - Language Resources Catalogue - Update (2011-05)
5-2-2 | LDC Newsletter (August 2011)

In this newsletter:
- Fall 2011 LDC Data Scholarship Program
- Checking in with previous LDC Data Scholarship recipients
- Weizmann Institute students are introduced to LDC data
- LDC Exhibiting at Interspeech 2011, Florence, Italy

New publications:
- LDC2011S06
- LDC2011S05
- LDC2011T09
Fall 2011 LDC Data Scholarship Program
Checking in with previous LDC Data Scholarship recipients
Weizmann Institute students are introduced to LDC data
LDC Exhibiting at Interspeech 2011, Florence, Italy
LDC is returning to Europe to participate in Interspeech 2011. The conference will be held from August 28-31 at the Firenze Fiera, conveniently located near the Stazione di Santa Maria Novella. Please stop by LDC’s exhibition booth to say hello and learn more about current happenings at the Consortium. Interspeech 2011’s theme is ‘Speech Science and Technology for Real Life’. You may learn more about the conference here.

The main conference will feature keynotes on the following topics:
- Speaking More Like You: Entrainment in Conversational Speech, Prof. Julia Hirschberg
- Neural Representations of Word Meanings, Prof. Tom Mitchell
- Honest Signals, Prof. Sandy Pentland

Conference organizers have also scheduled a roundtable discussion for August 31st on ‘Future and Applications of Speech and Language Technologies for the Good Health of Society’, which will be led by Profs. Gabriele Miceli, Björn Granström and Hiroshi Ishiguro. You are encouraged to keep track of LDC’s Interspeech preparations on our Facebook page. We hope to see you there!
New Publications

(1) 2005 Spring NIST Rich Transcription (RT-05S) Conference Meeting Evaluation Set was developed by LDC and the National Institute of Standards and Technology (NIST). It contains approximately 78 hours of English meeting speech, reference transcripts and other material used in the RT Spring 2005 evaluation. Rich Transcription (RT) is broadly defined as a fusion of speech-to-text (STT) technology and metadata extraction technologies, providing the basis for the generation of more usable transcriptions of human-human speech in meetings. RT-05S included the following tasks in the meeting domain:
- Speech-To-Text (STT): convert spoken words into streams of text
- Speaker Diarization (SPKR): find the segments of time within a meeting in which each meeting participant is talking
- Speech Activity Detection (SAD): detect when someone in a meeting space is talking

Further information about the evaluation is available on the RT-05 Spring Evaluation website. The data in this release consists of portions of meeting speech collected between 2001 and 2005 by the IDIAP Research Institute's Augmented Multi-Party Interaction (AMI) project, Martigny, Switzerland; the International Computer Science Institute (ICSI) at the University of California, Berkeley; the Interactive Systems Laboratories (ISL) at Carnegie Mellon University (CMU), Pittsburgh, PA; NIST; and Virginia Polytechnic Institute and State University (VT), Blacksburg, VA. Each meeting excerpt contains a head-mic recording for each subject and one or more distant-microphone recordings. Reference transcripts for the evaluation excerpts were prepared by LDC according to its Meeting Recording Careful Transcription Guidelines. Those specifications are designed to provide an accurate, verbatim (word-for-word) transcription, time-aligned with the audio file and including the identification of additional audio and speech signals with special mark-up.
2005 Spring NIST Rich Transcription (RT-05S) Conference Meeting Evaluation Set is distributed on three DVD-ROMs. 2011 Subscription Members will automatically receive two copies of this corpus. 2011 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for $2250.
*
(2) 2008 NIST Speaker Recognition Evaluation Training Set Part 1 was developed by LDC and the National Institute of Standards and Technology (NIST). It contains 640 hours of multilingual telephone speech and English interview speech, along with transcripts and other materials used as training data in the 2008 NIST Speaker Recognition Evaluation (SRE). SRE is part of an ongoing series of evaluations conducted by NIST. These evaluations are an important contribution to the direction of research efforts and the calibration of technical capabilities. They are intended to be of interest to all researchers working on the general problem of text-independent speaker recognition. The 2008 evaluation was distinguished from prior evaluations, in particular those of 2005 and 2006, by including not only conversational telephone speech but also conversational speech of comparable duration recorded over a microphone channel in an interview scenario. The speech data in this release was collected in 2007 by LDC at its Human Subjects Data Collection Laboratories in Philadelphia and by the International Computer Science Institute (ICSI) at the University of California, Berkeley. This collection was part of the Mixer 5 project, which was designed to support the development of robust speaker recognition technology by providing carefully collected and audited speech from a large pool of speakers recorded simultaneously across numerous microphones, in different communicative situations and/or in multiple languages. Mixer participants were native English and bilingual English speakers. The telephone speech in this corpus is predominantly English; all interview segments are in English. Telephone speech represents approximately 565 hours of the data, whereas microphone speech represents the other 75 hours. The telephone speech segments include excerpts in the range of 8-12 seconds and 5 minutes taken from longer original conversations.
The interview material includes short conversational interview segments of approximately 3 minutes taken from a longer interview session. English-language transcripts in .cfm format were produced using an automatic speech recognition (ASR) system. 2008 NIST Speaker Recognition Evaluation Training Set Part 1 is distributed on nine DVD-ROMs. 2011 Subscription Members will automatically receive two copies of this corpus. 2011 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for $2000.
*

(3) Arabic Treebank: Part 2 (ATB2) v 3.1 was developed at LDC. It consists of 501 newswire stories from Ummah Press with part-of-speech (POS), morphology, gloss and syntactic treebank annotation in accordance with the Penn Arabic Treebank (PATB) Guidelines developed in 2008 and 2009. This release represents a significant revision of LDC's previous ATB2 publication, Arabic Treebank: Part 2 v 2.0 (LDC2004T02). The ongoing PATB project supports research in Arabic-language natural language processing and human language technology development. The methodology and work leading to this publication are described in detail in the documentation accompanying the corpus and in two research papers: Enhancing the Arabic Treebank: A Collaborative Effort toward New Annotation Guidelines and Consistent and Flexible Integration of Morphological Annotation in the Arabic Treebank. ATB2 v 3.1 contains a total of 144,199 source tokens before clitics are split and 169,319 tree tokens after clitics are separated for the treebank annotation. Source texts were selected from Ummah Press news archives covering the period from July 2001 through September 2002. Arabic Treebank: Part 2 v 3.1 is distributed via web download. 2011 Subscription Members will automatically receive two copies of this corpus on disc. 2011 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for $4500.
5-2-3 | ELRA receives the META Prize at the META-FORUM 2011 in Budapest
Paris, France, July 18, 2011
5-2-4 | SpeechOcean (China)

SpeechOcean (China) also offers more than 200 large language resources, and some of these databases are freely available to its members for academic research purposes. As an ISCA member, SpeechOcean will also be glad to share these databases with other ISCA members.