ISCA - International Speech
Communication Association



ISCApad #179

Friday, May 10, 2013 by Chris Wellekens

5-2-3 LDC Newsletter (April 2013)
  

 

 

In this newsletter:

- Checking in with LDC Data Scholarship Recipients

New publications:

- GALE Phase 2 Chinese Broadcast Conversation Speech
- GALE Phase 2 Chinese Broadcast Conversation Transcripts
- NIST 2008-2012 Open Machine Translation (OpenMT) Progress Test Sets

***

- Reprint of March 2013 Newsletter

Checking in with LDC Data Scholarship Recipients

   

The LDC Data Scholarship program provides college and university students with access to LDC data at no cost. Students are asked to complete an application consisting of a proposal describing their intended use of the data and a letter of support from their thesis adviser. LDC introduced the Data Scholarship program during the Fall 2010 semester. Since then, more than thirty individual students and student research groups have been awarded no-cost copies of LDC data for their research. Here is an update on the work of a few of the student recipients:

   

         
• Leili Javadpour - Louisiana State University (USA), Engineering Science. Leili was awarded copies of the BBN Pronoun Coreference and Entity Type Corpus (LDC2005T33) and Message Understanding Conference (MUC) 7 (LDC2001T02) for her work in pronominal anaphora resolution. Her research involves a learning approach for pronominal anaphora resolution in unstructured text. She evaluated her approach on the BBN Pronoun Coreference and Entity Type Corpus and obtained encouraging results of 89%. In this approach, machine learning is applied to a set of new features selected from other computational linguistics research. Leili plans to evaluate the approach on Message Understanding Conference (MUC) 7, as well as on other genres of annotated text such as stories and conversation transcripts.

   

         
• Olga Nickolaevna Ladoshko - National Technical University of Ukraine “KPI” (Ukraine), graduate student, Acoustics and Acoustoelectronics. Olga was awarded copies of NTIMIT (LDC93S2) and STC-TIMIT 1.0 (LDC2008S03) for her research in automatic speech recognition for Ukrainian. Olga used NTIMIT in the first phase of her research; one problem she investigated was the influence of telephone communication channels on the reliability of phoneme recognition under different types of parameterization and configurations of speech recognition systems built with HTK tools. The second phase involves using NTIMIT to test an algorithm for detecting voice in non-stationary noise. Her future work with STC-TIMIT 1.0 will include an experiment to develop an improved speech recognition algorithm that allows for increased accuracy under noisy conditions.

• Genevieve Sapijaszko - University of Central Florida (USA), PhD candidate, Electrical and Computer Engineering. Genevieve was awarded copies of the TIMIT Acoustic-Phonetic Continuous Speech Corpus (LDC93S1) and YOHO Speaker Verification (LDC94S16) for her work in digital signal processing. Her experiment used vector quantization (VQ) and Euclidean distance to recognize a speaker's identity, extracting features of the speech signal with the following methods: RCC, MFCC, MFCC + ΔMFCC, LPC, LPCC, PLPCC and RASTA PLPCC. Based on the results, in a noise-free environment MFCC (at an average of 94%) is the best feature extraction method when used in conjunction with the VQ model. The addition of ΔMFCC showed no significant improvement in the recognition rate. When comparing three phrases of differing length, the two longer phrases had very similar recognition rates, but the shorter phrase, at 0.5 seconds, had a noticeably lower recognition rate across methods. MFCC was also faster than the other methods. Genevieve and her research team concluded that, in a noise-free environment, MFCC was the best method in terms of both recognition rate and recognition time.

• John Steinberg - Temple University (USA), MS candidate, Electrical and Computer Engineering. John was awarded copies of the CALLHOME Mandarin Chinese Lexicon (LDC96L15) and CALLHOME Mandarin Chinese Transcripts (LDC96T16) for his work in speech recognition. John used the CALLHOME Mandarin Lexicon and Transcripts to investigate the integration of Bayesian nonparametric techniques into speech recognition systems. These techniques can detect the underlying structure of the data and, in theory, generate better acoustic models than typical parametric approaches such as HMMs. His work investigated using one such model, Dirichlet process mixtures, in conjunction with three variational Bayesian inference algorithms for acoustic modeling. The scope of his work was limited to a phoneme classification problem, since John's goal was to determine the viability of these algorithms for acoustic modeling.

   

One goal of his research group is to develop a speech recognition system that is robust to variations in the acoustic channel. The group is also interested in building acoustic models that generalize well across languages. For these reasons, both CALLHOME English and CALLHOME Mandarin data were used to help determine whether these new Bayesian nonparametric models were prone to any language-specific artifacts. The two languages, though phonetically very different, did not yield significantly different performance. Furthermore, one variational inference algorithm, accelerated variational Dirichlet process mixtures (AVDPM), was found to perform well on extremely large data sets.
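As background to the VQ-based speaker identification described in Genevieve's entry above, a minimal codebook classifier might look like the following. This is a toy sketch: feature extraction (e.g., MFCC) is assumed to have already produced the per-speaker frame matrices, and none of this reflects her team's actual implementation.

```python
import numpy as np

def train_codebook(features, k=8, iters=20, seed=0):
    """Toy VQ codebook via k-means over one speaker's feature vectors.
    Rows of `features` are frame-level feature vectors (e.g., MFCCs)."""
    rng = np.random.default_rng(seed)
    codebook = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        # assign each frame to its nearest codeword (Euclidean distance)
        d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                codebook[j] = features[labels == j].mean(axis=0)
    return codebook

def avg_distortion(features, codebook):
    """Mean distance from each frame to its nearest codeword."""
    d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    return d.min(axis=1).mean()

def identify(features, codebooks):
    """Pick the enrolled speaker whose codebook yields lowest distortion."""
    return min(codebooks, key=lambda s: avg_distortion(features, codebooks[s]))
```

In use, one codebook is trained per enrolled speaker, and a test utterance is attributed to whichever codebook reproduces its frames with the least distortion.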

   

   

   

New publications
   

   

     

   

(1) GALE Phase 2 Chinese Broadcast Conversation Speech (LDC2013S04) was developed by LDC and is comprised of approximately 120 hours of Chinese broadcast conversation speech collected in 2006 and 2007 by LDC and the Hong Kong University of Science and Technology (HKUST) during Phase 2 of the DARPA GALE (Global Autonomous Language Exploitation) program.

   

Corresponding transcripts are released as GALE Phase 2 Chinese Broadcast Conversation Transcripts (LDC2013T08).

   

Broadcast audio for the GALE program was collected at the Philadelphia, PA, USA facilities of LDC and at three remote collection sites: HKUST (Chinese), Medianet, Tunis, Tunisia (Arabic), and MTC, Rabat, Morocco (Arabic). The combined local and outsourced broadcast collection supported GALE at a rate of approximately 300 hours per week of programming from more than 50 broadcast sources, for a total of over 30,000 hours of collected broadcast audio over the life of the program.

   

The broadcast conversation recordings in this release feature interviews, call-in programs and roundtable discussions focusing principally on current events from the following sources: Anhui TV, a regional television station in Anhui Province, Mainland China; China Central TV (CCTV), a national and international broadcaster in Mainland China; Hubei TV, a regional broadcaster in Hubei Province, Mainland China; and Phoenix TV, a Hong Kong-based satellite television station. A table showing the number of programs and hours recorded from each source is contained in the readme file.

   

This release contains 202 audio files presented in Waveform Audio File format (.wav), 16000 Hz, single-channel, 16-bit PCM. Each file was audited by a native Chinese speaker following Audit Procedure Specification Version 2.0, which is included in this release. The broadcast auditing process served three principal goals: as a check on the operation of the broadcast collection system equipment, by identifying failed, incomplete or faulty recordings; as an indicator of broadcast schedule changes, by identifying instances when the incorrect program was recorded; and as a guide for data selection, by retaining information about the genre, data type and topic of a program.

   

GALE Phase 2 Chinese Broadcast Conversation Speech is distributed on 4 DVD-ROMs. 2013 Subscription Members will automatically receive two copies of this data. 2013 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$2000.

   

*

   

(2) GALE Phase 2 Chinese Broadcast Conversation Transcripts (LDC2013T08) was developed by LDC and contains transcriptions of approximately 120 hours of Chinese broadcast conversation speech collected in 2006 and 2007 by LDC and the Hong Kong University of Science and Technology (HKUST) during Phase 2 of the DARPA GALE (Global Autonomous Language Exploitation) program.

   

Corresponding audio data is released as GALE Phase 2 Chinese Broadcast Conversation Speech (LDC2013S04).

   

The source broadcast conversation recordings feature interviews, call-in programs and roundtable discussions focusing principally on current events from the following sources: Anhui TV, a regional television station in Anhui Province, Mainland China; China Central TV (CCTV), a national and international broadcaster in Mainland China; Hubei TV, a regional broadcaster in Hubei Province, Mainland China; and Phoenix TV, a Hong Kong-based satellite television station.

   

The transcript files are in plain-text, tab-delimited format (TDF) with UTF-8 encoding, and the transcribed data totals 1,523,373 tokens. The transcripts were created with the LDC-developed transcription tool XTrans, a multi-platform, multilingual, multi-channel transcription tool that supports manual transcription and annotation of audio recordings.
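Because TDF files are plain tab-separated lines, they are straightforward to read programmatically. The column layout below is an assumption based on common XTrans output (the documentation shipped with the release is authoritative), and the sample line is invented for illustration:

```python
# Hypothetical TDF column layout -- an assumption, not taken from this
# release's documentation; verify against the included specification.
TDF_COLUMNS = ["file", "channel", "start", "end", "speaker", "speakerType",
               "dialect", "transcript", "section", "turn", "segment",
               "sectionType", "suType"]

def read_tdf(text):
    """Parse tab-delimited transcript lines into dicts, skipping blank
    lines and ';;'-prefixed comment/header lines."""
    rows = []
    for line in text.splitlines():
        if not line.strip() or line.startswith(";;"):
            continue
        row = dict(zip(TDF_COLUMNS, line.split("\t")))
        row["start"] = float(row["start"])  # segment start time (seconds)
        row["end"] = float(row["end"])      # segment end time (seconds)
        rows.append(row)
    return rows

# Invented sample line for illustration only
sample = "bc_demo.wav\t0\t12.50\t15.75\tspeaker1\tmale\tnative\t你好\t1\t1\t1\tA\t"
segments = read_tdf(sample)
```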

   

The files in this corpus were transcribed by LDC staff and/or by transcription vendors under contract to LDC. Transcribers followed LDC’s quick transcription guidelines (QTR) and quick rich transcription specification (QRTR), both of which are included in the documentation with this release. QTR transcription consists of quick (near-)verbatim, time-aligned transcripts plus speaker identification, with minimal additional mark-up; it does not include sentence unit annotation. QRTR annotation adds structural information, such as topic boundaries and manual sentence unit annotation, to the core components of a quick transcript. Files with QTR in the filename were developed using QTR transcription; files with QRTR in the filename were developed using QRTR annotation.

   

GALE Phase 2 Chinese Broadcast Conversation Transcripts is distributed via web download. 2013 Subscription Members will automatically receive two copies of this data on disc. 2013 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$1500.

   

*

   

(3) NIST 2008-2012 Open Machine Translation (OpenMT) Progress Test Sets (LDC2013T07) was developed by the NIST Multimodal Information Group. This release contains the evaluation sets (source data and human reference translations), DTD, scoring software, and evaluation plans for the Arabic-to-English and Chinese-to-English progress test sets for the NIST OpenMT 2008, 2009, and 2012 evaluations. The test data remained unseen between evaluations and was reused unchanged each time. The package was compiled, and the scoring software developed, at NIST, making use of Chinese and Arabic newswire and web data and reference translations collected and developed by LDC.

   

The objective of the OpenMT evaluation series is to support research in, and help advance the state of the art of, machine translation (MT) technologies -- technologies that translate text between human languages. Input may include all forms of text. The goal is for the output to be an adequate and fluent translation of the original.

   

The MT evaluation series started in 2001 as part of the DARPA TIDES (Translingual Information Detection, Extraction and Summarization) program. Beginning with the 2006 evaluation, the evaluations have been driven and coordinated by NIST as NIST OpenMT. These evaluations provide an important contribution to the direction of research efforts and the calibration of technical capabilities in MT. The OpenMT evaluations are intended to be of interest to all researchers working on the general problem of automatic translation between human languages. To this end, they are designed to be simple, to focus on core technology issues and to be fully supported. For more general information about the NIST OpenMT evaluations, please refer to the NIST OpenMT website.

   

This evaluation kit includes a single Perl script (mteval-v13a.pl) that may be used to produce a translation quality score for one (or more) MT systems. The script works by comparing the system output translation with a set of (expert) reference translations of the same source text. Comparison is based on finding sequences of words in the reference translations that match word sequences in the system output translation.
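The word-sequence matching described above is the basis of BLEU-style scoring. The following is an illustrative sketch of clipped (modified) n-gram precision only, not the actual logic of mteval-v13a.pl, which additionally handles tokenization, multiple n-gram orders, and a brevity penalty:

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def modified_precision(candidate, references, n):
    """Clipped n-gram precision: each candidate n-gram is credited at
    most as many times as it occurs in the best-matching reference."""
    cand_counts = Counter(ngrams(candidate, n))
    if not cand_counts:
        return 0.0
    max_ref = Counter()
    for ref in references:
        for g, c in Counter(ngrams(ref, n)).items():
            max_ref[g] = max(max_ref[g], c)
    clipped = sum(min(c, max_ref[g]) for g, c in cand_counts.items())
    return clipped / sum(cand_counts.values())

# Toy example: one system output against two reference translations
cand = "the cat sat on the mat".split()
refs = ["the cat sat on a mat".split(), "a cat was on the mat".split()]
p1 = modified_precision(cand, refs, 1)  # unigram precision, 5/6 here
```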

   

This release contains 2,748 documents with corresponding source and reference files; the reference files contain four independent human reference translations of the source data. The source data is comprised of Arabic and Chinese newswire and web data collected by LDC in 2007. The table below displays statistics by source, genre, documents, segments and source tokens.

   

                                                                                                                                                                                                                                                                                                                                                         
           

Source     Genre      Documents   Segments   Source Tokens
Arabic     Newswire   84          784        20039
Arabic     Web Data   51          594        14793
Chinese    Newswire   82          688        26923
Chinese    Web Data   40          682        19112

         

   

 

   

NIST 2008-2012 Open Machine Translation (OpenMT) Progress Test Sets is distributed via web download. 2013 Subscription Members will automatically receive two copies of this data on disc. 2013 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$150.

   

 

   

***

   

Reprint of March 2013 Newsletter

   

LDC's March 2013 newsletter may not have reached all intended recipients and is being reprinted below.

   

     

   

LDC’s 20th Anniversary: Concluding a Year of Celebration

   

We’ve enjoyed celebrating our 20th Anniversary this past year (April 2012 - March 2013) and would like to review some highlights before its close.

Our 2012 User Survey, circulated early in 2012, included a special Anniversary section in which respondents were asked to reflect on their opinions of, and dealings with, LDC over the years. We were humbled by the response. Multiple users mentioned that they would not be able to conduct their research without LDC and its data. A full list of survey testimonials is available on the LDC website.

LDC also developed its first-ever timeline (initially published in the April 2012 Newsletter) marking significant milestones in the Consortium’s founding and growth.

In September, we hosted a 20th Anniversary Workshop that brought together many friends and collaborators to discuss the present and future of language resources.

Throughout the year, we conducted several interviews with long-time LDC staff members to document their unique recollections of LDC history and to solicit their opinions on the future of the Consortium. These interviews are available as podcasts on the LDC Blog.

As our Anniversary year draws to a close, one task remains: to thank all of LDC’s past, present and future members and other friends of the Consortium for their loyalty and for their contributions to the community. LDC would not exist if not for its supporters. The variety of relationships that LDC has built over the years is a direct reflection of the vitality, strength and diversity of the community. We thank you all and hope to continue serving your needs in our third decade and beyond.

For a last treat, please visit LDC’s newly launched YouTube channel to enjoy a video montage of the LDC staff interviews featured in the podcast series.

Thank you again for your continued support!

   

New publications

   

(1) 1993-2007 United Nations Parallel Text was developed by Google Research. It consists of United Nations (UN) parliamentary documents from 1993 through 2007 in the official languages of the UN: Arabic, Chinese, English, French, Russian, and Spanish.

   

UN parliamentary documents are available from the UN Official Document System (UN ODS). UN ODS, in its main UNDOC database, contains the full text of all types of UN parliamentary documents. It has complete coverage dating from 1993 and variable coverage before that. Documents exist in one or more of the official languages of the UN: Arabic, Chinese, English, French, Russian, and Spanish. UN ODS also contains a large number of German documents, marked with the language "other", but these are not included in this dataset.

   

LDC has previously released parallel UN parliamentary documents in English, French and Spanish spanning the period 1988-1993: UN Parallel Text (Complete) (LDC94T4A).

   

The data is presented as raw text and word-aligned text. There are 673,670 raw text documents and 520,283 word-aligned documents. The raw text is very close to what was extracted from the original word processing documents in UN ODS (e.g., Word, WordPerfect, PDF), converted to UTF-8 encoding. The word-aligned text was normalized, tokenized, aligned at the sentence level, further broken into sub-sentential chunk pairs, and then aligned at the word level. The sentence, chunk, and word alignment operations were performed separately for each individual language pair.
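The release notes do not describe Google's actual pipeline, but length-based sentence alignment of the kind long used for parallel corpora (in the spirit of Gale-Church) can be sketched as a small dynamic program over 1-1, 1-0, and 0-1 correspondences. This toy omits the 2-1/1-2 merges and the probabilistic cost model of the real method:

```python
def align_sentences(src, tgt, gap_cost=4.0):
    """Toy length-based sentence aligner: choose 1-1, 1-0, or 0-1 moves
    minimizing a cost derived from character-length mismatch."""
    n, m = len(src), len(tgt)
    INF = float("inf")
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if cost[i][j] == INF:
                continue
            if i < n and j < m:  # 1-1 match: penalize length mismatch
                c = cost[i][j] + (abs(len(src[i]) - len(tgt[j]))
                                  / max(len(src[i]), len(tgt[j]), 1))
                if c < cost[i + 1][j + 1]:
                    cost[i + 1][j + 1], back[i + 1][j + 1] = c, (i, j)
            if i < n and cost[i][j] + gap_cost < cost[i + 1][j]:  # 1-0
                cost[i + 1][j], back[i + 1][j] = cost[i][j] + gap_cost, (i, j)
            if j < m and cost[i][j] + gap_cost < cost[i][j + 1]:  # 0-1
                cost[i][j + 1], back[i][j + 1] = cost[i][j] + gap_cost, (i, j)
    # trace back the lowest-cost path, keeping only 1-1 pairs
    pairs, i, j = [], n, m
    while (i, j) != (0, 0):
        pi, pj = back[i][j]
        if i - pi == 1 and j - pj == 1:
            pairs.append((pi, pj))
        i, j = pi, pj
    return list(reversed(pairs))
```

The returned list gives index pairs of aligned sentences; unaligned sentences on either side are simply skipped at `gap_cost`.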

   

1993-2007 United Nations Parallel Text is       distributed on 3 DVD-ROM.

   

2013 Subscription Members will automatically receive two copies of this data provided they have completed the UN Parallel Text Corpus User Agreement. 2013 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$175.

   

*

   

(2) GALE Chinese-English Word Alignment and Tagging Training Part 4 -- Web was developed by LDC and contains 158,387 tokens of word-aligned Chinese and English parallel text enriched with linguistic tags. This material was used as training data in the DARPA GALE (Global Autonomous Language Exploitation) program.

   

Some approaches to statistical machine translation incorporate linguistic knowledge in word-aligned text as a means to improve automatic word alignment and machine translation quality. This is accomplished with two annotation schemes: alignment and tagging. Alignment identifies minimum translation units and translation relations using minimum-match and attachment annotation approaches. A set of word tags and alignment link tags is designed in the tagging scheme to describe these translation units and relations. Tagging adds contextual, syntactic and language-specific features to the alignment annotation.
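A word-aligned, tagged sentence pair of this kind can be modeled with a small data structure. The field names and the "SEM" tag below are hypothetical placeholders for illustration, not the GALE guidelines' actual tag inventory:

```python
from dataclasses import dataclass, field

@dataclass
class AlignmentLink:
    """One translation unit: indices into the source/target token lists,
    plus a link tag describing the translation relation."""
    src_indices: list  # positions in the Chinese token sequence
    tgt_indices: list  # positions in the English token sequence
    link_tag: str      # e.g. "SEM" (semantic) -- hypothetical tag name

@dataclass
class AlignedSentencePair:
    src_tokens: list
    tgt_tokens: list
    links: list = field(default_factory=list)
    word_tags: dict = field(default_factory=dict)  # token index -> word tag

# Toy pair: 我 的 书 <-> "my book"
pair = AlignedSentencePair(src_tokens=["我", "的", "书"],
                           tgt_tokens=["my", "book"])
pair.links.append(AlignmentLink([0, 1], [0], "SEM"))  # 我 的 -> "my"
pair.links.append(AlignmentLink([2], [1], "SEM"))     # 书 -> "book"
```

Note how 的 (DE) attaches to the pronoun here rather than forming its own link, mirroring the guideline that DE is tagged separately only when it is not part of a semantic link.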

   

This release consists of Chinese source web data (newsgroup, weblog) collected by LDC between 2005 and 2010. The distribution by words, character tokens and segments appears below:

   

                                                                                                                                                   
           

Language   Files   Words     CharTokens   Segments
Chinese    1,224   105,591   158,387      4,836

         

   

Note that all token counts are based on the Chinese data only. One token is equivalent to one character, and one word is equivalent to 1.5 characters.
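That ratio can be checked against the table directly: at 1.5 characters per word, the 158,387 character tokens correspond to the reported 105,591 words (an illustrative check only):

```python
char_tokens = 158_387
# 1 word ~ 1.5 characters, so words ~ character tokens / 1.5
words_estimate = round(char_tokens / 1.5)  # 105,591, matching the table
```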

   

The Chinese word alignment tasks consisted of the following components:

- Identifying, aligning, and tagging 8 different types of links
- Identifying, attaching, and tagging local-level unmatched words
- Identifying and tagging sentence/discourse-level unmatched words
- Identifying and tagging all instances of Chinese 的 (DE) except when they were part of a semantic link

   

GALE Chinese-English Word Alignment and Tagging Training Part 4 -- Web is distributed via web download.

   

2013 Subscription Members will automatically receive two copies of this data on disc. 2013 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$1750.

 

 

 

