
ISCApad #179

Friday, May 10, 2013 by Chris Wellekens

5 Resources
5-1 Books
5-1-1 G. Bailly, P. Perrier & E. Vatikiotis-Bateson (eds): Audiovisual Speech Processing

'Audiovisual Speech Processing', edited by G. Bailly, P. Perrier & E. Vatikiotis-Bateson, published by Cambridge University Press.

'When we speak, we configure the vocal tract which shapes the visible motions of the face
and the patterning of the audible speech acoustics. Similarly, we use these visible and
audible behaviors to perceive speech. This book showcases a broad range of research
investigating how these two types of signals are used in spoken communication, how they
interact, and how they can be used to enhance the realistic synthesis and recognition of
audible and visible speech. The volume begins by addressing two important questions about
human audiovisual performance: how auditory and visual signals combine to access the
mental lexicon and where in the brain this and related processes take place. It then
turns to the production and perception of multimodal speech and how structures are
coordinated within and across the two modalities. Finally, the book presents overviews
and recent developments in machine-based speech recognition and synthesis of AV speech. '



5-1-2 Fuchs, Susanne / Weirich, Melanie / Pape, Daniel / Perrier, Pascal (eds.): Speech Planning and Dynamics, Publisher: Peter Lang

Fuchs, Susanne / Weirich, Melanie / Pape, Daniel / Perrier, Pascal (eds.)

Speech Planning and Dynamics

Frankfurt am Main, Berlin, Bern, Bruxelles, New York, Oxford, Wien, 2012. 277 pp., 50 fig., 8 tables

Speech Production and Perception. Vol. 1

Edited by Susanne Fuchs and Pascal Perrier

Print:

ISBN 978-3-631-61479-2 hb.

SFR 60.00 / €* 52.95 / €** 54.50 / € 49.50 / £ 39.60 / US$ 64.95

eBook:

ISBN 978-3-653-01438-9

SFR 63.20 / €* 58.91 / €** 59.40 / € 49.50 / £ 39.60 / US$ 64.95

Order online: www.peterlang.com


5-1-3 Video archive of Odyssey Speaker and Language Recognition Workshop, Singapore 2012
The Odyssey Speaker and Language Recognition Workshop 2012, the workshop of the ISCA SIG on Speaker and Language Characterization, was held in Singapore on 25-28 June 2012. Odyssey 2012 is glad to announce that its video recordings have been included in the ISCA Video Archive: http://www.isca-speech.org/iscaweb/index.php/archive/video-archive

5-1-4 Tuomas Virtanen, Rita Singh, Bhiksha Raj (editors), Techniques for Noise Robustness in Automatic Speech Recognition, Wiley

Techniques for Noise Robustness in Automatic Speech Recognition
Tuomas Virtanen, Rita Singh, Bhiksha Raj (editors)
ISBN: 978-1-1199-7088-0
Publisher: Wiley

Automatic speech recognition (ASR) systems are finding increasing use in everyday life. Many of the commonplace environments where the systems are used are noisy, for example users calling up a voice search system from a busy cafeteria or a street. This can result in degraded speech recordings and adversely affect the performance of speech recognition systems. As the use of ASR systems increases, knowledge of the state-of-the-art in techniques to deal with such problems becomes critical to system and application engineers and researchers who work with or on ASR technologies. This book presents a comprehensive survey of the state-of-the-art in techniques used to improve the robustness of speech recognition systems to these degrading external influences.

Key features:

* Reviews all the main noise-robust ASR approaches, including signal separation, voice activity detection, robust feature extraction, model compensation and adaptation, missing-data techniques and recognition of reverberant speech.
* Acts as a timely exposition of the topic in light of the increasingly widespread use of ASR technology in challenging environments.
* Addresses robustness issues and signal degradation, both key concerns for practitioners of ASR.
* Includes contributions from top ASR researchers from leading research units in the field.


5-1-5 Niebuhr, Oliver, Understanding Prosody: The Role of Context, Function and Communication

Understanding Prosody: The Role of Context, Function and Communication

Ed. by Niebuhr, Oliver

Series: Language, Context and Cognition 13, De Gruyter

http://www.degruyter.com/view/product/186201?format=G or http://linguistlist.org/pubs/books/get-book.cfm?BookID=63238

The volume represents a state-of-the-art snapshot of the research on prosody for phoneticians, linguists and speech technologists. It covers well-known models and languages. How are prosodies linked to speech sounds? What are the relations between prosody and grammar? What does speech perception tell us about prosody, particularly about the constituting elements of intonation and rhythm? The papers of the volume address questions like these with a special focus on how the notion of context-based coding, the knowledge of prosodic functions and the communicative embedding of prosodic elements can advance our understanding of prosody.

 


5-2 Database
5-2-1 ELRA - Language Resources Catalogue - Update (2013-03)

 ELRA - Language Resources Catalogue - Update
    *****************************************************************
   
ELRA is happy to announce that QUAERO Structured Named Entity Language Resources are now available in its catalogue.
A Written Corpus and a Broadcast Resource annotated with Structured Named Entities from the QUAERO Programme are now being released (free for academic research):
   
    ELRA-W0073 Quaero Old Press Extended Named Entity corpus
This corpus consists of the manual annotation of 76 newspaper issues published in 1890-1891 and provided by the French National Library (Bibliothèque Nationale de France). Three different titles are used (Le Temps, La Croix and Le Figaro) for a total of 295 pages. The corpus is fully manually annotated according to the Quaero extended and structured named entity definition.
    For more information, see: http://catalog.elra.info/product_info.php?products_id=1194&language=en
   
    ELRA-S0349 Quaero Broadcast News Extended Named Entity corpus
This corpus consists of the manual annotation of (i) the ESTER 2 manual transcription corpus (see also ELRA-S0338) and (ii) the Quaero Speech Recognition Evaluation corpus (manual and automatic transcriptions coming from 3 different ASR systems). The corpus is fully manually annotated according to the Quaero extended and structured named entity definition.
    For more information, see: http://catalog.elra.info/product_info.php?products_id=1195&language=en
   
These two corpora are described in:
S. Rosset, C. Grouin, K. Fort, O. Galibert, J. Kahn, P. Zweigenbaum. Structured Named Entities in two distinct press corpora: Contemporary Broadcast News and Old Newspapers. In Proc. of LAW VI, 2012.
   
QUAERO is a research and innovation program addressing the automatic processing of multimedia and multilingual content, aiming at the development of new tools for navigating large volumes of text and audiovisual content. The research and development undertaken covers automatic information retrieval, analysis, segmentation and classification of text, speech, music, image and video. The program, supported by OSEO, gathers 32 French and German partners -- large groups, small and medium-sized enterprises, research laboratories and public organizations. The program consists of a number of application projects aiming at industrial targets and markets that are supported by a common shared research structure. Real-world data sets (corpora) are used to define the evaluation tasks and to conduct the research challenges between partners. The use of systematic, periodic technology evaluation makes it possible to assess progress and to select the most promising technical and scientific approaches. After nearly five years of existence, Quaero is a very active ecosystem that has produced in excess of 700 scientific publications, more than 25 awards, numerous top-3 rankings in technology evaluation campaigns, 31 patent applications and many innovative prototypes.
   
    To find out more about QUAERO, please visit the following website: http://www.quaero.org
   
For more information on the catalogue, please contact Valérie Mapelli: mapelli@elda.org
   
    Visit our On-line Catalogue: http://catalog.elra.info
    Visit the Universal Catalogue: http://universal.elra.info
    Archives of ELRA Language Resources Catalogue Updates: http://www.elra.info/LRs-Announcements.html

 

 


5-2-2 ELRA releases free Language Resources

ELRA releases free Language Resources
***************************************************

Anticipating users’ expectations, ELRA has decided to offer a large number of resources for free for academic research use. The offer consists of several sets of speech, text and multimodal resources that are regularly released, for free, as soon as legal aspects are cleared. A first set was released in May 2012 on the occasion of LREC 2012. A second set is now being released.

Whenever this is permitted by our licences, please feel free to use these resources for deriving new resources and depositing them with the ELRA catalogue for community re-use.

Over the last decade, ELRA has compiled a large list of resources into its Catalogue of LRs. ELRA has negotiated distribution rights with the LR owners and made such resources available under fair conditions and within a clear legal framework. Following this initiative, ELRA has also worked on LR discovery and identification with a dedicated team which investigated and listed existing and valuable resources in its 'Universal Catalogue', a list of resources that could be negotiated on a case-by-case basis. At LREC 2010, ELRA introduced the LRE Map, an inventory of LRs, whether developed or used, that were described in LREC papers. This huge inventory listed by the authors themselves constitutes the first 'community-built' catalogue of existing or emerging resources, constantly enriched and updated at major conferences.

Considering the latest trends toward easing the sharing of LRs, from both legal and commercial points of view, ELRA is taking a major role in META-SHARE, a large European open infrastructure for sharing LRs. This infrastructure will allow LR owners, providers and distributors to distribute their LRs through an additional and cost-effective channel.

To obtain the available sets of LRs, please visit the web page below and follow the instructions given online:
http://www.elra.info/Free-LRs,26.html


5-2-3 LDC Newsletter (April 2013)

 

 

In this newsletter:

- Checking in with LDC Data Scholarship Recipients

New publications:

- GALE Phase 2 Chinese Broadcast Conversation Speech
- GALE Phase 2 Chinese Broadcast Conversation Transcripts
- NIST 2008-2012 Open Machine Translation (OpenMT) Progress Test Sets

- Reprint of March 2013 Newsletter

Checking in with LDC Data Scholarship Recipients

The LDC Data Scholarship program provides college and university students with access to LDC data at no cost. Students are asked to complete an application consisting of a proposal describing their intended use of the data and a letter of support from their thesis adviser. LDC introduced the Data Scholarship program during the Fall 2010 semester. Since that time, more than thirty individual students and student research groups have been awarded no-cost copies of LDC data for their research. Here is an update on the work of a few of the student recipients:

   

         
• Leili Javadpour - Louisiana State University (USA), Engineering Science. Leili was awarded copies of the BBN Pronoun Coreference and Entity Type Corpus (LDC2005T33) and Message Understanding Conference (MUC) 7 (LDC2001T02) for her work in pronominal anaphora resolution. Her research involves a machine learning approach to pronominal anaphora resolution in unstructured text, in which learning is applied to a set of new features selected from other computational linguistics research. She evaluated the approach on the BBN Pronoun Coreference and Entity Type Corpus and obtained encouraging results of 89%. Her future plans involve evaluating the approach on Message Understanding Conference (MUC) 7 as well as on other genres of annotated text, such as stories and conversation transcripts.

   

         
• Olga Nickolaevna Ladoshko - National Technical University of Ukraine “KPI” (Ukraine), graduate student, Acoustics and Acoustoelectronics. Olga was awarded copies of NTIMIT (LDC93S2) and STC-TIMIT 1.0 (LDC2008S03) for her research in automatic speech recognition for Ukrainian. Olga used NTIMIT in the first phase of her research; one problem she investigated was the influence of telephone communication channels on the reliability of phoneme recognition for different types of parametrization and configurations of speech recognition systems built with the HTK tools. The second phase involves using NTIMIT to test an algorithm for detecting voice in non-stationary noise. Her future work with STC-TIMIT 1.0 will include an experiment to develop an improved speech recognition algorithm that increases accuracy under noisy conditions.

• Genevieve Sapijaszko - University of Central Florida (USA), PhD candidate, Electrical and Computer Engineering. Genevieve was awarded copies of the TIMIT Acoustic-Phonetic Continuous Speech Corpus (LDC93S1) and YOHO Speaker Verification (LDC94S16) for her work in digital signal processing. Her experiment used vector quantization (VQ) and Euclidean distance to recognize a speaker's identity, extracting features from the speech signal with the following methods: RCC, MFCC, MFCC + ΔMFCC, LPC, LPCC, PLPCC and RASTA PLPCC. Based on the results, in a noise-free environment MFCC (at an average of 94%) is the best feature extraction method when used in conjunction with the VQ model. Adding ΔMFCC showed no significant improvement in recognition rate. When comparing three phrases of differing length, the two longer phrases had very similar recognition rates, but the shortest phrase, at 0.5 seconds, had a noticeably lower recognition rate across methods. In terms of recognition time, MFCC was also faster than the other methods. Genevieve and her research team concluded that MFCC in a noise-free environment was the best method in terms of both recognition rate and recognition time. (A minimal sketch of this kind of MFCC + VQ pipeline appears after this list.)

• John Steinberg - Temple University (USA), MS candidate, Electrical and Computer Engineering. John was awarded copies of the CALLHOME Mandarin Chinese Lexicon (LDC96L15) and CALLHOME Mandarin Chinese Transcripts (LDC96T16) for his work in speech recognition. John used the CALLHOME Mandarin lexicon and transcripts to investigate the integration of Bayesian nonparametric techniques into speech recognition systems. These techniques can detect the underlying structure of the data and, in theory, generate better acoustic models than typical parametric approaches such as HMMs. His work investigated one such model, Dirichlet process mixtures, in conjunction with three variational Bayesian inference algorithms for acoustic modeling. The scope of the work was limited to a phoneme classification problem, since John's goal was to determine the viability of these algorithms for acoustic modeling.

   

One goal of his research group is to develop a speech recognition system that is robust to variations in the acoustic channel. The group is also interested in building acoustic models that generalize well across languages. For these reasons, both CALLHOME English and CALLHOME Mandarin data were used to help determine whether these new Bayesian nonparametric models were prone to any language-specific artifacts. The two languages, though phonetically very different, did not yield significantly different performance. Furthermore, one variational inference algorithm, accelerated variational Dirichlet process mixtures (AVDPM), was found to perform well on extremely large data sets.
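Purely as an illustration of the MFCC + vector quantization approach mentioned in Genevieve's item above (not the recipients' actual code), here is a minimal Python sketch; the librosa and scikit-learn calls are standard, but the file names, sampling rate and codebook size are assumptions.

# Minimal MFCC + vector-quantization speaker ID sketch (illustrative only).
# Assumes librosa and scikit-learn are installed; file paths are placeholders.
import numpy as np
import librosa
from sklearn.cluster import KMeans

def mfcc_features(wav_path, n_mfcc=13):
    """Return an (n_frames, n_mfcc) matrix of MFCCs for one utterance."""
    y, sr = librosa.load(wav_path, sr=16000)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def train_codebook(wav_paths, codebook_size=32):
    """Train one VQ codebook (KMeans centroids) per speaker."""
    feats = np.vstack([mfcc_features(p) for p in wav_paths])
    return KMeans(n_clusters=codebook_size, n_init=10, random_state=0).fit(feats)

def avg_distortion(codebook, feats):
    """Mean Euclidean distance from each frame to its nearest centroid."""
    dists = np.linalg.norm(
        feats[:, None, :] - codebook.cluster_centers_[None, :, :], axis=2)
    return dists.min(axis=1).mean()

def identify(test_wav, codebooks):
    """Pick the speaker whose codebook gives the lowest average distortion."""
    feats = mfcc_features(test_wav)
    return min(codebooks, key=lambda spk: avg_distortion(codebooks[spk], feats))

# Hypothetical usage:
# codebooks = {"spk1": train_codebook(["spk1_a.wav", "spk1_b.wav"]),
#              "spk2": train_codebook(["spk2_a.wav", "spk2_b.wav"])}
# print(identify("unknown.wav", codebooks))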

   

   

   

New publications
   

   

     

   

(1) GALE Phase 2 Chinese Broadcast Conversation Speech (LDC2013S04) was developed by LDC and is comprised of approximately 120 hours of Chinese broadcast conversation speech collected in 2006 and 2007 by LDC and the Hong Kong University of Science and Technology (HKUST) during Phase 2 of the DARPA GALE (Global Autonomous Language Exploitation) Program.

   

Corresponding transcripts are released as GALE Phase 2 Chinese Broadcast Conversation Transcripts (LDC2013T08).

   

Broadcast audio for the GALE program was collected at LDC's Philadelphia, PA, USA facilities and at three remote collection sites: HKUST (Chinese); Medianet, Tunis, Tunisia (Arabic); and MTC, Rabat, Morocco (Arabic). The combined local and outsourced broadcast collection supported GALE at a rate of approximately 300 hours per week of programming from more than 50 broadcast sources, for a total of over 30,000 hours of collected broadcast audio over the life of the program.

   

The broadcast conversation recordings in this release feature interviews, call-in programs and roundtable discussions focusing principally on current events from the following sources: Anhui TV, a regional television station in Mainland China, Anhui Province; China Central TV (CCTV), a national and international broadcaster in Mainland China; Hubei TV, a regional broadcaster in Mainland China, Hubei Province; and Phoenix TV, a Hong Kong-based satellite television station. A table showing the number of programs and hours recorded from each source is contained in the readme file.

   

This release contains 202 audio files presented in Waveform Audio File format (.wav), 16000 Hz single-channel 16-bit PCM. Each file was audited by a native Chinese speaker following Audit Procedure Specification Version 2.0, which is included in this release. The broadcast auditing process served three principal goals: as a check on the operation of the broadcast collection system equipment by identifying failed, incomplete or faulty recordings; as an indicator of broadcast schedule changes by identifying instances when the incorrect program was recorded; and as a guide for data selection by retaining information about the genre, data type and topic of a program.
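As a small aside (not part of the LDC release), here is a minimal Python sketch, using only the standard-library wave module, for checking that a file matches the stated 16 kHz, single-channel, 16-bit PCM format; the file name in the usage note is a placeholder.

# Check that a .wav file is 16 kHz, mono, 16-bit PCM (illustrative sketch).
import wave

def check_gale_wav(path, expected_rate=16000):
    with wave.open(path, "rb") as w:
        ok = (w.getframerate() == expected_rate
              and w.getnchannels() == 1
              and w.getsampwidth() == 2)      # 2 bytes per sample = 16-bit
        duration = w.getnframes() / w.getframerate()
    return ok, duration

# Hypothetical usage:
# ok, seconds = check_gale_wav("CCTV_example.wav")
# print(ok, round(seconds, 1))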

   

GALE Phase 2 Chinese Broadcast Conversation Speech is distributed on 4 DVD-ROMs. 2013 Subscription Members will automatically receive two copies of this data. 2013 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$2000.

   

*

   

(2) GALE Phase 2 Chinese Broadcast Conversation Transcripts (LDC2013T08) was developed by LDC and contains transcriptions of approximately 120 hours of Chinese broadcast conversation speech collected in 2006 and 2007 by LDC and the Hong Kong University of Science and Technology (HKUST) during Phase 2 of the DARPA GALE (Global Autonomous Language Exploitation) Program.

   

Corresponding audio data is released as GALE Phase 2 Chinese Broadcast Conversation Speech (LDC2013S04).

   

The source broadcast conversation recordings feature interviews, call-in programs and roundtable discussions focusing principally on current events from the following sources: Anhui TV, a regional television station in Mainland China, Anhui Province; China Central TV (CCTV), a national and international broadcaster in Mainland China; Hubei TV, a regional broadcaster in Mainland China, Hubei Province; and Phoenix TV, a Hong Kong-based satellite television station.

   

The transcript files are in plain-text, tab-delimited format (TDF) with UTF-8 encoding, and the transcribed data totals 1,523,373 tokens. The transcripts were created with the LDC-developed transcription tool XTrans, a multi-platform, multilingual, multi-channel transcription tool that supports manual transcription and annotation of audio recordings.
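To illustrate how such tab-delimited transcript files might be loaded, here is a minimal Python sketch; the real column layout is defined in the corpus documentation, so the column names below are assumptions rather than the official TDF schema.

# Read a tab-delimited (TDF) transcript file into a list of dicts (illustrative).
# NOTE: the real column order is defined in the corpus documentation; the
# names below are assumptions for this sketch, not the official TDF schema.
import csv

ASSUMED_COLUMNS = ["file", "channel", "start", "end", "speaker", "transcript"]

def read_tdf(path):
    rows = []
    with open(path, encoding="utf-8", newline="") as f:
        for fields in csv.reader(f, delimiter="\t"):
            if not fields or fields[0].startswith(";;"):   # skip comment/header lines (assumed convention)
                continue
            rows.append(dict(zip(ASSUMED_COLUMNS, fields)))
    return rows

# Hypothetical usage:
# segments = read_tdf("example_transcript.tdf")
# print(len(segments), segments[0]["transcript"] if segments else None)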

   

The files in this corpus were transcribed by LDC staff and/or by transcription vendors under contract to LDC. Transcribers followed LDC’s quick transcription guidelines (QTR) and quick rich transcription specification (QRTR), both of which are included in the documentation with this release. QTR transcription consists of quick (near-)verbatim, time-aligned transcripts plus speaker identification with minimal additional mark-up. It does not include sentence unit annotation. QRTR annotation adds structural information such as topic boundaries and manual sentence unit annotation to the core components of a quick transcript. Files with QTR as part of the filename were developed using QTR transcription. Files with QRTR in the filename indicate QRTR transcription.

   

GALE Phase 2 Chinese Broadcast Conversation Transcripts is distributed via web download. 2013 Subscription Members will automatically receive two copies of this data on disc. 2013 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$1500.

   

*

   

(3) NIST 2008-2012 Open Machine Translation (OpenMT) Progress Test Sets (LDC2013T07) was developed by NIST Multimodal Information Group. This release contains the evaluation sets (source data and human reference translations), DTD, scoring software, and evaluation plans for the Arabic-to-English and Chinese-to-English progress test sets for the NIST OpenMT 2008, 2009, and 2012 evaluations. The test data remained unseen between evaluations and was reused unchanged each time. The package was compiled, and scoring software was developed, at NIST, making use of Chinese and Arabic newswire and web data and reference translations collected and developed by LDC.

   

The objective of the OpenMT evaluation series is to support research in, and help advance the state of the art of, machine translation (MT) technologies -- technologies that translate text between human languages. Input may include all forms of text. The goal is for the output to be an adequate and fluent translation of the original.

   

The MT evaluation series started in 2001 as part of the DARPA TIDES (Translingual Information Detection, Extraction and Summarization) program. Beginning with the 2006 evaluation, the evaluations have been driven and coordinated by NIST as NIST OpenMT. These evaluations provide an important contribution to the direction of research efforts and the calibration of technical capabilities in MT. The OpenMT evaluations are intended to be of interest to all researchers working on the general problem of automatic translation between human languages. To this end, they are designed to be simple, to focus on core technology issues and to be fully supported. For more general information about the NIST OpenMT evaluations, please refer to the NIST OpenMT website.

   

This evaluation kit includes a single Perl script (mteval-v13a.pl) that may be used to produce a translation quality score for one (or more) MT systems. The script works by comparing the system output translation with a set of (expert) reference translations of the same source text. Comparison is based on finding sequences of words in the reference translations that match word sequences in the system output translation.
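To make the idea of matching word sequences against reference translations concrete, here is a minimal Python sketch of modified n-gram precision, the core quantity behind BLEU-style scoring; it illustrates the principle only and is not a reimplementation of mteval-v13a.pl.

# Modified n-gram precision against multiple references (illustrative sketch).
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def modified_precision(hypothesis, references, n):
    """Clip hypothesis n-gram counts by the maximum count in any reference."""
    hyp = ngrams(hypothesis, n)
    max_ref = Counter()
    for ref in references:
        for gram, count in ngrams(ref, n).items():
            max_ref[gram] = max(max_ref[gram], count)
    clipped = sum(min(count, max_ref[gram]) for gram, count in hyp.items())
    total = sum(hyp.values())
    return clipped / total if total else 0.0

# Hypothetical usage:
# hyp = "the cat sat on the mat".split()
# refs = ["the cat is on the mat".split(), "there is a cat on the mat".split()]
# print(modified_precision(hyp, refs, 2))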

   

This release contains 2,748 documents with corresponding source and reference files, the latter of which contain four independent human reference translations of the source data. The source data is comprised of Arabic and Chinese newswire and web data collected by LDC in 2007. The table below displays statistics by source, genre, documents, segments and source tokens.

   

                                                                                                                                                                                                                                                                                                                                                         
           

Source    Genre      Documents   Segments   Source Tokens
Arabic    Newswire   84          784        20039
Arabic    Web Data   51          594        14793
Chinese   Newswire   82          688        26923
Chinese   Web Data   40          682        19112

         

   

 

   

NIST 2008-2012 Open Machine Translation (OpenMT) Progress Test Sets is distributed via web download. 2013 Subscription Members will automatically receive two copies of this data on disc. 2013 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$150.

   

 

   

***

   

Reprint of March 2013 Newsletter

   

LDC's March 2013 newsletter may not have reached all intended recipients and is being reprinted below.

   

     

   

LDC’s 20th Anniversary: Concluding a Year of Celebration

   

We’ve enjoyed celebrating our 20th Anniversary this last year (April 2012 - March 2013) and would like to review some highlights before its close.

Our 2012 User Survey, circulated early in 2012, included a special Anniversary section in which respondents were asked to reflect on their opinions of, and dealings with, LDC over the years. We were humbled by the response. Multiple users mentioned that they would not be able to conduct their research without LDC and its data. For a full list of survey testimonials, please click here.

LDC also developed its first-ever timeline (initially published in the April 2012 Newsletter) marking significant milestones in the consortium’s founding and growth.

In September, we hosted a 20th Anniversary Workshop that brought together many friends and collaborators to discuss the present and future of language resources.

Throughout the year, we conducted several interviews of long-time LDC staff members to document their unique recollections of LDC history and to solicit their opinions on the future of the Consortium. These interviews are available as podcasts on the LDC Blog.

As our Anniversary year draws to a close, one task remains: to thank all of LDC’s past, present and future members and other friends of the Consortium for their loyalty and for their contributions to the community. LDC would not exist if not for its supporters. The variety of relationships that LDC has built over the years is a direct reflection of the vitality, strength and diversity of the community. We thank you all and hope that we continue to serve your needs in our third decade and beyond.

For a last treat, please visit LDC’s newly-launched YouTube channel to enjoy this video montage of the LDC staff interviews featured in the podcast series.

Thank you again for your continued support!

   

New publications

   

(1) 1993-2007 United Nations Parallel Text was developed by Google Research. It consists of United Nations (UN) parliamentary documents from 1993 through 2007 in the official languages of the UN: Arabic, Chinese, English, French, Russian, and Spanish.

   

UN parliamentary documents are available from the UN Official Document System (UN ODS). UN ODS, in its main UNDOC database, contains the full text of all types of UN parliamentary documents. It has complete coverage dating from 1993 and variable coverage before that. Documents exist in one or more of the official languages of the UN: Arabic, Chinese, English, French, Russian, and Spanish. UN ODS also contains a large number of German documents, marked with the language 'other', but these are not included in this dataset.

   

LDC has released parallel UN parliamentary documents in English, French and Spanish spanning the period 1988-1993 as UN Parallel Text (Complete) (LDC94T4A).

   

The data is presented as raw text and word-aligned text. There are 673,670 raw text documents and 520,283 word-aligned documents. The raw text is very close to what was extracted from the original word processing documents in UN ODS (e.g., Word, WordPerfect, PDF), converted to UTF-8 encoding. The word-aligned text was normalized, tokenized, aligned at the sentence level, further broken into sub-sentential chunk pairs, and then aligned at the word level. The sentence, chunk, and word alignment operations were performed separately for each individual language pair.

   

1993-2007 United Nations Parallel Text is distributed on 3 DVD-ROMs.

   

2013 Subscription Members will automatically receive two copies of this data provided they have completed the UN Parallel Text Corpus User Agreement. 2013 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$175.

   

*

   

(2) GALE Chinese-English Word Alignment and Tagging Training Part 4 -- Web was developed by LDC and contains 158,387 tokens of word-aligned Chinese and English parallel text enriched with linguistic tags. This material was used as training data in the DARPA GALE (Global Autonomous Language Exploitation) program.

   

Some approaches to statistical machine translation include the incorporation of linguistic knowledge in word-aligned text as a means to improve automatic word alignment and machine translation quality. This is accomplished with two annotation schemes: alignment and tagging. Alignment identifies minimum translation units and translation relations by using minimum-match and attachment annotation approaches. A set of word tags and alignment link tags are designed in the tagging scheme to describe these translation units and relations. Tagging adds contextual, syntactic and language-specific features to the alignment annotation.
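As a rough sketch of how the alignment-plus-tagging annotation described above might be represented in code (the class and field names are assumptions for illustration, not the release's actual file format):

# Illustrative data structure for one tagged word-alignment link (assumed names).
from dataclasses import dataclass, field
from typing import List

@dataclass
class AlignmentLink:
    source_indices: List[int]          # token positions on the Chinese side
    target_indices: List[int]          # token positions on the English side
    link_tag: str                      # alignment link tag for this translation unit
    source_word_tags: List[str] = field(default_factory=list)   # word tags, source side
    target_word_tags: List[str] = field(default_factory=list)   # word tags, target side

# Hypothetical usage: one minimum translation unit aligned across languages.
# link = AlignmentLink([3], [5, 6], "SEM", ["NN"], ["DT", "NN"])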

   

This release consists of Chinese source web data (newsgroup, weblog) collected by LDC between 2005 and 2010. The distribution by words, character tokens and segments appears below:

   

                                                                                                                                                   
           

Language   Files   Words     CharTokens   Segments
Chinese    1,224   105,591   158,387      4,836

         

   

Note that all token counts are based on the Chinese data only. One token is equivalent to one character, and one word is equivalent to 1.5 characters; for example, the 158,387 character tokens above correspond to 158,387 / 1.5 ≈ 105,591 words, as listed in the table.

   

The Chinese word alignment tasks consisted of the following components:

- Identifying, aligning, and tagging 8 different types of links
- Identifying, attaching, and tagging local-level unmatched words
- Identifying and tagging sentence/discourse-level unmatched words
- Identifying and tagging all instances of Chinese 的 (DE) except when they were a part of a semantic link

   

GALE Chinese-English Word Alignment and Tagging Training Part 4 -- Web is distributed via web download.

   

2013 Subscription Members will automatically receive two copies of this data on disc. 2013 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$1750.

 

 

 


5-2-4 Appen Butler Hill

 

Appen Butler Hill

A global leader in linguistic technology solutions

RECENT CATALOG ADDITIONS—MARCH 2012

1. Speech Databases

1.1 Telephony


Language                  Database Type    Catalogue Code   Speakers   Status
Bahasa Indonesia          Conversational   BAH_ASR001       1,002      Available
Bengali                   Conversational   BEN_ASR001       1,000      Available
Bulgarian                 Conversational   BUL_ASR001       217        Available shortly
Croatian                  Conversational   CRO_ASR001       200        Available shortly
Dari                      Conversational   DAR_ASR001       500        Available
Dutch                     Conversational   NLD_ASR001       200        Available
Eastern Algerian Arabic   Conversational   EAR_ASR001       496        Available
English (UK)              Conversational   UKE_ASR001       1,150      Available
Farsi/Persian             Scripted         FAR_ASR001       789        Available
Farsi/Persian             Conversational   FAR_ASR002       1,000      Available
French (EU)               Conversational   FRF_ASR001       563        Available
French (EU)               Voicemail        FRF_ASR002       550        Available
German                    Voicemail        DEU_ASR002       890        Available
Hebrew                    Conversational   HEB_ASR001       200        Available shortly
Italian                   Conversational   ITA_ASR003       200        Available shortly
Italian                   Voicemail        ITA_ASR004       550        Available
Kannada                   Conversational   KAN_ASR001       1,000      In development
Pashto                    Conversational   PAS_ASR001       967        Available
Portuguese (EU)           Conversational   PTP_ASR001       200        Available shortly
Romanian                  Conversational   ROM_ASR001       200        Available shortly
Russian                   Conversational   RUS_ASR001       200        Available
Somali                    Conversational   SOM_ASR001       1,000      Available
Spanish (EU)              Voicemail        ESO_ASR002       500        Available
Turkish                   Conversational   TUR_ASR001       200        Available
Urdu                      Conversational   URD_ASR001       1,000      Available

1.2 Wideband

Language            Database Type   Catalogue Code   Speakers   Status
English (US)        Studio          USE_ASR001       200        Available
French (Canadian)   Home/Office     FRC_ASR002       120        Available
German              Studio          DEU_ASR001       127        Available
Thai                Home/Office     THA_ASR001       100        Available
Korean              Home/Office     KOR_ASR001       100        Available

2. Pronunciation Lexica

Appen Butler Hill has considerable experience in providing a variety of lexicon types. These include:

- Pronunciation Lexica providing phonemic representation, syllabification, and stress (primary and secondary as appropriate)

- Part-of-speech tagged Lexica providing grammatical and semantic labels

- Other reference text-based materials, including spelling/misspelling lists, spell-check dictionaries, mappings of colloquial language to standard forms, and orthographic normalization lists

Over a period of 15 years, Appen Butler Hill has generated a significant volume of licensable material for a wide range of languages. For holdings information in a given language or to discuss any customized development efforts, please contact: sales@appenbutlerhill.com

3. Named Entity Corpora

Language        Catalogue Code   Words
Arabic          ARB_NER001       500,000
English         ENI_NER001       500,000
Farsi/Persian   FAR_NER001       500,000
Korean          KOR_NER001       500,000
Japanese        JPY_NER001       500,000
Russian         RUS_NER001       500,000
Mandarin        MAN_NER001       500,000
Urdu            URD_NER001       500,000

These NER corpora contain text material from a variety of sources and are tagged for the following named entities: Person, Organization, Location, Nationality, Religion, Facility, Geo-Political Entity, Titles, Quantities.


4. Other Language Resources

Morphological Analyzers – Farsi/Persian & Urdu

Arabic Thesaurus

Language Analysis Documentation – multiple languages

 

For additional information on these resources, please contact: sales@appenbutlerhill.com

5. Customized Requests and Package Configurations

Appen Butler Hill is committed to providing a low-risk, high-quality, reliable solution and has worked in 130+ languages to date, supporting both large global corporations and government organizations.

We would be glad to discuss any customized requests or package configurations and to prepare a customized proposal to meet your needs.

6. Contact Information

Prithivi Pradeep

Business Development Manager

ppradeep@appenbutlerhill.com

+61 2 9468 6370

Tom Dibert

Vice President, Business Development, North America

tdibert@appenbutlerhill.com

+1-315-339-6165

www.appenbutlerhill.com


5-2-5 OFROM: first corpus of French spoken in French-speaking Switzerland
We would like to announce that OFROM, the first corpus of French spoken in French-speaking Switzerland (Suisse romande), is now online. In its current version, the archive contains about 15 hours of speech, transcribed in standard orthography with the Praat software. A concordancer makes it possible to search the corpus and to download the audio excerpts associated with the transcriptions.
 
To access the data and read a more complete description of the corpus, please visit: http://www.unine.ch/ofrom

5-3 Software
5-3-1 Matlab toolbox for glottal analysis

I am pleased to announce that our Matlab toolbox for glottal analysis is now available on the web at:

 

http://tcts.fpms.ac.be/~drugman/Toolbox/

 

This toolbox includes the following modules:

 

- Pitch and voiced-unvoiced decision estimation

- Speech polarity detection

- Glottal Closure Instant determination

- Glottal flow estimation

 

I am also glad to share my PhD thesis, entitled “Glottal Analysis and its Applications”:

http://tcts.fpms.ac.be/~drugman/files/DrugmanPhDThesis.pdf

 

where you will find applications in speech synthesis, speaker recognition, voice pathology detection, and expressive speech analysis.

 

Hoping that this might be useful to you, and hoping to see you soon,

 

Thomas Drugman


5-3-2 ROCme!: a free tool for audio corpora recording and management

ROCme!: a new free tool for recording and managing audio corpora.

The ROCme! software enables streamlined, autonomous and paperless management of read-speech corpus recordings.

Key features:
- free
- compatible with Windows and Mac
- configurable interface for collecting speaker metadata
- speakers scroll through the sentences on screen and record themselves autonomously
- configurable audio format

Downloadable at:
www.ddl.ish-lyon.cnrs.fr/rocme

 

5-3-3 VocalTractLab 2.0: A tool for articulatory speech synthesis

VocalTractLab 2.0: A tool for articulatory speech synthesis

It is my pleasure to announce the release of the new major version 2.0 of VocalTractLab. VocalTractLab is an articulatory speech synthesizer and a tool to visualize and explore the mechanism of speech production with regard to articulation, acoustics, and control. It is available from http://www.vocaltractlab.de/index.php?page=vocaltractlab-download .
Compared to version 1.0, the new version brings many improvements in terms of the implemented models of the vocal tract, the vocal folds, the acoustic simulation, and articulatory control, as well as in terms of the user interface. Most importantly, the new version comes together with a manual.

If you like, give it a try. Reports on bugs and any other feedback are welcome.

Peter Birkholz


5-3-4 Voice analysis toolkit
Having just completed my PhD, I have made the algorithms I developed during it available online: https://github.com/jckane/Voice_Analysis_Toolkit
The Voice Analysis Toolkit contains algorithms for glottal source and voice quality analysis. In making the code available online I hope that people in the speech processing community can benefit from it. I would really appreciate it if you could include a link to this in the software section of the next ISCApad (section 5-3).
 
thanks for this.
John
 
--
Researcher
 
Phonetics and Speech Laboratory (Room 4074) Arts Block,
Centre for Language and Communication Studies,
School of Linguistics, Speech and Communication Sciences, Trinity College Dublin, College Green Dublin 2
Phone:    (+353) 1 896 1348 Website:  http://www.tcd.ie/slscs/postgraduate/phd-masters-research/student-pages/johnkane.php
Check out our workshop!! http://muster.ucd.ie/workshops/iast/


