ISCA - International Speech
Communication Association



ISCApad #183

Wednesday, September 11, 2013 by Chris Wellekens

5 Resources
5-1 Books
5-1-1 G. Bailly, P. Perrier & E. Vatikiotis-Bateson (eds.): Audiovisual Speech Processing

'Audiovisual Speech Processing', edited by G. Bailly, P. Perrier & E. Vatikiotis-Bateson, Cambridge University Press.

'When we speak, we configure the vocal tract which shapes the visible motions of the face
and the patterning of the audible speech acoustics. Similarly, we use these visible and
audible behaviors to perceive speech. This book showcases a broad range of research
investigating how these two types of signals are used in spoken communication, how they
interact, and how they can be used to enhance the realistic synthesis and recognition of
audible and visible speech. The volume begins by addressing two important questions about
human audiovisual performance: how auditory and visual signals combine to access the
mental lexicon and where in the brain this and related processes take place. It then
turns to the production and perception of multimodal speech and how structures are
coordinated within and across the two modalities. Finally, the book presents overviews
and recent developments in machine-based speech recognition and synthesis of AV speech. '



5-1-2 Fuchs, Susanne / Weirich, Melanie / Pape, Daniel / Perrier, Pascal (eds.): Speech Planning and Dynamics, Peter Lang

Fuchs, Susanne / Weirich, Melanie / Pape, Daniel / Perrier, Pascal (eds.)

Speech Planning and Dynamics

Frankfurt am Main, Berlin, Bern, Bruxelles, New York, Oxford, Wien, 2012. 277 pp., 50 fig., 8 tables

Speech Production and Perception. Vol. 1

Edited by Susanne Fuchs and Pascal Perrier

Print:

ISBN 978-3-631-61479-2 hb.

SFR 60.00 / €* 52.95 / €** 54.50 / € 49.50 / £ 39.60 / US$ 64.95

eBook:

ISBN 978-3-653-01438-9

SFR 63.20 / €* 58.91 / €** 59.40 / € 49.50 / £ 39.60 / US$ 64.95

Order online: www.peterlang.com


5-1-3 Video archive of Odyssey Speaker and Language Recognition Workshop, Singapore 2012
Odyssey Speaker and Language Recognition Workshop 2012, the workshop of ISCA SIG Speaker and Language Characterization, was held in Singapore on 25-28 June 2012. Odyssey 2012 is glad to announce that its video recordings have been included in the ISCA Video Archive. http://www.isca-speech.org/iscaweb/index.php/archive/video-archive

5-1-4 Tuomas Virtanen, Rita Singh, Bhiksha Raj (editors): Techniques for Noise Robustness in Automatic Speech Recognition, Wiley

Techniques for Noise Robustness in Automatic Speech Recognition
Tuomas Virtanen, Rita Singh, Bhiksha Raj (editors)
ISBN: 978-1-1199-7088-0
Publisher: Wiley

Automatic speech recognition (ASR) systems are finding increasing use in everyday life. Many of the commonplace environments where the systems are used are noisy, for example users calling up a voice search system from a busy cafeteria or a street. This can result in degraded speech recordings and adversely affect the performance of speech recognition systems. As the use of ASR systems increases, knowledge of the state-of-the-art in techniques to deal with such problems becomes critical to system and application engineers and researchers who work with or on ASR technologies. This book presents a comprehensive survey of the state-of-the-art in techniques used to improve the robustness of speech recognition systems to these degrading external influences.

Key features:

*Reviews all the main noise-robust ASR approaches, including signal separation, voice activity detection, robust feature extraction, model compensation and adaptation, missing-data techniques and recognition of reverberant speech.
*Acts as a timely exposition of the topic in light of the increasingly widespread use of ASR technology in challenging environments.
*Addresses robustness issues and signal degradation, both key concerns for practitioners of ASR.
*Includes contributions from top ASR researchers from leading research units in the field.
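As a toy illustration of the kind of front-end technique such surveys cover (this sketch is not taken from the book), magnitude spectral subtraction removes an estimate of the noise spectrum from each analysis frame before resynthesis:

```python
import numpy as np

def spectral_subtraction(noisy, noise_est, frame=256, hop=128):
    """Subtract an estimated noise magnitude spectrum from each frame,
    keeping the noisy phase, and reconstruct by overlap-add."""
    window = np.hanning(frame)
    out = np.zeros(len(noisy))
    # Crude noise estimate: magnitude spectrum of one noise-only frame.
    noise_mag = np.abs(np.fft.rfft(noise_est[:frame] * window))
    for start in range(0, len(noisy) - frame, hop):
        seg = noisy[start:start + frame] * window
        spec = np.fft.rfft(seg)
        # Spectral floor (0.1 of the noisy magnitude) limits musical noise.
        mag = np.maximum(np.abs(spec) - noise_mag, 0.1 * np.abs(spec))
        out[start:start + frame] += np.fft.irfft(mag * np.exp(1j * np.angle(spec)))
    return out
```

Real systems use smoothed noise tracking and over-subtraction factors; this minimal version only shows the principle.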


5-1-5 Niebuhr, Oliver (ed.): Understanding Prosody: The Role of Context, Function and Communication

Understanding Prosody: The Role of Context, Function and Communication

Ed. by Niebuhr, Oliver

Series: Language, Context and Cognition 13, De Gruyter

http://www.degruyter.com/view/product/186201?format=G or http://linguistlist.org/pubs/books/get-book.cfm?BookID=63238

The volume represents a state-of-the-art snapshot of the research on prosody for phoneticians, linguists and speech technologists. It covers well-known models and languages. How are prosodies linked to speech sounds? What are the relations between prosody and grammar? What does speech perception tell us about prosody, particularly about the constituting elements of intonation and rhythm? The papers of the volume address questions like these with a special focus on how the notion of context-based coding, the knowledge of prosodic functions and the communicative embedding of prosodic elements can advance our understanding of prosody.

 


5-1-6 Albert Di Cristo: « La Prosodie de la Parole : Une Introduction », Editions de Boeck-Solal (296 pp.)

Albert Di Cristo: « La Prosodie de la Parole : Une Introduction » (Speech Prosody: An Introduction), Editions de Boeck-Solal, 296 pp.

Contents:
Foreword; Introduction;
Ch. 1: Elements of definition;
Ch. 2: The place of prosody in the language sciences and in the study of communication;
Ch. 3: Prosody on the two sides of oral interpersonal communication (production and comprehension);
Ch. 4: Prosody and the brain;
Ch. 5: The materiality of prosody;
Ch. 6: Levels of analysis and representation of prosody;
Ch. 7: Theories and models of prosody and their formal apparatus;
Ch. 8: The multiple functions of prosody;
Ch. 9: The relations of prosody to meaning;
Epilogue;
Suggested reading;
Index of terms;
Index of proper names.

5-2 Database
5-2-1 ELRA - Language Resources Catalogue - Update (2013-05)

 *****************************************************************
    ELRA - Language Resources Catalogue - Update  May 2013
    *****************************************************************
   
    We are happy to announce that 10 new Pronunciation Dictionaries and 1 new Evaluation Package are now available in our catalogue.
   
    The GlobalPhone Pronunciation Dictionaries: GlobalPhone is a multilingual speech and text database collected at Karlsruhe University, Germany. The GlobalPhone pronunciation dictionaries contain the pronunciations of all word forms found in the transcription data of the GlobalPhone speech & text database. The pronunciation dictionaries are currently available in 10 languages: Arabic (29230 entries/27059 words), Bulgarian (20193 entries), Czech (33049 entries/32942 words), French (36837 entries/20710 words), German (48979 entries/46035 words), Hausa (42662 entries/42079 words), Japanese (18094 entries), Polish (36484 entries), Portuguese (Brazilian) (54146 entries/54130 words) and Swedish (about 25000 entries). Eight other languages will also be released: Chinese-Mandarin, Croatian, Korean, Russian, Spanish (Latin American), Thai, Turkish, and Vietnamese.
   
    Special prices are offered for a combined purchase of several GlobalPhone languages.
   
    Available GlobalPhone Pronunciation Dictionaries are listed below (click on the links for further details):
    ELRA-S0340 GlobalPhone French Pronunciation Dictionary
    For more information, see: http://catalog.elra.info/product_info.php?products_id=1197
    ELRA-S0341 GlobalPhone German Pronunciation Dictionary
    For more information, see: http://catalog.elra.info/product_info.php?products_id=1198
    ELRA-S0348 GlobalPhone Japanese Pronunciation Dictionary
    For more information, see: http://catalog.elra.info/product_info.php?products_id=1199
    ELRA-S0350 GlobalPhone Arabic Pronunciation Dictionary
    For more information, see: http://catalog.elra.info/product_info.php?products_id=1200
    ELRA-S0351 GlobalPhone Bulgarian Pronunciation Dictionary
    For more information, see: http://catalog.elra.info/product_info.php?products_id=1201
    ELRA-S0352 GlobalPhone Czech Pronunciation Dictionary
    For more information, see: http://catalog.elra.info/product_info.php?products_id=1202
    ELRA-S0353 GlobalPhone Hausa Pronunciation Dictionary
    For more information, see: http://catalog.elra.info/product_info.php?products_id=1203
    ELRA-S0354 GlobalPhone Polish Pronunciation Dictionary
    For more information, see: http://catalog.elra.info/product_info.php?products_id=1204
    ELRA-S0355 GlobalPhone Portuguese (Brazilian) Pronunciation Dictionary
    For more information, see: http://catalog.elra.info/product_info.php?products_id=1205
    ELRA-S0356 GlobalPhone Swedish Pronunciation Dictionary
    For more information, see: http://catalog.elra.info/product_info.php?products_id=1206
    ______________________
    ELRA-E0041 CHIL 2007+ Evaluation Package
    The CHIL Seminars are scientific presentations given by students, faculty members or invited speakers in the field of multimodal interfaces and speech processing. The language is European English spoken by non-native speakers. The recordings comprise the following: videos of the speaker and the audience from 4 fixed cameras, frontal close-ups of the speaker, and close-talking and far-field microphone data of the speaker's voice and background sounds.
    The CHIL 2007+ Evaluation Package includes: 1) the CHIL 2007 Evaluation Package (see ELRA-E0033) and 2) additional annotations which have been created within the scope of the Metanet4u Project (ICT PSP No 270893), sponsored by the European Commission.
    For more information, see: http://catalog.elra.info/product_info.php?products_id=1196
   
   
    For more information on the catalogue, please contact Valérie Mapelli: mailto:mapelli@elda.org
   
    Visit our On-line Catalogue: http://catalog.elra.info
    Visit the Universal Catalogue: http://universal.elra.info
    Archives of ELRA Language Resources Catalogue Updates: http://www.elra.info/LRs-Announcements.html
   


5-2-2 ELRA releases free Language Resources

ELRA releases free Language Resources
***************************************************

Anticipating users' expectations, ELRA has decided to offer a large number of resources free of charge for academic research use. The offer consists of several sets of speech, text and multimodal resources that are released, free of charge, as soon as the legal aspects are cleared. A first set was released in May 2012 on the occasion of LREC 2012. A second set is now being released.

Whenever this is permitted by our licences, please feel free to use these resources for deriving new resources and depositing them with the ELRA catalogue for community re-use.

Over the last decade, ELRA has compiled a large list of resources into its Catalogue of LRs. ELRA has negotiated distribution rights with the LR owners and made such resources available under fair conditions and within a clear legal framework. Following this initiative, ELRA has also worked on LR discovery and identification with a dedicated team which investigated and listed existing and valuable resources in its 'Universal Catalogue', a list of resources that could be negotiated on a case-by-case basis. At LREC 2010, ELRA introduced the LRE Map, an inventory of LRs, whether developed or used, that were described in LREC papers. This huge inventory listed by the authors themselves constitutes the first 'community-built' catalogue of existing or emerging resources, constantly enriched and updated at major conferences.

Considering the latest trends on easing the sharing of LRs, from both legal and commercial points of view, ELRA is taking a major role in META-SHARE, a large European open infrastructure for sharing LRs. This infrastructure will allow LR owners, providers and distributors to distribute their LRs through an additional and cost-effective channel.

To obtain the available sets of LRs, please visit the web page below and follow the instructions given online:
http://www.elra.info/Free-LRs,26.html


5-2-3 LDC Newsletter (August 2013)

 

In this newsletter:

- Mixer 6 now available!
- Fall 2013 LDC Data Scholarship Program - deadline approaching!
- LDC at Interspeech 2013, Lyon France

New publications:

- GALE Phase 2 Chinese Broadcast Conversation Parallel Text Part 2
- MADCAT Phase 3 Training Set
- Mixer 6 Speech

Mixer 6 now available!

The release of Mixer 6 Speech this month marks the first time in close to a decade that LDC has made available a large-scale speech training data collection. Representing more than 15,000 hours of speech from over 500 speakers, Mixer 6 follows in the footsteps of the Switchboard and Fisher studies by providing a large database of rich telephone conversations with the addition of subject interviews and transcript readings. Participants were native American English speakers local to the Philadelphia area, providing further scope for a variety of research tasks. Mixer 6 Speech is a members-only release and a great reason to join the consortium. In addition to this substantial resource, members enjoy rights to other data released in 2013 and can license older publications at reduced fees. Please see the full description of Mixer 6 Speech.

   

 

   

Fall 2013 LDC Data Scholarship Program - deadline approaching!

The deadline for the Fall 2013 LDC Data Scholarship Program is one month away! Student applications are being accepted now through September 16, 2013, 11:59PM EST. The LDC Data Scholarship program provides university students with access to LDC data at no cost. This program is open to students pursuing both undergraduate and graduate studies in an accredited college or university. LDC Data Scholarships are not restricted to any particular field of study; however, students must demonstrate a well-developed research agenda and a bona fide inability to pay.

Students will need to complete an application which consists of a data use proposal and a letter of support from their adviser. For further information on application materials and program rules, please visit the LDC Data Scholarship page.

Students can email their applications to the LDC Data Scholarship program. Decisions will be sent by email from the same address.

   

LDC at Interspeech 2013, Lyon France

LDC will once again be exhibiting at Interspeech, held this year August 25-29 in Lyon. Please stop by LDC's booth to learn about recent developments at the Consortium, including new publications.

Also, be on the lookout for the following presentations:

- Speech Activity Detection on YouTube Using Deep Neural Networks
  Neville Ryant, Mark Liberman, Jiahong Yuan (all LDC)
  Monday 26 August, Poster 6, 16.00-18.00, Room: Forum 6

- The Spectral Dynamics of Vowels in Mandarin Chinese
  Jiahong Yuan
  Tuesday 27 August, Oral 17, 14.00-16.00, Room: Gratte-Ciel 3

- Automatic Phonetic Segmentation using Boundary Models
  Jiahong Yuan (LDC), Neville Ryant (LDC), Mark Liberman (LDC), Andreas Stolcke, Vikramjit Mitra, Wen Wang
  Wednesday 28 August, Oral 32, 14.00-16.00, Room: Gratte-Ciel 3

LDC will continue to post conference updates via our Facebook page. We hope to see you there!

   

New publications:

(1) GALE Phase 2 Chinese Broadcast Conversation Parallel Text Part 2 was developed by LDC. Along with other corpora, the parallel text in this release comprised training data for Phase 2 of the DARPA GALE (Global Autonomous Language Exploitation) Program. This corpus contains Chinese source text and corresponding English translations selected from broadcast conversation (BC) data collected by LDC in 2005-2007 and transcribed by LDC or under its direction.

   

This release includes 20 source-translation document pairs, comprising 152,894 characters of Chinese source text and its English translation. Data is drawn from six distinct Chinese programs broadcast in 2005-2007 from Phoenix TV, a Hong Kong-based satellite television station. Broadcast conversation programming is generally more interactive than traditional news broadcasts and includes talk shows, interviews, call-in programs and roundtable discussions. The programs in this release focus on current events topics.

   

The data was transcribed by LDC staff and/or transcription vendors under contract to LDC in accordance with Quick Rich Transcription guidelines developed by LDC. Transcribers indicated sentence boundaries in addition to transcribing the text. Data was manually selected for translation according to several criteria, including linguistic features, transcription features and topic features. The transcribed and segmented files were then reformatted into a human-readable translation format and assigned to translation vendors. Translators followed LDC's Chinese to English translation guidelines. Bilingual LDC staff performed quality control procedures on the completed translations.

   

GALE Phase 2 Chinese Broadcast Conversation Parallel Text Part 2 is distributed via web download.

2013 Subscription Members will automatically receive two copies of this data on disc. 2013 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$1750.

   

*

   

(2) MADCAT (Multilingual Automatic Document Classification Analysis and Translation) Phase 3 Training Set contains all training data created by LDC to support Phase 3 of the DARPA MADCAT Program. The data in this release consists of handwritten Arabic documents, scanned at high resolution and annotated for the physical coordinates of each line and token. Digital transcripts and English translations of each document are also provided, with the various content and annotation layers integrated in a single MADCAT XML output.

   

The goal of the MADCAT program is to automatically convert foreign text images into English transcripts. MADCAT Phase 3 data was collected from Arabic source documents in three genres: newswire, weblog and newsgroup text. Arabic-speaking scribes copied documents by hand, following specific instructions on writing style (fast, normal, careful), writing implement (pen, pencil) and paper (lined, unlined). Prior to assignment, source documents were processed to optimize their appearance for the handwriting task, which resulted in some original source documents being broken into multiple pages for handwriting. Each resulting handwritten page was assigned to up to five independent scribes, using different writing conditions.

   

The handwritten, transcribed documents were next checked for quality and completeness, then each page was scanned at high resolution (600 dpi, greyscale) to create a digital version of the handwritten document. The scanned images were then annotated to indicate the physical coordinates of each line and token. Explicit reading order was also labeled, along with any errors produced by the scribes when copying the text.

   

The final step was to produce a unified data format that takes multiple data streams and generates a single MADCAT XML output file which contains all required information. The resulting madcat.xml file contains distinct components: a text layer that consists of the source text, tokenization and sentence segmentation; an image layer that consists of bounding boxes; a scribe demographic layer that consists of scribe ID and partition (train/test); and a document metadata layer.
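A layered annotation file of this kind is typically consumed by joining the layers through shared token identifiers. As a rough illustration only (the element and attribute names below are invented for this sketch and will not match the real MADCAT XML schema), such a file can be walked with Python's ElementTree:

```python
import xml.etree.ElementTree as ET

# Invented stand-in for a layered annotation file; the real MADCAT XML
# uses its own element names and attributes.
doc = """
<annotation>
  <text-layer>
    <token id="t1">kitab</token>
    <token id="t2">jadid</token>
  </text-layer>
  <image-layer>
    <box token="t1" x="10" y="20" w="80" h="30"/>
    <box token="t2" x="95" y="20" w="70" h="30"/>
  </image-layer>
  <scribe-layer id="s042" partition="train"/>
</annotation>
"""

root = ET.fromstring(doc)
# Text layer: token id -> transcribed word.
tokens = {t.get("id"): t.text for t in root.find("text-layer")}
# Image layer: token id -> bounding box (x, y, w, h) on the scanned page.
boxes = {b.get("token"): tuple(int(b.get(k)) for k in ("x", "y", "w", "h"))
         for b in root.find("image-layer")}
# Join the two layers through the shared token ids.
for tid, word in tokens.items():
    print(tid, word, boxes[tid])
```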

   

This release includes 4,540 annotation files in both GEDI XML and MADCAT XML formats (gedi.xml and madcat.xml) along with their corresponding scanned image files in TIFF format.

MADCAT Phase 3 Training Set is distributed on one DVD-ROM.

2013 Subscription Members will automatically receive two copies of this data. 2013 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$1000.
   

   

*

   

(3) Mixer 6 Speech was developed by LDC and comprises 15,863 hours of telephone speech, interviews and transcript readings from 594 distinct native English speakers. This material was collected by LDC in 2009 and 2010 as part of the Mixer project, specifically phase 6, which focused on native American English speakers local to the Philadelphia area.

The speech data in this release was collected by LDC at its Human Subjects Collection facilities in Philadelphia. The telephone collection protocol was similar to other LDC telephone studies (e.g., Switchboard-2 Phase III Audio - LDC2002S06): recruited speakers were connected through a robot operator to carry on casual conversations lasting up to 10 minutes, usually about a daily topic announced by the robot operator at the start of the call. The raw digital audio content for each call side was captured as a separate channel, and each full conversation was presented as a 2-channel interleaved audio file, with 8000 samples/second and u-law sample encoding. Each speaker was asked to complete 15 calls.
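The u-law (µ-law) sample encoding mentioned above is the logarithmic companding used in North American digital telephony (ITU-T G.711, µ = 255): amplitudes are compressed before 8-bit quantization so that quiet speech keeps more resolution. A continuous-domain sketch of the companding curve (G.711 itself additionally applies a segmented 8-bit quantization, which is omitted here):

```python
import math

MU = 255.0  # companding constant used by G.711 mu-law

def mulaw_encode(x):
    """Map a linear sample in [-1, 1] to the mu-law companded domain."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mulaw_decode(y):
    """Inverse of mulaw_encode: expand a companded value back to [-1, 1]."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)
```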

   

The multi-microphone portion of the collection utilized 14 distinct microphones installed identically in two multi-channel audio recording rooms at LDC. Each session was guided by collection staff using prompting and recording software to conduct the following activities: (1) repeat questions (less than one minute), (2) informal conversation (typically 15 minutes), (3) transcript reading (approximately 15 minutes) and (4) telephone call (generally 10 minutes). Speakers recorded up to three 45-minute sessions on distinct days. The 14 channels were recorded synchronously into separate single-channel files, using 16-bit PCM sample encoding at 16000 samples/second.

   

The recordings in this corpus were used in NIST Speaker Recognition Evaluation (SRE) test sets for 2010 and 2012. Researchers interested in applying those benchmark test sets should consult the respective NIST Evaluation Plans for guidelines on allowable training data for those tests.

   

The collection contains 4,410 recordings made via the public telephone network and 1,425 sessions of multiple microphone recordings in office-room settings. The telephone recordings are presented as 8-kHz 2-channel NIST SPHERE files, and the microphone recordings are 16-kHz 1-channel flac/ms-wav files.

   

Mixer 6 Speech is distributed on one hard drive.

2013 Subscription Members will automatically receive one copy of this data on hard drive. 2013 Standard Members may request a copy as part of their 16 free membership corpora. As a Members-Only release, Mixer 6 Speech is not available for non-member licensing.


             

   


 

 


5-2-4 Appen Butler Hill

Appen Butler Hill

A global leader in linguistic technology solutions

RECENT CATALOG ADDITIONS—MARCH 2012

1. Speech Databases

1.1 Telephony

Language                 Database Type    Catalogue Code   Speakers   Status
Bahasa Indonesia         Conversational   BAH_ASR001       1,002      Available
Bengali                  Conversational   BEN_ASR001       1,000      Available
Bulgarian                Conversational   BUL_ASR001       217        Available shortly
Croatian                 Conversational   CRO_ASR001       200        Available shortly
Dari                     Conversational   DAR_ASR001       500        Available
Dutch                    Conversational   NLD_ASR001       200        Available
Eastern Algerian Arabic  Conversational   EAR_ASR001       496        Available
English (UK)             Conversational   UKE_ASR001       1,150      Available
Farsi/Persian            Scripted         FAR_ASR001       789        Available
Farsi/Persian            Conversational   FAR_ASR002       1,000      Available
French (EU)              Conversational   FRF_ASR001       563        Available
French (EU)              Voicemail        FRF_ASR002       550        Available
German                   Voicemail        DEU_ASR002       890        Available
Hebrew                   Conversational   HEB_ASR001       200        Available shortly
Italian                  Conversational   ITA_ASR003       200        Available shortly
Italian                  Voicemail        ITA_ASR004       550        Available
Kannada                  Conversational   KAN_ASR001       1,000      In development
Pashto                   Conversational   PAS_ASR001       967        Available
Portuguese (EU)          Conversational   PTP_ASR001       200        Available shortly
Romanian                 Conversational   ROM_ASR001       200        Available shortly
Russian                  Conversational   RUS_ASR001       200        Available
Somali                   Conversational   SOM_ASR001       1,000      Available
Spanish (EU)             Voicemail        ESO_ASR002       500        Available
Turkish                  Conversational   TUR_ASR001       200        Available
Urdu                     Conversational   URD_ASR001       1,000      Available

1.2 Wideband

Language           Database Type   Catalogue Code   Speakers   Status
English (US)       Studio          USE_ASR001       200        Available
French (Canadian)  Home/Office     FRC_ASR002       120        Available
German             Studio          DEU_ASR001       127        Available
Thai               Home/Office     THA_ASR001       100        Available
Korean             Home/Office     KOR_ASR001       100        Available

2. Pronunciation Lexica

Appen Butler Hill has considerable experience in providing a variety of lexicon types. These include:

Pronunciation Lexica providing phonemic representation, syllabification, and stress (primary and secondary as appropriate)

Part-of-speech tagged Lexica providing grammatical and semantic labels

Other reference text-based materials including spelling/mis-spelling lists, spell-check dictionaries, mappings of colloquial language to standard forms, and orthographic normalization lists.

Over a period of 15 years, Appen Butler Hill has generated a significant volume of licensable material for a wide range of languages. For holdings information in a given language or to discuss any customized development efforts, please contact: sales@appenbutlerhill.com

3. Named Entity Corpora

Language        Catalogue Code   Words
Arabic          ARB_NER001       500,000
English         ENI_NER001       500,000
Farsi/Persian   FAR_NER001       500,000
Korean          KOR_NER001       500,000
Japanese        JPY_NER001       500,000
Russian         RUS_NER001       500,000
Mandarin        MAN_NER001       500,000
Urdu            URD_NER001       500,000

These NER corpora contain text material from a variety of sources and are tagged for the following named entities: Person, Organization, Location, Nationality, Religion, Facility, Geo-Political Entity, Titles, Quantities.


4. Other Language Resources

Morphological Analyzers – Farsi/Persian & Urdu

Arabic Thesaurus

Language Analysis Documentation – multiple languages

 

For additional information on these resources, please contact: sales@appenbutlerhill.com

5. Customized Requests and Package Configurations

Appen Butler Hill is committed to providing a low-risk, high-quality, reliable solution and has worked in 130+ languages to date, supporting both large global corporations and Government organizations.

We would be glad to discuss any customized requests or package configurations and prepare a customized proposal to meet your needs.

6. Contact Information

Prithivi Pradeep

Business Development Manager

ppradeep@appenbutlerhill.com

+61 2 9468 6370

Tom Dibert

Vice President, Business Development, North America

tdibert@appenbutlerhill.com

+1-315-339-6165

www.appenbutlerhill.com


5-2-5 OFROM: first corpus of French from French-speaking Switzerland

We would like to announce the online release of OFROM, the first corpus of French spoken in French-speaking Switzerland (Suisse romande). In its current version, the archive contains about 15 hours of speech, transcribed in standard orthography with the Praat software. A concordancer makes it possible to search the corpus and to download the sound excerpts associated with the transcriptions.

To access the data and read a fuller description of the corpus, please visit: http://www.unine.ch/ofrom

5-2-6Real-world 16-channel noise recordings

We are happy to announce the release of DEMAND, a set of real-world
16-channel noise recordings designed for the evaluation of microphone
array processing techniques.

http://www.irisa.fr/metiss/DEMAND/

1.5 h of noise data were recorded in 18 different indoor and outdoor
environments and are available under the terms of the Creative Commons Attribution-ShareAlike License.

Joachim Thiemann (CNRS - IRISA)
Nobutaka Ito (University of Tokyo)
Emmanuel Vincent (Inria Nancy - Grand Est)


5-3 Software
5-3-1 ROCme!: a free tool for audio corpora recording and management

ROCme!: new free software for recording and managing audio corpora.

ROCme! enables streamlined, autonomous and fully digital management of the recording of read-speech corpora.

Key features:
- free
- compatible with Windows and Mac
- configurable interface for collecting speaker metadata
- speakers scroll through the sentences on screen and record themselves autonomously
- configurable audio format

Download at:
www.ddl.ish-lyon.cnrs.fr/rocme

 

5-3-2 VocalTractLab 2.0: A tool for articulatory speech synthesis

VocalTractLab 2.0: A tool for articulatory speech synthesis

It is my pleasure to announce the release of the new major version 2.0 of VocalTractLab. VocalTractLab is an articulatory speech synthesizer and a tool to visualize and explore the mechanism of speech production with regard to articulation, acoustics, and control. It is available from http://www.vocaltractlab.de/index.php?page=vocaltractlab-download .
Compared to version 1.0, the new version brings many improvements in terms of the implemented models of the vocal tract, the vocal folds, the acoustic simulation, and articulatory control, as well as in terms of the user interface. Most importantly, the new version comes together with a manual.

If you like, give it a try. Reports on bugs and any other feedback are welcome.

Peter Birkholz

Back  Top

5-3-3Voice analysis toolkit
Having just completed my PhD, I have made the algorithms I developed during it available online: https://github.com/jckane/Voice_Analysis_Toolkit
The Voice Analysis Toolkit contains algorithms for glottal source and voice quality analysis. In making the code available online, I hope that people in the speech processing community can benefit from it.
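To give a flavour of the kind of measure such a toolkit provides, the sketch below computes H1-H2, the level difference between the first two harmonics and a classic acoustic correlate of voice quality. The toy signal and all parameters are illustrative assumptions and are unrelated to the toolkit's own (Matlab) implementation.

```python
import numpy as np

fs, f0, n = 16000, 200, 16000  # 1 s of signal, f0 on an exact FFT bin
t = np.arange(n) / fs
# Toy 'voiced' signal with known harmonic amplitudes (1.0 and 0.4).
x = 1.0 * np.sin(2 * np.pi * f0 * t) + 0.4 * np.sin(2 * np.pi * 2 * f0 * t)

spec = np.abs(np.fft.rfft(x * np.hanning(n)))
freqs = np.fft.rfftfreq(n, 1 / fs)

def harmonic_amp(k):
    # Spectral magnitude at the bin closest to the k-th harmonic.
    return spec[np.argmin(np.abs(freqs - k * f0))]

# H1-H2 in dB; for these amplitudes it equals 20*log10(1.0/0.4).
h1_h2 = 20 * np.log10(harmonic_amp(1) / harmonic_amp(2))
```

Real implementations estimate f0 first and often correct the harmonic amplitudes for the formant structure; this sketch skips both steps.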
 
John

--
Researcher
Phonetics and Speech Laboratory (Room 4074), Arts Block,
Centre for Language and Communication Studies,
School of Linguistics, Speech and Communication Sciences, Trinity College Dublin, College Green, Dublin 2
Phone: (+353) 1 896 1348
Website: http://www.tcd.ie/slscs/postgraduate/phd-masters-research/student-pages/johnkane.php
Back  Top

5-3-4Bob signal-processing and machine learning toolbox (v.1.2.0)


The release 1.2.0 of the Bob signal-processing and machine learning toolbox is available.

Bob provides efficient implementations of several machine learning algorithms as well as a framework to help researchers publish reproducible research. It is developed by the Biometrics Group at Idiap in Switzerland.

The previous release of Bob provided:
* image, video and audio I/O interfaces (e.g. jpg, avi, wav),
* database accessors (e.g. FRGC, Labeled Faces in the Wild, and many others),
* image processing: Local Binary Patterns (LBPs), Gabor jets, SIFT,
* machines and trainers such as Support Vector Machines (SVMs), k-means, Gaussian Mixture Models (GMMs), Inter-Session Variability modelling (ISV), Joint Factor Analysis (JFA), Probabilistic Linear Discriminant Analysis (PLDA), and the Bayesian intra/extra-personal classifier.

   
The new release brings the following features and improvements:
* unified implementation of Local Binary Patterns (LBPs),
* implementation of Histograms of Oriented Gradients (HOG),
* total variability (i-vector) implementation,
* conjugate-gradient-based implementation of logistic regression,
* improved multi-layer perceptron implementation (back-propagation can now easily be combined with any optimizer, e.g. L-BFGS),
* pseudo-inverse-based method for Linear Discriminant Analysis,
* covariance-based method for Principal Component Analysis,
* whitening and within-class covariance normalization techniques,
* a module for object detection and keypoint localization (bob.visioner),
* a module for audio processing, including feature extraction such as LFCC and MFCC,
* improved extensions (satellite packages) that now support both Python and C++ code within an easy-to-use framework,
* improved documentation and new tutorials,
* support for Intel's MKL (in addition to ATLAS),
* extended platform support (Arch Linux).
   
This release represents a major milestone for Bob, with plenty of functionality improvements (more than 640 commits in total) and plenty of bug fixes.
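Whitening, one of the techniques added in this release, is a standard linear transform that maps correlated features to unit covariance. The numpy sketch below is a generic illustration of the technique only, not Bob's actual API; the toy data and dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy data: 200 samples of correlated 3-D features.
A = rng.standard_normal((3, 3))
X = rng.standard_normal((200, 3)) @ A.T

# ZCA whitening: rotate into the eigenbasis of the covariance,
# rescale each axis to unit variance, rotate back.
mu = X.mean(axis=0)
evals, evecs = np.linalg.eigh(np.cov(X, rowvar=False))
W = evecs @ np.diag(evals ** -0.5) @ evecs.T
Xw = (X - mu) @ W
# The empirical covariance of Xw is (numerically) the identity matrix.
```

Within-class covariance normalization works the same way, except that the covariance is estimated per class and pooled before computing the transform.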

* Sources and documentation
* Binary packages:
  * Ubuntu: 10.04, 12.04, 12.10 and 13.04
  * Mac OS X: 10.6 (Snow Leopard), 10.7 (Lion) and 10.8 (Mountain Lion)

For instructions on how to install the pre-packaged version on Ubuntu or OS X, consult our quick installation instructions (N.B. the MacPorts package for OS X has not yet been upgraded; this will be done very soon, cf. https://trac.macports.org/ticket/39831).
   
   
Best regards,
Elie Khoury (on behalf of the Biometrics Group at Idiap, led by Sebastien Marcel)

--
Dr. Elie Khoury
Postdoctoral Researcher, Biometric Person Recognition Group
Idiap Research Institute (Switzerland)
Tel: +41 27 721 77 23
Back  Top

5-3-5An open-source repository of advanced speech processing algorithms called COVAREP
CALL for contributions
======================
 
We are pleased to announce the creation of an open-source repository of advanced speech processing algorithms called COVAREP (A Cooperative Voice Analysis Repository for Speech Technologies). COVAREP has been created as a GitHub project (https://github.com/covarep/covarep) where researchers in speech processing can store original implementations of published algorithms.
 
Over the past few decades, a vast array of advanced speech processing algorithms has been developed, often offering significant improvements over the existing state of the art. Such algorithms can be quite complex and, hence, difficult to re-implement accurately from article descriptions alone. Another issue is the so-called 'bug magnet effect', with re-implementations frequently differing significantly from the original. The consequence has been that many promising developments are under-exploited or discarded, with researchers tending to stick to conventional analysis methods.
 
By developing the COVAREP repository we are hoping to address this by encouraging authors to include original implementations of their algorithms, thus resulting in a single de facto version for the speech community to refer to.
 
We envisage a range of benefits to the repository:
1) Reproducible research: COVAREP will allow fairer comparison of algorithms in published articles.
2) Encouraged usage: the free availability of these algorithms will encourage researchers from a wide range of speech-related disciplines (both in academia and industry) to exploit them for their own applications.
3) Feedback: as a GitHub project users will be able to offer comments on algorithms, report bugs, suggest improvements etc.
 
SCOPE
We welcome contributions from a wide range of speech processing areas, including (but not limited to): Speech analysis, synthesis, conversion, transformation, enhancement, speech quality, glottal source/voice quality analysis, etc.
 
REQUIREMENTS
In order to achieve a reasonable standard of consistency and homogeneity across algorithms, we have compiled a list of requirements for prospective contributors to the repository. The list is not intended to be so strict as to discourage contributions.
  • Only published work can be added to the repository
  • The code must be available as open source
  • Algorithms should be coded in Matlab; however, we strongly encourage authors to make the code compatible with Octave in order to maximize usability
  • Contributions have to comply with the coding convention (see the GitHub site for the convention and a template). The convention normalizes only the inputs/outputs and the documentation; there is no restriction on the content of the functions (though comments are obviously encouraged).
 
LICENCE
Getting contributing institutions to agree to a homogeneous IP policy would be close to impossible. As a result, COVAREP is a repository rather than a toolbox, and each algorithm has its own licence associated with it. Though flexible regarding licence types, contributions need a licence compatible with the repository, i.e. {GPL, LGPL, X11, Apache, MIT} or similar. We encourage contributors to try to obtain LGPL licences from their institutions in order to be more industry-friendly.
 
CONTRIBUTE!
We believe that the COVAREP repository has great potential benefit to the speech research community, and we hope that you will consider contributing your published algorithms to it. If you have any questions, comments, issues, etc. regarding COVAREP, please contact us at one of the email addresses below. Please forward this email to others who may be interested.
 
Existing contributions include: algorithms for spectral envelope modelling, adaptive sinusoidal modelling, fundamental frequency / voicing decision / glottal closure instant detection, and methods for detecting non-modal phonation types.
 
Gilles Degottex <degottex@csd.uoc.gr>, John Kane <kanejo@tcd.ie>, Thomas Drugman <thomas.drugman@umons.ac.be>, Tuomo Raitio <tuomo.raitio@aalto.fi>, Stefan Scherer <scherer@ict.usc.edu>
 
Back  Top




© Copyright 2024 - ISCA International Speech Communication Association - All rights reserved.
