ISCA - International Speech Communication Association



ISCApad #180

Monday, June 10, 2013 by Chris Wellekens

5 Resources
5-1 Books
5-1-1 G. Bailly, P. Perrier & E. Vatikiotis-Bateson (eds): Audiovisual Speech Processing

'Audiovisual Speech Processing', edited by G. Bailly, P. Perrier & E. Vatikiotis-Bateson, is published by Cambridge University Press.

'When we speak, we configure the vocal tract which shapes the visible motions of the face
and the patterning of the audible speech acoustics. Similarly, we use these visible and
audible behaviors to perceive speech. This book showcases a broad range of research
investigating how these two types of signals are used in spoken communication, how they
interact, and how they can be used to enhance the realistic synthesis and recognition of
audible and visible speech. The volume begins by addressing two important questions about
human audiovisual performance: how auditory and visual signals combine to access the
mental lexicon and where in the brain this and related processes take place. It then
turns to the production and perception of multimodal speech and how structures are
coordinated within and across the two modalities. Finally, the book presents overviews
and recent developments in machine-based speech recognition and synthesis of AV speech.'



5-1-2 Fuchs, Susanne / Weirich, Melanie / Pape, Daniel / Perrier, Pascal (eds.): Speech Planning and Dynamics, Peter Lang

Fuchs, Susanne / Weirich, Melanie / Pape, Daniel / Perrier, Pascal (eds.)

Speech Planning and Dynamics

Frankfurt am Main, Berlin, Bern, Bruxelles, New York, Oxford, Wien, 2012. 277 pp., 50 fig., 8 tables

Speech Production and Perception. Vol. 1

Edited by Susanne Fuchs and Pascal Perrier

Print:

ISBN 978-3-631-61479-2 hb.

SFR 60.00 / €* 52.95 / €** 54.50 / € 49.50 / £ 39.60 / US$ 64.95

eBook:

ISBN 978-3-653-01438-9

SFR 63.20 / €* 58.91 / €** 59.40 / € 49.50 / £ 39.60 / US$ 64.95

Order online: www.peterlang.com


5-1-3 Video archive of Odyssey Speaker and Language Recognition Workshop, Singapore 2012
Odyssey 2012, the Speaker and Language Recognition Workshop of the ISCA Special Interest Group on Speaker and Language Characterization, was held in Singapore on 25-28 June 2012. The organisers are glad to announce that its video recordings have been included in the ISCA Video Archive: http://www.isca-speech.org/iscaweb/index.php/archive/video-archive

5-1-4 Tuomas Virtanen, Rita Singh, Bhiksha Raj (editors), Techniques for Noise Robustness in Automatic Speech Recognition, Wiley

Techniques for Noise Robustness in Automatic Speech Recognition
Tuomas Virtanen, Rita Singh, Bhiksha Raj (editors)
ISBN: 978-1-119-97088-0
Publisher: Wiley

Automatic speech recognition (ASR) systems are finding increasing use in everyday life. Many of the commonplace environments where the systems are used are noisy, for example users calling up a voice search system from a busy cafeteria or a street. This can result in degraded speech recordings and adversely affect the performance of speech recognition systems. As the use of ASR systems increases, knowledge of the state-of-the-art in techniques to deal with such problems becomes critical to system and application engineers and researchers who work with or on ASR technologies. This book presents a comprehensive survey of the state-of-the-art in techniques used to improve the robustness of speech recognition systems to these degrading external influences.

Key features:

*Reviews all the main noise-robust ASR approaches, including signal separation, voice activity detection, robust feature extraction, model compensation and adaptation, missing-data techniques and recognition of reverberant speech.
*Acts as a timely exposition of the topic in light of the increasingly widespread future use of ASR technology in challenging environments.
*Addresses robustness issues and signal degradation, both key concerns for practitioners of ASR.
*Includes contributions from top ASR researchers at leading research units in the field.


5-1-5 Niebuhr, Oliver (ed.), Understanding Prosody: The Role of Context, Function and Communication

Understanding Prosody: The Role of Context, Function and Communication

Ed. by Niebuhr, Oliver

Series: Language, Context and Cognition 13, De Gruyter

http://www.degruyter.com/view/product/186201?format=G or http://linguistlist.org/pubs/books/get-book.cfm?BookID=63238

The volume represents a state-of-the-art snapshot of the research on prosody for phoneticians, linguists and speech technologists. It covers well-known models and languages. How are prosodies linked to speech sounds? What are the relations between prosody and grammar? What does speech perception tell us about prosody, particularly about the constituting elements of intonation and rhythm? The papers of the volume address questions like these with a special focus on how the notion of context-based coding, the knowledge of prosodic functions and the communicative embedding of prosodic elements can advance our understanding of prosody.

 


5-2 Database
5-2-1 ELRA - Language Resources Catalogue - Update (2013-05)

*****************************************************************
ELRA - Language Resources Catalogue - Update May 2013
*****************************************************************

We are happy to announce that 10 new Pronunciation Dictionaries and 1 new Evaluation Package are now available in our catalogue.
   
The GlobalPhone Pronunciation Dictionaries: GlobalPhone is a multilingual speech and text database collected at Karlsruhe University, Germany. The GlobalPhone pronunciation dictionaries contain the pronunciations of all word forms found in the transcription data of the GlobalPhone speech & text database. The pronunciation dictionaries are currently available in 10 languages: Arabic (29230 entries/27059 words), Bulgarian (20193 entries), Czech (33049 entries/32942 words), French (36837 entries/20710 words), German (48979 entries/46035 words), Hausa (42662 entries/42079 words), Japanese (18094 entries), Polish (36484 entries), Portuguese (Brazilian) (54146 entries/54130 words) and Swedish (about 25000 entries). Eight further languages will also be released: Chinese-Mandarin, Croatian, Korean, Russian, Spanish (Latin American), Thai, Turkish, and Vietnamese.
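
Where both figures are given above, the ratio of entries to word forms gives a rough idea of how many pronunciation variants each word form carries. Below is a minimal Python sketch using only the numbers quoted in this announcement; the dictionary file format itself is not described here, so nothing is parsed.

    # Entry and word-form counts quoted in the announcement above.
    # Languages listed without a separate word count are omitted.
    globalphone_counts = {
        "Arabic": (29230, 27059),
        "Czech": (33049, 32942),
        "French": (36837, 20710),
        "German": (48979, 46035),
        "Hausa": (42662, 42079),
        "Portuguese (Brazilian)": (54146, 54130),
    }

    for language, (entries, words) in sorted(globalphone_counts.items()):
        # Average number of pronunciation variants per word form.
        print(f"{language:24s} {entries / words:.2f} pronunciations/word")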
   
Special prices are offered for a combined purchase of several GlobalPhone languages.
   
Available GlobalPhone Pronunciation Dictionaries are listed below (click on the links for further details):
ELRA-S0340 GlobalPhone French Pronunciation Dictionary
For more information, see: http://catalog.elra.info/product_info.php?products_id=1197
ELRA-S0341 GlobalPhone German Pronunciation Dictionary
For more information, see: http://catalog.elra.info/product_info.php?products_id=1198
ELRA-S0348 GlobalPhone Japanese Pronunciation Dictionary
For more information, see: http://catalog.elra.info/product_info.php?products_id=1199
ELRA-S0350 GlobalPhone Arabic Pronunciation Dictionary
For more information, see: http://catalog.elra.info/product_info.php?products_id=1200
ELRA-S0351 GlobalPhone Bulgarian Pronunciation Dictionary
For more information, see: http://catalog.elra.info/product_info.php?products_id=1201
ELRA-S0352 GlobalPhone Czech Pronunciation Dictionary
For more information, see: http://catalog.elra.info/product_info.php?products_id=1202
ELRA-S0353 GlobalPhone Hausa Pronunciation Dictionary
For more information, see: http://catalog.elra.info/product_info.php?products_id=1203
ELRA-S0354 GlobalPhone Polish Pronunciation Dictionary
For more information, see: http://catalog.elra.info/product_info.php?products_id=1204
ELRA-S0355 GlobalPhone Portuguese (Brazilian) Pronunciation Dictionary
For more information, see: http://catalog.elra.info/product_info.php?products_id=1205
ELRA-S0356 GlobalPhone Swedish Pronunciation Dictionary
For more information, see: http://catalog.elra.info/product_info.php?products_id=1206
______________________
ELRA-E0041 CHIL 2007+ Evaluation Package
The CHIL Seminars are scientific presentations given by students, faculty members or invited speakers in the field of multimodal interfaces and speech processing. The language is European English spoken by non-native speakers. The recordings comprise the following: videos of the speaker and the audience from 4 fixed cameras, frontal close-ups of the speaker, and close-talking and far-field microphone data of the speaker’s voice and background sounds.
The CHIL 2007+ Evaluation Package includes: 1) the CHIL 2007 Evaluation Package (see ELRA-E0033) and 2) additional annotations created within the scope of the Metanet4u Project (ICT PSP No 270893), sponsored by the European Commission.
For more information, see: http://catalog.elra.info/product_info.php?products_id=1196
   
   
For more information on the catalogue, please contact Valérie Mapelli: mapelli@elda.org

Visit our On-line Catalogue: http://catalog.elra.info
Visit the Universal Catalogue: http://universal.elra.info
Archives of ELRA Language Resources Catalogue Updates: http://www.elra.info/LRs-Announcements.html
   


5-2-2 ELRA releases free Language Resources

ELRA releases free Language Resources
***************************************************

Anticipating users’ expectations, ELRA has decided to offer a large number of resources for free for academic research use. The offer consists of several sets of speech, text and multimodal resources that are regularly released free of charge as soon as legal aspects are cleared. A first set was released in May 2012 on the occasion of LREC 2012. A second set is now being released.

Whenever this is permitted by our licences, please feel free to use these resources for deriving new resources and depositing them with the ELRA catalogue for community re-use.

Over the last decade, ELRA has compiled a large list of resources into its Catalogue of LRs. ELRA has negotiated distribution rights with the LR owners and made such resources available under fair conditions and within a clear legal framework. Following this initiative, ELRA has also worked on LR discovery and identification with a dedicated team which investigated and listed existing and valuable resources in its 'Universal Catalogue', a list of resources that could be negotiated on a case-by-case basis. At LREC 2010, ELRA introduced the LRE Map, an inventory of LRs, whether developed or used, that were described in LREC papers. This huge inventory listed by the authors themselves constitutes the first 'community-built' catalogue of existing or emerging resources, constantly enriched and updated at major conferences.

Considering the latest trends on easing the sharing of LRs, from both legal and commercial points of view, ELRA is taking a major role in META-SHARE, a large European open infrastructure for sharing LRs. This infrastructure will allow LR owners, providers and distributors to distribute their LRs through an additional and cost-effective channel.

To obtain the available sets of LRs, please visit the web page below and follow the instructions given online:
http://www.elra.info/Free-LRs,26.html


5-2-3 LDC Newsletter (May 2013)

 

 

In this newsletter:

- LDC at ICASSP 2013
- Early renewing members save on fees
- Commercial use and LDC data

New publications:

- GALE Arabic-English Parallel Aligned Treebank -- Newswire
- MADCAT Phase 2 Training Set

LDC at ICASSP 2013

   

LDC will be at ICASSP 2013, the world’s largest and most comprehensive technical conference focused on signal processing and its applications. The event will be held over May 26-31 and we look forward to interacting with members of this community at our exhibit table and during our poster and paper presentations:

Tuesday, May 28, 15:30 - 17:30, Poster Area D
ARTICULATORY TRAJECTORIES FOR LARGE-VOCABULARY SPEECH RECOGNITION
Authors: Vikramjit Mitra, Wen Wang, Andreas Stolcke, Hosung Nam, Colleen Richey, Jiahong Yuan (LDC), Mark Liberman (LDC)

Tuesday, May 28, 16:30 - 16:50, Room 2011
SCALE-SPACE EXPANSION OF ACOUSTIC FEATURES IMPROVES SPEECH EVENT DETECTION
Authors: Neville Ryant, Jiahong Yuan, Mark Liberman (all LDC)

Wednesday, May 29, 15:20 - 17:20, Poster Area D
USING MULTIPLE VERSIONS OF SPEECH INPUT IN PHONE RECOGNITION
Authors: Mark Liberman (LDC), Jiahong Yuan (LDC), Andreas Stolcke, Wen Wang, Vikramjit Mitra

Please look for LDC’s exhibition at Booth #53 in the Vancouver Convention Centre. We hope to see you there!

   


     

       

Early renewing members save on fees

To date just over 100 organizations have joined for Membership Year (MY) 2013. For the sixth straight year, LDC's early renewal discount program has resulted in significant savings for our members. Organizations that renewed membership or joined early for MY2013 saved over US$50,000! MY 2012 members are still eligible for a 5% discount when renewing for MY2013. This discount will apply throughout 2013.

Organizations joining LDC can take advantage of membership benefits including free membership year data as well as discounts on older LDC corpora. For-profit members can use most LDC data for commercial applications. Please visit our Members FAQ for further information.

   

Commercial use and LDC data

   

Has your company obtained an LDC database as a non-member? For-profit organizations are reminded that an LDC membership is a pre-requisite for obtaining a commercial license to almost all LDC databases. Non-member organizations, including non-member for-profit organizations, cannot use LDC data to develop or test products for commercialization, nor can they use LDC data in any commercial product or for any commercial purpose. LDC data users should consult corpus-specific license agreements for limitations on the use of certain corpora. In the case of a small group of corpora such as American National Corpus (ANC) Second Release (LDC2005T35), Buckwalter Arabic Morphological Analyzer Version 2.0 (LDC2004L02), CELEX2 (LDC96L14) and all CSLU corpora, commercial licenses must be obtained separately from the owners of the data even if an organization is a for-profit member.

   

     

   

New publications

   

(1) GALE Arabic-English Parallel Aligned Treebank -- Newswire (LDC2013T10) was developed by LDC and contains 267,520 tokens of word-aligned Arabic and English parallel text with treebank annotations. This material was used as training data in the DARPA GALE (Global Autonomous Language Exploitation) program. Parallel aligned treebanks are treebanks annotated with morphological and syntactic structures aligned at the sentence level and the sub-sentence level. Such data sets are useful for natural language processing and related fields, including automatic word alignment system training and evaluation, transfer-rule extraction, word sense disambiguation, translation lexicon extraction and cultural heritage and cross-linguistic studies. With respect to machine translation system development, parallel aligned treebanks may improve system performance with enhanced syntactic parsers, better rules and knowledge about language pairs and reduced word error rate.

   

In this release, the source Arabic data was translated into English. Arabic and English treebank annotations were performed independently. The parallel texts were then word aligned. The material in this corpus corresponds to the Arabic treebanked data appearing in Arabic Treebank: Part 3 v 3.2 (LDC2010T08) (ATB) and to the English treebanked data in English Translation Treebank: An-Nahar Newswire (LDC2012T02).

   

The source data consists of Arabic newswire from the Lebanese publication An Nahar collected by LDC in 2002. All data is encoded as UTF-8. A count of files, words, tokens and segments is below.

                                                                                                                                                       

           

Language   Files   Words     Tokens    Segments
Arabic     364     182,351   267,520   7,711

         

   

Note: Word count is based on the untokenized Arabic source and token count is based on the ATB-tokenized Arabic source.

   

The purpose of the GALE word alignment task was to find correspondences between words, phrases or groups of words in a set of parallel texts. Arabic-English word alignment annotation consisted of the following tasks:

   

     

- Identifying different types of links: translated (correct or incorrect) and not translated (correct or incorrect)
- Identifying sentence segments not suitable for annotation, e.g., blank segments, incorrectly-segmented segments, segments with foreign languages
- Tagging unmatched words attached to other words or phrases

   

   

GALE Arabic-English Parallel Aligned Treebank -- Newswire is distributed via web download.

   

2013 Subscription Members will automatically receive two copies of this data on disc. 2013 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$1750.

   

*

   

(2) MADCAT Phase 2 Training Set (LDC2013T09) contains all training data created by LDC to support Phase 2 of the DARPA MADCAT (Multilingual Automatic Document Classification Analysis and Translation) Program. The data in this release consists of handwritten Arabic documents, scanned at high resolution and annotated for the physical coordinates of each line and token. Digital transcripts and English translations of each document are also provided, with the various content and annotation layers integrated in a single MADCAT XML output.

   

The goal of the MADCAT program is to automatically convert foreign text images into English transcripts. MADCAT Phase 2 data was collected from Arabic source documents in three genres: newswire, weblog and newsgroup text. Arabic-speaking scribes copied documents by hand, following specific instructions on writing style (fast, normal, careful), writing implement (pen, pencil) and paper (lined, unlined). Prior to assignment, source documents were processed to optimize their appearance for the handwriting task, which resulted in some original source documents being broken into multiple pages for handwriting. Each resulting handwritten page was assigned to up to five independent scribes, using different writing conditions.

   

The handwritten, transcribed documents were checked for quality and completeness, then each page was scanned at a high resolution (600 dpi, greyscale) to create a digital version of the handwritten document. The scanned images were then annotated to indicate the physical coordinates of each line and token. Explicit reading order was also labeled, along with any errors produced by the scribes when copying the text. The annotation results in GEDI XML output files (gedi.xml), which include ground truth annotations and source transcripts.

   

The final step was to produce a unified data format that takes multiple data streams and generates a single MADCAT XML output file with all required information. The resulting madcat.xml file has these distinct components: (1) a text layer that consists of the source text, tokenization and sentence segmentation, (2) an image layer that consists of bounding boxes, (3) a scribe demographic layer that consists of scribe ID and partition (train/test) and (4) a document metadata layer.
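
As a rough illustration of how such a layered XML file might be consumed, here is a minimal Python sketch using only the standard library. The element and attribute names are hypothetical placeholders chosen for this sketch; the actual MADCAT XML schema is not reproduced in this announcement.

    import xml.etree.ElementTree as ET

    def read_madcat(path):
        # Walk a (hypothetical) madcat.xml and collect one record per token.
        # Tag and attribute names below are placeholders, not the real MADCAT schema.
        root = ET.parse(path).getroot()
        tokens = []
        for segment in root.iter("segment"):      # sentence segmentation (text layer)
            for token in segment.iter("token"):   # tokenization (text layer)
                tokens.append({
                    "text": (token.text or "").strip(),
                    "bbox": token.get("bbox"),    # bounding box (image layer)
                })
        scribe = root.find(".//scribe")           # scribe demographic layer
        return {
            "scribe_id": scribe.get("id") if scribe is not None else None,
            "partition": scribe.get("partition") if scribe is not None else None,
            "tokens": tokens,
        }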

   

This release includes 27,814 annotation files in both GEDI XML and MADCAT XML formats (gedi.xml and madcat.xml) along with their corresponding scanned image files in TIFF format.

   

MADCAT Phase 2 Training Set is distributed on six DVD-ROMs.

   

2013 Subscription Members will automatically receive two copies of this data on disc. 2013 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$2500.

      


5-2-4 Appen Butler Hill

 

Appen Butler Hill

A global leader in linguistic technology solutions

RECENT CATALOG ADDITIONS—MARCH 2012

1. Speech Databases

1.1 Telephony

Language                 | Database Type  | Catalogue Code | Speakers | Status
Bahasa Indonesia         | Conversational | BAH_ASR001     | 1,002    | Available
Bengali                  | Conversational | BEN_ASR001     | 1,000    | Available
Bulgarian                | Conversational | BUL_ASR001     | 217      | Available shortly
Croatian                 | Conversational | CRO_ASR001     | 200      | Available shortly
Dari                     | Conversational | DAR_ASR001     | 500      | Available
Dutch                    | Conversational | NLD_ASR001     | 200      | Available
Eastern Algerian Arabic  | Conversational | EAR_ASR001     | 496      | Available
English (UK)             | Conversational | UKE_ASR001     | 1,150    | Available
Farsi/Persian            | Scripted       | FAR_ASR001     | 789      | Available
Farsi/Persian            | Conversational | FAR_ASR002     | 1,000    | Available
French (EU)              | Conversational | FRF_ASR001     | 563      | Available
French (EU)              | Voicemail      | FRF_ASR002     | 550      | Available
German                   | Voicemail      | DEU_ASR002     | 890      | Available
Hebrew                   | Conversational | HEB_ASR001     | 200      | Available shortly
Italian                  | Conversational | ITA_ASR003     | 200      | Available shortly
Italian                  | Voicemail      | ITA_ASR004     | 550      | Available
Kannada                  | Conversational | KAN_ASR001     | 1,000    | In development
Pashto                   | Conversational | PAS_ASR001     | 967      | Available
Portuguese (EU)          | Conversational | PTP_ASR001     | 200      | Available shortly
Romanian                 | Conversational | ROM_ASR001     | 200      | Available shortly
Russian                  | Conversational | RUS_ASR001     | 200      | Available
Somali                   | Conversational | SOM_ASR001     | 1,000    | Available
Spanish (EU)             | Voicemail      | ESO_ASR002     | 500      | Available
Turkish                  | Conversational | TUR_ASR001     | 200      | Available
Urdu                     | Conversational | URD_ASR001     | 1,000    | Available

1.2 Wideband

Language          | Database Type | Catalogue Code | Speakers | Status
English (US)      | Studio        | USE_ASR001     | 200      | Available
French (Canadian) | Home/Office   | FRC_ASR002     | 120      | Available
German            | Studio        | DEU_ASR001     | 127      | Available
Thai              | Home/Office   | THA_ASR001     | 100      | Available
Korean            | Home/Office   | KOR_ASR001     | 100      | Available

2. Pronunciation Lexica

Appen Butler Hill has considerable experience in providing a variety of lexicon types. These include:

Pronunciation Lexica providing phonemic representation, syllabification, and stress (primary and secondary as appropriate), as illustrated by the sketch after this list

Part-of-speech tagged Lexica providing grammatical and semantic labels

Other reference text-based materials including spelling/mis-spelling lists, spell-check dictionaries, mappings of colloquial language to standard forms, and orthographic normalization lists.
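
To make the first lexicon type listed above concrete, here is a minimal Python sketch of one way an entry carrying a phonemic representation, syllabification and stress marking could be represented. The field names, the SAMPA-like symbols and the example word are illustrative assumptions only, not Appen Butler Hill's actual delivery format.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class LexiconEntry:
        # Illustrative record layout; an assumption for this sketch,
        # not Appen Butler Hill's delivery format.
        orthography: str
        phonemes: List[str]          # phonemic representation
        syllables: List[List[str]]   # syllabification, as groups of phonemes
        stress: List[int]            # per syllable: 1 primary, 2 secondary, 0 unstressed

    entry = LexiconEntry(
        orthography="pronunciation",
        phonemes=["p", "r", "@", "n", "V", "n", "s", "i", "eI", "S", "@", "n"],
        syllables=[["p", "r", "@"], ["n", "V", "n"], ["s", "i"], ["eI"], ["S", "@", "n"]],
        stress=[0, 2, 0, 1, 0],
    )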

Over a period of 15 years, Appen Butler Hill has generated a significant volume of licensable material for a wide range of languages. For holdings information in a given language or to discuss any customized development efforts, please contact: sales@appenbutlerhill.com

3. Named Entity Corpora

Language      | Catalogue Code | Words
Arabic        | ARB_NER001     | 500,000
English       | ENI_NER001     | 500,000
Farsi/Persian | FAR_NER001     | 500,000
Korean        | KOR_NER001     | 500,000
Japanese      | JPY_NER001     | 500,000
Russian       | RUS_NER001     | 500,000
Mandarin      | MAN_NER001     | 500,000
Urdu          | URD_NER001     | 500,000

These NER corpora contain text material from a variety of sources and are tagged for the following Named Entities: Person, Organization, Location, Nationality, Religion, Facility, Geo-Political Entity, Titles, Quantities.


4. Other Language Resources

Morphological Analyzers – Farsi/Persian & Urdu

Arabic Thesaurus

Language Analysis Documentation – multiple languages

 

For additional information on these resources, please contact: sales@appenbutlerhill.com

5. Customized Requests and Package Configurations

Appen Butler Hill is committed to providing a low-risk, high-quality, reliable solution and has worked in 130+ languages to date, supporting both large global corporations and Government organizations.

We would be glad to discuss any customized requests or package configurations and to prepare a customized proposal to meet your needs.

6. Contact Information

Prithivi Pradeep

Business Development Manager

ppradeep@appenbutlerhill.com

+61 2 9468 6370

Tom Dibert

Vice President, Business Development, North America

tdibert@appenbutlerhill.com

+1-315-339-6165

www.appenbutlerhill.com


5-2-5 OFROM: the first corpus of French spoken in Suisse romande
We would like to announce the online release of OFROM, the first corpus of French spoken in Suisse romande (French-speaking Switzerland). In its current version, the archive contains about 15 hours of speech. It is transcribed in standard orthography using the Praat software. A concordancer makes it possible to search the corpus and to download the sound excerpts associated with the transcriptions.

To access the data and consult a more complete description of the corpus, please visit: http://www.unine.ch/ofrom

5-2-6 Real-world 16-channel noise recordings

We are happy to announce the release of DEMAND, a set of real-world
16-channel noise recordings designed for the evaluation of microphone
array processing techniques.

http://www.irisa.fr/metiss/DEMAND/

1.5 h of noise data were recorded in 18 different indoor and outdoor
environments and are available under the terms of the Creative Commons Attribution-ShareAlike License.
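
One typical use of such recordings is to add a noise excerpt to clean multichannel speech at a chosen signal-to-noise ratio before running a microphone array algorithm. Below is a minimal Python sketch, assuming the noise is stored as a 16-channel WAV file readable with the soundfile package and that both signals share the same sampling rate; the file names are placeholders, not actual DEMAND file names.

    import numpy as np
    import soundfile as sf

    def mix_at_snr(speech, noise, snr_db):
        # Scale the noise so that the overall speech-to-noise power ratio
        # equals snr_db, then add it. Both arrays are (samples, channels).
        noise = noise[:len(speech)]
        p_speech = np.mean(speech ** 2)
        p_noise = np.mean(noise ** 2)
        gain = np.sqrt(p_speech / (p_noise * 10.0 ** (snr_db / 10.0)))
        return speech + gain * noise

    # Placeholder file names, not actual DEMAND file names.
    speech, sr = sf.read("clean_speech_16ch.wav")      # shape: (samples, 16)
    noise, sr_noise = sf.read("demand_noise_16ch.wav")
    assert sr == sr_noise, "resample one signal if the rates differ"
    sf.write("noisy_16ch.wav", mix_at_snr(speech, noise, snr_db=5.0), sr)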

Joachim Thiemann (CNRS - IRISA)
Nobutaka Ito (University of Tokyo)
Emmanuel Vincent (Inria Nancy - Grand Est)


5-3 Software
5-3-1 Matlab toolbox for glottal analysis

I am pleased to announce that our Matlab toolbox for glottal analysis is now available on the web at:

 

http://tcts.fpms.ac.be/~drugman/Toolbox/

 

This toolbox includes the following modules:

 

- Pitch and voiced-unvoiced decision estimation

- Speech polarity detection

- Glottal Closure Instant determination

- Glottal flow estimation

 

I am also glad to share my PhD thesis, entitled “Glottal Analysis and its Applications”:

http://tcts.fpms.ac.be/~drugman/files/DrugmanPhDThesis.pdf

 

where you will find applications in speech synthesis, speaker recognition, voice pathology detection, and expressive speech analysis.

 

Hoping that this might be useful to you, and hoping to see you soon,

 

Thomas Drugman


5-3-2 ROCme!: a free tool for audio corpora recording and management

ROCme!: new free software for recording and managing audio corpora.

ROCme! enables streamlined, autonomous and paperless management of the recording of read-speech corpora.

Key features:
- free
- compatible with Windows and Mac
- configurable interface for collecting speaker metadata
- speakers scroll through the sentences on screen and record themselves autonomously
- configurable audio format

Download at:
www.ddl.ish-lyon.cnrs.fr/rocme

 

5-3-3 VocalTractLab 2.0: A tool for articulatory speech synthesis

VocalTractLab 2.0: A tool for articulatory speech synthesis

It is my pleasure to announce the release of the new major version 2.0 of VocalTractLab. VocalTractLab is an articulatory speech synthesizer and a tool to visualize and explore the mechanism of speech production with regard to articulation, acoustics, and control. It is available from http://www.vocaltractlab.de/index.php?page=vocaltractlab-download .
Compared to version 1.0, the new version brings many improvements in terms of the implemented models of the vocal tract, the vocal folds, the acoustic simulation, and articulatory control, as well as in terms of the user interface. Most importantly, the new version comes together with a manual.

If you like, give it a try. Reports on bugs and any other feedback are welcome.

Peter Birkholz


5-3-4 Voice analysis toolkit
Having just completed my PhD, I have made the algorithms I developed during it available online: https://github.com/jckane/Voice_Analysis_Toolkit
The so-called Voice Analysis Toolkit contains algorithms for glottal source and voice quality analysis. In making the code available online I hope that people in the speech processing community can benefit from it. I would really appreciate it if you could include a link to this in the software section of the next ISCApad (section 5-3).
 
Thanks for this.
John
 
--
Researcher
 
Phonetics and Speech Laboratory (Room 4074) Arts Block,
Centre for Language and Communication Studies,
School of Linguistics, Speech and Communication Sciences, Trinity College Dublin, College Green, Dublin 2
Phone: (+353) 1 896 1348
Website: http://www.tcd.ie/slscs/postgraduate/phd-masters-research/student-pages/johnkane.php
Check out our workshop!! http://muster.ucd.ie/workshops/iast/


