ISCA - International Speech Communication Association



ISCApad #184

Friday, October 11, 2013 by Chris Wellekens

5 Resources
5-1 Books
5-1-1 G. Bailly, P. Perrier & E. Vatikiotis-Bateson (eds): Audiovisual Speech Processing

'Audiovisual Speech Processing', edited by G. Bailly, P. Perrier & E. Vatikiotis-Bateson, published by Cambridge University Press.

'When we speak, we configure the vocal tract which shapes the visible motions of the face
and the patterning of the audible speech acoustics. Similarly, we use these visible and
audible behaviors to perceive speech. This book showcases a broad range of research
investigating how these two types of signals are used in spoken communication, how they
interact, and how they can be used to enhance the realistic synthesis and recognition of
audible and visible speech. The volume begins by addressing two important questions about
human audiovisual performance: how auditory and visual signals combine to access the
mental lexicon and where in the brain this and related processes take place. It then
turns to the production and perception of multimodal speech and how structures are
coordinated within and across the two modalities. Finally, the book presents overviews
and recent developments in machine-based speech recognition and synthesis of AV speech. '



5-1-2 Fuchs, Susanne / Weirich, Melanie / Pape, Daniel / Perrier, Pascal (eds.): Speech Planning and Dynamics, Publisher: P. Lang

Fuchs, Susanne / Weirich, Melanie / Pape, Daniel / Perrier, Pascal (eds.)

Speech Planning and Dynamics

Frankfurt am Main, Berlin, Bern, Bruxelles, New York, Oxford, Wien, 2012. 277 pp., 50 fig., 8 tables

Speech Production and Perception. Vol. 1

Edited by Susanne Fuchs and Pascal Perrier

Print:

ISBN 978-3-631-61479-2 hb.

SFR 60.00 / €* 52.95 / €** 54.50 / € 49.50 / £ 39.60 / US$ 64.95

eBook :

ISBN 978-3-653-01438-9

SFR 63.20 / €* 58.91 / €** 59.40 / € 49.50 / £ 39.60 / US$ 64.95

Order online: www.peterlang.com


5-1-3 Video archive of Odyssey Speaker and Language Recognition Workshop, Singapore 2012
The Odyssey Speaker and Language Recognition Workshop 2012, the workshop of the ISCA Special Interest Group on Speaker and Language Characterization, was held in Singapore on 25-28 June 2012. Odyssey 2012 is glad to announce that its video recordings have been included in the ISCA Video Archive: http://www.isca-speech.org/iscaweb/index.php/archive/video-archive

5-1-4 Tuomas Virtanen, Rita Singh, Bhiksha Raj (editors): Techniques for Noise Robustness in Automatic Speech Recognition, Wiley

Techniques for Noise Robustness in Automatic Speech Recognition
Tuomas Virtanen, Rita Singh, Bhiksha Raj (editors)
ISBN: 978-1-1199-7088-0
Publisher: Wiley

Automatic speech recognition (ASR) systems are finding increasing use in everyday life. Many of the commonplace environments where these systems are used are noisy, for example when users call a voice search system from a busy cafeteria or street. This can result in degraded speech recordings and adversely affect the performance of speech recognition systems. As the use of ASR systems increases, knowledge of the state of the art in techniques to deal with such problems becomes critical to system and application engineers and researchers who work with or on ASR technologies. This book presents a comprehensive survey of state-of-the-art techniques used to improve the robustness of speech recognition systems to these degrading external influences.

Key features:

* Reviews all the main noise-robust ASR approaches, including signal separation, voice activity detection, robust feature extraction, model compensation and adaptation, missing-data techniques and recognition of reverberant speech.
* Acts as a timely exposition of the topic in light of the more widespread future use of ASR technology in challenging environments.
* Addresses robustness issues and signal degradation, both of which are key concerns for practitioners of ASR.
* Includes contributions from top ASR researchers from leading research units in the field.


5-1-5 Niebuhr, Oliver (ed.): Understanding Prosody: The Role of Context, Function and Communication

Understanding Prosody: The Role of Context, Function and Communication

Ed. by Niebuhr, Oliver

Series: Language, Context and Cognition 13, De Gruyter

http://www.degruyter.com/view/product/186201?format=G or http://linguistlist.org/pubs/books/get-book.cfm?BookID=63238

The volume represents a state-of-the-art snapshot of the research on prosody for phoneticians, linguists and speech technologists. It covers well-known models and languages. How are prosodies linked to speech sounds? What are the relations between prosody and grammar? What does speech perception tell us about prosody, particularly about the constituting elements of intonation and rhythm? The papers of the volume address questions like these with a special focus on how the notion of context-based coding, the knowledge of prosodic functions and the communicative embedding of prosodic elements can advance our understanding of prosody.

 


5-1-6 Albert Di Cristo: « La Prosodie de la Parole : Une Introduction », Editions de Boeck-Solal (296 p.)

Albert Di Cristo: « La Prosodie de la Parole : Une Introduction » ('The Prosody of Speech: An Introduction'), Editions de Boeck-Solal (296 p.).
Contents:
Foreword; Introduction;
Ch. 1: Elements of definition;
Ch. 2: The place of prosody in the language sciences and in the study of communication;
Ch. 3: Prosody on both sides of oral interpersonal communication (production and comprehension);
Ch. 4: Prosody and the brain;
Ch. 5: The material substance of prosody;
Ch. 6: Levels of analysis and representation of prosody;
Ch. 7: Theories and models of prosody and their formal apparatus;
Ch. 8: The plural functionality of prosody;
Ch. 9: The relations between prosody and meaning;
Epilogue;
Suggested reading;
Index of terms;
Index of proper names.

5-2 Database
5-2-1 ELRA - Language Resources Catalogue - Update (2013-09)

*****************************************************************
ELRA - Language Resources Catalogue - Update September 2013
*****************************************************************

We are happy to announce that 5 new Pronunciation Dictionaries from the GlobalPhone database (Croatian, Russian, Spanish (Latin American), Turkish and Vietnamese) are now available in our catalogue.

The GlobalPhone Pronunciation Dictionaries: GlobalPhone is a multilingual speech and text database collected at Karlsruhe University, Germany. The GlobalPhone pronunciation dictionaries contain the pronunciations of all word forms found in the transcription data of the GlobalPhone speech & text database. The pronunciation dictionaries are currently available in 15 languages: Arabic (29230 entries/27059 words), Bulgarian (20193 entries), Croatian (23497 entries/20628 words), Czech (33049 entries/32942 words), French (36837 entries/20710 words), German (48979 entries/46035 words), Hausa (42662 entries/42079 words), Japanese (18094 entries), Polish (36484 entries), Portuguese (Brazilian) (54146 entries/54130 words), Russian (28818 entries/27667 words), Spanish (Latin American) (43264 entries/33960 words), Swedish (about 25000 entries), Turkish (31330 entries/31087 words), and Vietnamese (38504 entries/29974 words). 3 other languages will also be released: Chinese-Mandarin, Korean and Thai.
     
      *** NEW ***
       
ELRA-S0358 GlobalPhone Croatian Pronunciation Dictionary
For more information, see: http://catalog.elra.info/product_info.php?products_id=1207
ELRA-S0359 GlobalPhone Russian Pronunciation Dictionary
For more information, see: http://catalog.elra.info/product_info.php?products_id=1208
ELRA-S0360 GlobalPhone Spanish (Latin American) Pronunciation Dictionary
For more information, see: http://catalog.elra.info/product_info.php?products_id=1209
ELRA-S0361 GlobalPhone Turkish Pronunciation Dictionary
For more information, see: http://catalog.elra.info/product_info.php?products_id=1210
ELRA-S0362 GlobalPhone Vietnamese Pronunciation Dictionary
For more information, see: http://catalog.elra.info/product_info.php?products_id=1211
     
Special prices are offered for a combined purchase of several GlobalPhone languages.

Available GlobalPhone Pronunciation Dictionaries are listed below (click on the links for further details):
ELRA-S0340 GlobalPhone French Pronunciation Dictionary
For more information, see: http://catalog.elra.info/product_info.php?products_id=1197
ELRA-S0341 GlobalPhone German Pronunciation Dictionary
For more information, see: http://catalog.elra.info/product_info.php?products_id=1198
ELRA-S0348 GlobalPhone Japanese Pronunciation Dictionary
For more information, see: http://catalog.elra.info/product_info.php?products_id=1199
ELRA-S0350 GlobalPhone Arabic Pronunciation Dictionary
For more information, see: http://catalog.elra.info/product_info.php?products_id=1200
ELRA-S0351 GlobalPhone Bulgarian Pronunciation Dictionary
For more information, see: http://catalog.elra.info/product_info.php?products_id=1201
ELRA-S0352 GlobalPhone Czech Pronunciation Dictionary
For more information, see: http://catalog.elra.info/product_info.php?products_id=1202
ELRA-S0353 GlobalPhone Hausa Pronunciation Dictionary
For more information, see: http://catalog.elra.info/product_info.php?products_id=1203
ELRA-S0354 GlobalPhone Polish Pronunciation Dictionary
For more information, see: http://catalog.elra.info/product_info.php?products_id=1204
ELRA-S0355 GlobalPhone Portuguese (Brazilian) Pronunciation Dictionary
For more information, see: http://catalog.elra.info/product_info.php?products_id=1205
ELRA-S0356 GlobalPhone Swedish Pronunciation Dictionary
For more information, see: http://catalog.elra.info/product_info.php?products_id=1206
     
For more information on the catalogue, please contact Valérie Mapelli: mapelli@elda.org

Visit our On-line Catalogue: http://catalog.elra.info
Visit the Universal Catalogue: http://universal.elra.info
Archives of ELRA Language Resources Catalogue Updates: http://www.elra.info/LRs-Announcements.html
   

        
   


5-2-2 ELRA releases free Language Resources

ELRA releases free Language Resources
***************************************************

Anticipating users' expectations, ELRA has decided to offer a large number of resources for free for academic research use. The offer consists of several sets of speech, text and multimodal resources that are regularly released, for free, as soon as the legal aspects are cleared. A first set was released in May 2012 on the occasion of LREC 2012. A second set is now being released.

Whenever this is permitted by our licences, please feel free to use these resources for deriving new resources and depositing them with the ELRA catalogue for community re-use.

Over the last decade, ELRA has compiled a large list of resources into its Catalogue of LRs. ELRA has negotiated distribution rights with the LR owners and made such resources available under fair conditions and within a clear legal framework. Following this initiative, ELRA has also worked on LR discovery and identification with a dedicated team which investigated and listed existing and valuable resources in its 'Universal Catalogue', a list of resources that could be negotiated on a case-by-case basis. At LREC 2010, ELRA introduced the LRE Map, an inventory of LRs, whether developed or used, that were described in LREC papers. This huge inventory listed by the authors themselves constitutes the first 'community-built' catalogue of existing or emerging resources, constantly enriched and updated at major conferences.

Considering the latest trends on easing the sharing of LRs, from both legal and commercial points of view, ELRA is taking a major role in META-SHARE, a large European open infrastructure for sharing LRs. This infrastructure will allow LR owners, providers and distributors to distribute their LRs through an additional and cost-effective channel.

To obtain the available sets of LRs, please visit the web page below and follow the instructions given online:
http://www.elra.info/Free-LRs,26.html


5-2-3 LDC Newsletter (September 2013)

In this newsletter:

- New LDC Website Coming Soon
- LDC Spoken Language Sampler - 2nd Release

New publications:

- GALE Phase 2 Arabic Broadcast Conversation Speech Part 2
- GALE Phase 2 Arabic Broadcast Conversation Transcripts Part 2
- Semantic Textual Similarity (STS) 2013 Machine Translation


New LDC Website Coming Soon

Look for LDC's new website in the coming weeks. We've revamped the design and site plan to make it easier than ever to find what you're looking for. The features you use the most -- the catalog, new corpus releases and user login -- will be a short click away. We expect the LDC website to be occasionally unavailable for a few days at the end of September as we make the switch, and thank you in advance for your understanding.
     
   

   

LDC Spoken Language Sampler - 2nd Release

The LDC Spoken Language Sampler - 2nd Release is now available. It contains speech and transcript samples from recent releases and is available at no cost. Follow the link above to the catalog page, download and browse.

   
New publications

(1) GALE Phase 2 Arabic Broadcast Conversation Speech Part 2 was developed by LDC and comprises approximately 128 hours of Arabic broadcast conversation speech collected in 2007 by LDC as part of the DARPA GALE (Global Autonomous Language Exploitation) Program. The data was collected at LDC's Philadelphia, PA, USA facilities and at three remote collection sites. The combined local and outsourced broadcast collection supported GALE at a rate of approximately 300 hours per week of programming from more than 50 broadcast sources, for a total of over 30,000 hours of collected broadcast audio over the life of the program.

   

LDC's local broadcast collection system is highly automated, easily extensible and robust, and capable of collecting, processing and evaluating hundreds of hours of content from several dozen sources per day. The broadcast material is served to the system by a set of free-to-air (FTA) satellite receivers, commercial direct satellite systems (DSS) such as DirecTV, direct broadcast satellite (DBS) receivers, and cable television (CATV) feeds. The mapping between receivers and recorders is dynamic and modular; all signal routing is performed under computer control, using a 256x64 A/V matrix switch. Programs are recorded in a high-bandwidth A/V format and are then processed to extract audio, to generate keyframes and compressed audio/video, to produce time-synchronized closed captions (in the case of North American English) and to generate automatic speech recognition (ASR) output.

   

The broadcast conversation recordings in this release feature interviews, call-in programs and round-table discussions focusing principally on current events from several sources. This release contains 141 audio files presented in .wav format, 16000 Hz single-channel 16-bit PCM. Each file was audited by a native Arabic speaker following Audit Procedure Specification Version 2.0, which is included in this release.
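The stated audio format (16000 Hz, single-channel, 16-bit PCM .wav) can be verified programmatically before processing. The sketch below uses Python's standard-library wave module; the file name is only an example, and the check simply mirrors the format parameters quoted above.

```python
import wave

def check_gale_wav(path):
    """Check that a .wav file matches the stated format
    (16000 Hz, single channel, 16-bit PCM) and return its duration."""
    with wave.open(path, "rb") as w:
        assert w.getframerate() == 16000, "expected 16000 Hz sampling rate"
        assert w.getnchannels() == 1, "expected single-channel audio"
        assert w.getsampwidth() == 2, "expected 16-bit (2-byte) samples"
        return w.getnframes() / w.getframerate()  # duration in seconds
```

A quick sanity check of this kind catches resampled or multi-channel files before they reach a downstream ASR front end.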

   

GALE Phase 2 Arabic Broadcast Conversation Speech Part 2 is distributed on 2 DVD-ROMs.

2013 Subscription Members will automatically receive two copies of this data. 2013 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$2000.
   

   

     

   

(2) GALE Phase 2 Arabic Broadcast Conversation Transcripts Part 2 was developed by LDC and contains transcriptions of approximately 128 hours of Arabic broadcast conversation speech collected in 2007 by LDC, MediaNet (Tunis, Tunisia) and MTC (Rabat, Morocco) during Phase 2 of the DARPA GALE (Global Autonomous Language Exploitation) program. The source broadcast conversation recordings feature interviews, call-in programs and round-table discussions focusing principally on current events from several sources.

   

The transcript files are in plain-text, tab-delimited format (TDF) with UTF-8 encoding, and the transcribed data totals 763,945 tokens. The transcripts were created with the LDC-developed transcription tool XTrans, a multi-platform, multilingual, multi-channel transcription tool that supports manual transcription and annotation of audio recordings.
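Since the transcripts are plain-text, tab-delimited (TDF) files with UTF-8 encoding, they can be loaded with standard tooling. The sketch below is a generic tab-delimited reader; the actual TDF field order is defined in the release documentation, so the example deliberately makes no assumption about column names.

```python
import csv

def read_tdf(path):
    """Load a UTF-8, tab-delimited transcript file as a list of field lists,
    skipping empty lines. Field semantics are defined by the release docs."""
    with open(path, encoding="utf-8", newline="") as f:
        return [row for row in csv.reader(f, delimiter="\t") if row]
```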

   

The files in this corpus were transcribed by LDC staff and/or by transcription vendors under contract to LDC. Transcribers followed LDC's quick transcription guidelines (QTR) and quick rich transcription specification (QRTR), both of which are included in the documentation with this release. QTR transcription consists of quick (near-)verbatim, time-aligned transcripts plus speaker identification with minimal additional mark-up. It does not include sentence unit annotation. QRTR annotation adds structural information such as topic boundaries and manual sentence unit annotation to the core components of a quick transcript.

   

GALE Phase 2 Arabic Broadcast Conversation Transcripts Part 2 is distributed via web download.

2013 Subscription Members will automatically receive two copies of this data on disc. 2013 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$1500.
   

   

   

   

(3) Semantic Textual Similarity (STS) 2013 Machine Translation was developed as part of the STS 2013 Shared Task, which was held in conjunction with *SEM 2013, the second joint conference on lexical and computational semantics organized by the ACL (Association for Computational Linguistics) interest groups SIGLEX and SIGSEM. It comprises one text file containing 750 English sentence pairs translated from Arabic and Chinese newswire and web data sources.

   

The goal of the Semantic Textual Similarity (STS) task was to create a unified framework for the evaluation of semantic textual similarity modules and to characterize their impact on natural language processing (NLP) applications. STS measures the degree of semantic equivalence between two texts. The task was proposed as an attempt to enable an extrinsic evaluation of multiple semantic components that have historically tended to be evaluated independently and without characterization of their impact on NLP applications. More information is available at the STS 2013 Shared Task homepage.

   

The source data is Arabic and Chinese newswire and web data collected by LDC that was translated and used in the DARPA GALE (Global Autonomous Language Exploitation) program and in several NIST Open Machine Translation evaluations. Of the 750 sentence pairs, 150 pairs are from the GALE Phase 5 collection and 600 pairs are from the NIST 2008-2012 Open Machine Translation (OpenMT) Progress Test Sets (LDC2013T07).

   

The data was built to identify semantic textual similarity between two short text passages. The corpus is comprised of two tab-delimited sentences per line. The first sentence is a translation and the second sentence is a post-edited translation. Post-editing is a process to improve machine translation with a minimum of manual labor. The gold standard similarity values and other STS datasets can be obtained from the STS homepage, linked above.
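Given the two-sentences-per-line, tab-delimited layout described above, each record can be loaded with a single split. This is a minimal sketch assuming UTF-8 text and exactly one tab per line; the file name is illustrative.

```python
def load_sts_pairs(path):
    """Read (translation, post-edited translation) pairs, one tab-separated
    pair per line, skipping blank lines."""
    pairs = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:
                continue
            translation, post_edit = line.split("\t")
            pairs.append((translation, post_edit))
    return pairs
```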

   

Semantic Textual Similarity (STS) 2013 Machine Translation is distributed via web download.

2013 Subscription Members will automatically receive two copies of this data on disc. 2013 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may request this data by submitting a signed copy of the LDC User Agreement for Non-members. This data is available at no cost.

      


5-2-4 Appen Butler Hill

 

Appen Butler Hill

A global leader in linguistic technology solutions

RECENT CATALOG ADDITIONS—MARCH 2012

1. Speech Databases

1.1 Telephony


Language | Database Type | Catalogue Code | Speakers | Status
Bahasa Indonesia | Conversational | BAH_ASR001 | 1,002 | Available
Bengali | Conversational | BEN_ASR001 | 1,000 | Available
Bulgarian | Conversational | BUL_ASR001 | 217 | Available shortly
Croatian | Conversational | CRO_ASR001 | 200 | Available shortly
Dari | Conversational | DAR_ASR001 | 500 | Available
Dutch | Conversational | NLD_ASR001 | 200 | Available
Eastern Algerian Arabic | Conversational | EAR_ASR001 | 496 | Available
English (UK) | Conversational | UKE_ASR001 | 1,150 | Available
Farsi/Persian | Scripted | FAR_ASR001 | 789 | Available
Farsi/Persian | Conversational | FAR_ASR002 | 1,000 | Available
French (EU) | Conversational | FRF_ASR001 | 563 | Available
French (EU) | Voicemail | FRF_ASR002 | 550 | Available
German | Voicemail | DEU_ASR002 | 890 | Available
Hebrew | Conversational | HEB_ASR001 | 200 | Available shortly
Italian | Conversational | ITA_ASR003 | 200 | Available shortly
Italian | Voicemail | ITA_ASR004 | 550 | Available
Kannada | Conversational | KAN_ASR001 | 1,000 | In development
Pashto | Conversational | PAS_ASR001 | 967 | Available
Portuguese (EU) | Conversational | PTP_ASR001 | 200 | Available shortly
Romanian | Conversational | ROM_ASR001 | 200 | Available shortly
Russian | Conversational | RUS_ASR001 | 200 | Available
Somali | Conversational | SOM_ASR001 | 1,000 | Available
Spanish (EU) | Voicemail | ESO_ASR002 | 500 | Available
Turkish | Conversational | TUR_ASR001 | 200 | Available
Urdu | Conversational | URD_ASR001 | 1,000 | Available

1.2 Wideband

Language | Database Type | Catalogue Code | Speakers | Status
English (US) | Studio | USE_ASR001 | 200 | Available
French (Canadian) | Home/Office | FRC_ASR002 | 120 | Available
German | Studio | DEU_ASR001 | 127 | Available
Thai | Home/Office | THA_ASR001 | 100 | Available
Korean | Home/Office | KOR_ASR001 | 100 | Available

2. Pronunciation Lexica

Appen Butler Hill has considerable experience in providing a variety of lexicon types. These include:

Pronunciation Lexica providing phonemic representation, syllabification, and stress (primary and secondary as appropriate)

Part-of-speech tagged Lexica providing grammatical and semantic labels

Other reference text-based materials including spelling/mis-spelling lists, spell-check dictionaries, mappings of colloquial language to standard forms, and orthographic normalization lists.

Over a period of 15 years, Appen Butler Hill has generated a significant volume of licensable material for a wide range of languages. For holdings information in a given language or to discuss any customized development efforts, please contact: sales@appenbutlerhill.com

3. Named Entity Corpora

Language | Catalogue Code | Words
Arabic | ARB_NER001 | 500,000
English | ENI_NER001 | 500,000
Farsi/Persian | FAR_NER001 | 500,000
Korean | KOR_NER001 | 500,000
Japanese | JPY_NER001 | 500,000
Russian | RUS_NER001 | 500,000
Mandarin | MAN_NER001 | 500,000
Urdu | URD_NER001 | 500,000

These NER Corpora contain text material from a variety of sources and are tagged for the following Named Entities: Person, Organization, Location, Nationality, Religion, Facility, Geo-Political Entity, Titles, Quantities.


4. Other Language Resources

Morphological Analyzers – Farsi/Persian & Urdu

Arabic Thesaurus

Language Analysis Documentation – multiple languages

 

For additional information on these resources, please contact: sales@appenbutlerhill.com

5. Customized Requests and Package Configurations

Appen Butler Hill is committed to providing a low-risk, high-quality, reliable solution and has worked in 130+ languages to date, supporting both large global corporations and Government organizations.

We would be glad to discuss any customized requests or package configurations and prepare a customized proposal to meet your needs.

6. Contact Information

Prithivi Pradeep

Business Development Manager

ppradeep@appenbutlerhill.com

+61 2 9468 6370

Tom Dibert

Vice President, Business Development, North America

tdibert@appenbutlerhill.com

+1-315-339-6165

www.appenbutlerhill.com


5-2-5 OFROM: first corpus of French spoken in French-speaking Switzerland
We would like to announce the release of OFROM, the first corpus of French spoken in French-speaking Switzerland. In its current version, the archive contains about 15 hours of speech, transcribed in standard orthography with the Praat software. A concordancer makes it possible to search the corpus and to download the audio extracts associated with the transcriptions.

To access the data and read a fuller description of the corpus, please visit: http://www.unine.ch/ofrom

5-2-6 Real-world 16-channel noise recordings

We are happy to announce the release of DEMAND, a set of real-world
16-channel noise recordings designed for the evaluation of microphone
array processing techniques.

http://www.irisa.fr/metiss/DEMAND/

1.5 h of noise data were recorded in 18 different indoor and outdoor
environments and are available under the terms of the Creative Commons Attribution-ShareAlike License.

Joachim Thiemann (CNRS - IRISA)
Nobutaka Ito (University of Tokyo)
Emmanuel Vincent (Inria Nancy - Grand Est)


5-2-7Aide à la finalisation de corpus oraux ou multimodaux pour diffusion, valorisation et dépôt pérenne

Aide à la finalisation de corpus oraux ou multimodaux pour diffusion, valorisation et dépôt pérenne

 

 

Le consortium IRCOM de la TGIR Corpus et l’EquipEx ORTOLANG s’associent pour proposer une aide technique et financière à la finalisation de corpus de données orales ou multimodales à des fins de diffusion et pérennisation par l’intermédiaire de l’EquipEx ORTOLANG. Cet appel ne concerne pas la création de nouveaux corpus mais la finalisation de corpus existants et non-disponibles de manière électronique. Par finalisation, nous entendons le dépôt auprès d’un entrepôt numérique public, et l’entrée dans un circuit d’archivage pérenne. De cette façon, les données de parole qui ont été enrichies par vos recherches vont pouvoir être réutilisées, citées et enrichies à leur tour de manière cumulative pour permettre le développement de nouvelles connaissances, selon les conditions d’utilisation que vous choisirez (sélection de licences d’utilisation correspondant à chacun des corpus déposés).

 

Cet appel d’offre est soumis à plusieurs conditions (voir ci-dessous) et l’aide financière par projet est limitée à 3000 euros. Les demandes seront traitées dans l’ordre où elles seront reçues par l’ IRCOM. Les demandes émanant d’EA ou de petites équipes ne disposant pas de support technique « corpus » seront traitées prioritairement. Les demandes sont à déposer du 1er septembre 2013 au 31 octobre 2013. La décision de financement relèvera du comité de pilotage d’IRCOM. Les demandes non traitées en 2013 sont susceptibles de l’être en 2014. Si vous avez des doutes quant à l’éligibilité de votre projet, n’hésitez pas à nous contacter pour que nous puissions étudier votre demande et adapter nos offres futures.

 

Pour palier la grande disparité dans les niveaux de compétences informatiques des personnes et groupes de travail produisant des corpus, L’ IRCOM propose une aide personnalisée à la finalisation de corpus. Celle-ci sera réalisée par un ingénieur IRCOM en fonction des demandes formulées et adaptées aux types de besoin, qu’ils soient techniques ou financiers.

 

Les conditions nécessaires pour proposer un corpus à finaliser et obtenir une aide d’IRCOM sont :

  • Pouvoir prendre toutes décisions concernant l’utilisation et la diffusion du corpus (propriété intellectuelle en particulier).

  • Disposer de toutes les informations concernant les sources des corpus et le consentement des personnes enregistrées ou filmées.

  • Accorder un droit d’utilisation libre des données ou au minimum un accès libre pour la recherche scientifique.

 

Les demandes peuvent concerner tout type de traitement : traitements de corpus quasi-finalisés (conversion, anonymisation), alignement de corpus déjà transcrits, conversion depuis des formats « traitement de textes », digitalisation de support ancien. Pour toute demande exigeant une intervention manuelle importante, les demandeurs devront s’investir en moyens humains ou financiers à la hauteur des moyens fournis par IRCOM et ORTOLANG.

 

IRCOM est conscient du caractère exceptionnel et exploratoire de cette démarche. Il convient également de rappeler que ce financement est réservé aux corpus déjà largement constitués et ne peuvent intervenir sur des créations ex-nihilo. Pour ces raisons de limitation de moyens, les propositions de corpus les plus avancés dans leur réalisation pourront être traitées en priorité, en accord avec le CP d’IRCOM. Il n’y a toutefois pas de limite « théorique » aux demandes pouvant être faites, IRCOM ayant la possibilité de rediriger les demandes qui ne relèvent pas de ses compétences vers d’autres interlocuteurs.

 

Les propositions de réponse à cet appel d’offre sont à envoyer à ircom.appel.corpus@gmail.com. Les propositions doivent utiliser le formulaire de deux pages figurant ci-dessous. Dans tous les cas, une réponse personnalisée sera renvoyée par IRCOM.

 

Ces propositions doivent présenter les corpus proposés, les données sur les droits d’utilisation et de propriétés et sur la nature des formats ou support utilisés.

 

Cet appel est organisé sous la responsabilité d’IRCOM avec la participation financière conjointe de IRCOM et l’EquipEx ORTOLANG.

 

Pour toute information complémentaire, nous rappelons que le site web de l'Ircom (http://ircom.corpus-ir.fr) est ouvert et propose des ressources à la communauté : glossaire, inventaire des unités et des corpus, ressources logicielles (tutoriaux, comparatifs, outils de conversion), activités des groupes de travail, actualités des formations, ...

L'IRCOM invite les unités à inventorier leur corpus oraux et multimodaux - 70 projets déjà recensés - pour avoir une meilleure visibilité des ressources déjà disponibles même si elles ne sont pas toutes finalisées.

 

The IRCOM steering committee

 

 

Please use the form below to respond to the call. Thank you.

 

Response to the call for finalization of an oral or multimodal corpus

 

Corpus name:

 

Contact person:

Email address:

Telephone number:

 

Nature of the corpus data:

 

Are there recordings?

Which media? Audio, video, other…

What is the total length of the recordings? (number of tapes, number of hours, etc.)

What type of medium?

What format (if known)?

 

Are there transcriptions?

In what format? (paper, word processor, transcription software)

What quantity (in hours, number of words, or number of transcriptions)?

 

Do you have metadata (a statement of copyright and usage rights)?

 

Do you have a precise description of the people recorded?

 

Do you have informed-consent statements for the people who were recorded? In what year (approximately) were the recordings made?

 

What is the language of the recordings?

 

Does the corpus include recordings of children or of people with a language disorder or other pathology?

If so, which population?

 

 

To work efficiently and advise you as quickly as possible, we need examples of the transcriptions or recordings in your possession. We will contact you about this, but you can already send us by email a sample of the data you hold (transcriptions, metadata, the address of a web page containing the recordings).

 

We thank you in advance for your interest in this proposal. For any further information, please contact Martine Toda at martine.toda@ling.cnrs.fr or ircom.appel.corpus@gmail.com.


5-2-8Rhapsodie: a Prosodic and Syntactic Treebank for Spoken French


Rhapsodie: a Prosodic and Syntactic Treebank for Spoken French

We are pleased to announce that Rhapsodie, a syntactic and prosodic treebank of spoken French created to model the interface between prosody, syntax and discourse, is now available at http://www.projet-rhapsodie.fr/

The Rhapsodie treebank is made up of 57 short samples of spoken French (5 minutes long on average, amounting to 3 hours of speech and a 33,000-word corpus) endowed with an orthographic, phoneme-aligned transcription.

The corpus is representative of different genres (private and public speech; monologues and dialogues; face-to-face interviews and broadcasts; more or less interactive discourse; descriptive, argumentative and procedural samples, variations in planning type).

The corpus samples have mainly been drawn from existing corpora of spoken French, and partially created within the framework of the Rhapsodie project. We would especially like to thank the coordinators of the CFPP2000, PFC, ESLO and C-Prom projects, as well as Piet Mertens, Mathieu Avanzi, Anne Lacheret and Nicolas Obin.

The sound samples (wave, MP3, cleaned and stylized pitch), the orthographic transcriptions (txt), the macrosyntactic annotations (txt), the prosodic annotations (xml, textgrid) and the metadata (xml and html) can be freely downloaded under the terms of the Creative Commons Attribution - Noncommercial - Share Alike 3.0 France licence.
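For readers who download the prosodic annotations in textgrid format, the following is a toy illustration of how the interval records of a long-format Praat TextGrid can be read. This is not a tool provided by the Rhapsodie project, and the sample content below is invented:

```python
import re

def read_intervals(textgrid_text):
    """Extract (xmin, xmax, label) triples from a long-format Praat TextGrid.

    Deliberately minimal: it scans for interval records and ignores tier
    structure, so intervals from all tiers are lumped together.
    """
    pattern = re.compile(
        r'xmin\s*=\s*([\d.]+)\s*'
        r'xmax\s*=\s*([\d.]+)\s*'
        r'text\s*=\s*"([^"]*)"')
    return [(float(a), float(b), label)
            for a, b, label in pattern.findall(textgrid_text)]

# Invented two-interval fragment in the long TextGrid layout.
sample = '''
    intervals [1]:
        xmin = 0
        xmax = 0.42
        text = "nous"
    intervals [2]:
        xmin = 0.42
        xmax = 0.97
        text = "avons"
'''
print(read_intervals(sample))
```

A dedicated TextGrid library would be preferable for real work; this sketch only shows the shape of the data.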

Microsyntactic annotations will be available soon.

The metadata are also searchable online through a browser.

The prosodic annotation can be explored online through the Rhapsodie Query Language.

Tutorials for transcription, annotation and the Rhapsodie Query Language are available on the site.

 

The Rhapsodie team (Modyco, Université Paris Ouest Nanterre):

Sylvain Kahane, Anne Lacheret, Paola Pietrandrea, Atanas Tchobanov, Arthur Truong.

Partners: IRCAM (Paris), LATTICE (Paris), LPL (Aix-en-Provence), CLLE-ERSS (Toulouse).


5-2-9COVAREP: A Cooperative Voice Analysis Repository for Speech Technologies
======================
CALL for contributions
======================
 
We are pleased to announce the creation of an open-source repository of advanced speech processing algorithms called COVAREP (A Cooperative Voice Analysis Repository for Speech Technologies). COVAREP has been created as a GitHub project (https://github.com/covarep/covarep) where researchers in speech processing can store original implementations of published algorithms.
 
Over the past few decades, a vast array of advanced speech processing algorithms has been developed, often offering significant improvements over the existing state of the art. Such algorithms can be quite complex and, hence, difficult to re-implement accurately from article descriptions alone. Another issue is the so-called 'bug magnet' effect, with re-implementations frequently differing significantly from the original. As a consequence, many promising developments have been under-exploited or discarded, and researchers tend to stick to conventional analysis methods.
 
With the COVAREP repository we hope to address this by encouraging authors to contribute original implementations of their algorithms, resulting in a single de facto version for the speech community to refer to.
 
We envisage a range of benefits to the repository:
1) Reproducible research: COVAREP will allow fairer comparison of algorithms in published articles.
2) Encouraged usage: the free availability of these algorithms will encourage researchers from a wide range of speech-related disciplines (both in academia and industry) to exploit them for their own applications.
3) Feedback: as a GitHub project users will be able to offer comments on algorithms, report bugs, suggest improvements etc.
 
SCOPE
We welcome contributions from a wide range of speech processing areas, including (but not limited to): Speech analysis, synthesis, conversion, transformation, enhancement, speech quality, glottal source/voice quality analysis, etc.
 
REQUIREMENTS
In order to achieve a reasonable standard of consistency and homogeneity across algorithms, we have compiled a list of requirements for prospective contributors to the repository. The requirements are not, however, intended to be so strict as to discourage contributions.
  • Only published work can be added to the repository.
  • The code must be available as open source.
  • Algorithms should be coded in Matlab; however, we strongly encourage authors to make the code compatible with Octave in order to maximize usability.
  • Contributions have to comply with a coding convention (see the GitHub site for the convention and a template). The convention only normalizes the inputs/outputs and the documentation; there is no restriction on the content of the functions (though comments are obviously encouraged).
 
LICENCE
Getting contributing institutions to agree to a homogeneous IP policy would be close to impossible. As a result, COVAREP is a repository and not a toolbox, and each algorithm has its own licence associated with it. Though flexible about licence types, contributions need a licence compatible with the repository, e.g. GPL, LGPL, X11, Apache, MIT or similar. We encourage contributors to try to obtain LGPL licences from their institutions in order to be more industry friendly.
 
CONTRIBUTE!
We believe that the COVAREP repository has great potential benefit for the speech research community, and we hope that you will consider contributing your published algorithms to it. If you have any questions, comments, issues, etc. regarding COVAREP, please contact us at one of the email addresses below. Please forward this email to others who may be interested.
 
Existing contributions include algorithms for spectral envelope modelling, adaptive sinusoidal modelling, fundamental frequency estimation, voicing decision and glottal closure instant detection, and methods for detecting non-modal phonation types.
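To give a flavour of the algorithm families listed above (and emphatically not code from the COVAREP repository, whose contributed methods are far more sophisticated), a bare-bones autocorrelation-based F0 estimator for a single voiced frame might look like:

```python
import numpy as np

def estimate_f0(frame, fs, fmin=60.0, fmax=400.0):
    """Toy F0 estimate for one voiced frame via the autocorrelation peak."""
    frame = frame - np.mean(frame)
    # Keep the non-negative lags of the full autocorrelation sequence.
    ac = np.correlate(frame, frame, mode='full')[len(frame) - 1:]
    lo, hi = int(fs / fmax), int(fs / fmin)   # plausible pitch-period lags
    lag = lo + np.argmax(ac[lo:hi])
    return fs / lag

fs = 16000
t = np.arange(int(0.04 * fs)) / fs            # one 40 ms synthetic frame
frame = np.sin(2 * np.pi * 120.0 * t)         # pure 120 Hz tone
print(estimate_f0(frame, fs))                 # close to 120 Hz
```

Real contributions handle voicing decisions, octave errors, and noise robustness, which this sketch deliberately omits.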
 
Gilles Degottex <degottex@csd.uoc.gr>, John Kane <kanejo@tcd.ie>, Thomas Drugman <thomas.drugman@umons.ac.be>, Tuomo Raitio <tuomo.raitio@aalto.fi>, Stefan Scherer <scherer@ict.usc.edu>
 
 

5-3 Software
5-3-1ROCme!: a free tool for audio corpora recording and management

ROCme!: a new free tool for recording and managing audio corpora.

ROCme! enables streamlined, autonomous, and paperless recording of read-speech corpora.

Key features:
- free
- compatible with Windows and Mac
- configurable interface for collecting speaker metadata
- speakers scroll through the sentences on screen and record themselves autonomously
- configurable audio format

Download at:
www.ddl.ish-lyon.cnrs.fr/rocme

 

5-3-2VocalTractLab 2.0: A tool for articulatory speech synthesis

VocalTractLab 2.0: A tool for articulatory speech synthesis

It is my pleasure to announce the release of the new major version 2.0 of VocalTractLab. VocalTractLab is an articulatory speech synthesizer and a tool to visualize and explore the mechanism of speech production with regard to articulation, acoustics, and control. It is available from http://www.vocaltractlab.de/index.php?page=vocaltractlab-download .
Compared to version 1.0, the new version brings many improvements in terms of the implemented models of the vocal tract, the vocal folds, the acoustic simulation, and articulatory control, as well as in terms of the user interface. Most importantly, the new version comes together with a manual.

If you like, give it a try. Reports on bugs and any other feedback are welcome.

Peter Birkholz


5-3-3Voice analysis toolkit
After just completing my PhD, I have made the algorithms I developed during it available online: https://github.com/jckane/Voice_Analysis_Toolkit
The Voice Analysis Toolkit contains algorithms for glottal source and voice quality analysis. In making the code available online, I hope that people in the speech processing community can benefit from it.

John Kane

Researcher
Phonetics and Speech Laboratory (Room 4074), Arts Block,
Centre for Language and Communication Studies,
School of Linguistics, Speech and Communication Sciences, Trinity College Dublin, College Green, Dublin 2
Phone: (+353) 1 896 1348
Website: http://www.tcd.ie/slscs/postgraduate/phd-masters-research/student-pages/johnkane.php
Workshop: http://muster.ucd.ie/workshops/iast/

5-3-4Bob signal-processing and machine learning toolbox (v1.2.0)

The release 1.2.0 of the Bob signal-processing and machine learning toolbox is available.

Bob provides both efficient implementations of several machine learning algorithms and a framework to help researchers publish reproducible research. It is developed by the Biometrics Group at Idiap in Switzerland.

The previous release of Bob provided:
* image, video and audio IO interfaces such as jpg, avi and wav,
* database accessors such as FRGC, Labeled Faces in the Wild, and many others,
* image processing: Local Binary Patterns (LBPs), Gabor jets, SIFT,
* machines and trainers such as Support Vector Machines (SVMs), k-means, Gaussian Mixture Models (GMMs), Inter-Session Variability modelling (ISV), Joint Factor Analysis (JFA), Probabilistic Linear Discriminant Analysis (PLDA), and the Bayesian intra/extra (personal) classifier.

The new release brings the following features and improvements:
* a unified implementation of Local Binary Patterns (LBPs),
* an implementation of Histograms of Oriented Gradients (HOG),
* a total variability (i-vector) implementation,
* a conjugate-gradient-based implementation of logistic regression,
* an improved multi-layer perceptron implementation (back-propagation can now easily be combined with any optimizer, e.g. L-BFGS),
* a pseudo-inverse-based method for Linear Discriminant Analysis,
* a covariance-based method for Principal Component Analysis,
* whitening and within-class covariance normalization techniques,
* a module for object detection and keypoint localization (bob.visioner),
* a module for audio processing, including feature extraction such as LFCC and MFCC,
* improved extensions (satellite packages) that now support both Python and C++ code within an easy-to-use framework,
* improved documentation and new tutorials,
* support for Intel's MKL (in addition to ATLAS),
* extended platform support (Arch Linux).

This release represents a major milestone in Bob, with many functionality improvements (more than 640 commits in total) and many bug fixes.
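As a side note for readers unfamiliar with one of the bullets above, the covariance-based method for Principal Component Analysis refers to standard linear algebra: eigendecomposition of the sample covariance matrix. The sketch below illustrates that computation in plain NumPy; it is not Bob's API, and the array shapes are only an example:

```python
import numpy as np

def pca_by_covariance(X, n_components):
    """Covariance-based PCA: eigendecompose the sample covariance of X.

    X has shape (n_samples, n_features); returns the mean and the top
    n_components principal directions as columns.
    """
    mean = X.mean(axis=0)
    cov = np.cov(X - mean, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:n_components]
    return mean, eigvecs[:, order]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                        # toy data
mean, components = pca_by_covariance(X, 2)
print(components.shape)                              # (5, 2)
```

The alternative mentioned for LDA (a pseudo-inverse-based method) follows the same spirit: replacing a direct matrix inverse with a numerically safer factorization.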

Sources, documentation and binary packages are available:
* Ubuntu: 10.04, 12.04, 12.10 and 13.04
* Mac OS X: 10.6 (Snow Leopard), 10.7 (Lion) and 10.8 (Mountain Lion)

For instructions on how to install the pre-packaged version on Ubuntu or OS X, consult the quick installation instructions (N.B. the OS X MacPorts package has not yet been upgraded; this will be done very soon, cf. https://trac.macports.org/ticket/39831).

Best regards,
Elie Khoury (on behalf of the Biometrics Group at Idiap, led by Sébastien Marcel)

--
Dr. Elie Khoury
Postdoctoral researcher, Biometric Person Recognition Group
Idiap Research Institute (Switzerland)
Tel: +41 27 721 77 23



