ISCA - International Speech Communication Association



ISCApad #201

Wednesday, March 11, 2015 by Chris Wellekens

5 Resources
5-1 Books
5-1-1 Tuomas Virtanen, Rita Singh, Bhiksha Raj (editors), Techniques for Noise Robustness in Automatic Speech Recognition, Wiley

Techniques for Noise Robustness in Automatic Speech Recognition
Tuomas Virtanen, Rita Singh, Bhiksha Raj (editors)
ISBN: 978-1-119-97088-0
Publisher: Wiley

Automatic speech recognition (ASR) systems are finding increasing use in everyday life. Many of the commonplace environments where the systems are used are noisy, for example users calling up a voice search system from a busy cafeteria or a street. This can result in degraded speech recordings and adversely affect the performance of speech recognition systems. As the use of ASR systems increases, knowledge of the state-of-the-art in techniques to deal with such problems becomes critical to system and application engineers and researchers who work with or on ASR technologies. This book presents a comprehensive survey of the state-of-the-art in techniques used to improve the robustness of speech recognition systems to these degrading external influences.

Key features:

*Reviews all the main noise-robust ASR approaches, including signal separation, voice activity detection, robust feature extraction, model compensation and adaptation, missing-data techniques and recognition of reverberant speech.
*Acts as a timely exposition of the topic in light of the increasingly widespread use of ASR technology in challenging environments.
*Addresses robustness issues and signal degradation, both key concerns for practitioners of ASR.
*Includes contributions from top ASR researchers from leading research units in the field.


5-1-2 Niebuhr, Oliver, Understanding Prosody: The Role of Context, Function and Communication

Understanding Prosody: The Role of Context, Function and Communication

Ed. by Niebuhr, Oliver

Series: Language, Context and Cognition 13, De Gruyter

http://www.degruyter.com/view/product/186201?format=G or http://linguistlist.org/pubs/books/get-book.cfm?BookID=63238

The volume represents a state-of-the-art snapshot of the research on prosody for phoneticians, linguists and speech technologists. It covers well-known models and languages. How are prosodies linked to speech sounds? What are the relations between prosody and grammar? What does speech perception tell us about prosody, particularly about the constituting elements of intonation and rhythm? The papers of the volume address questions like these with a special focus on how the notion of context-based coding, the knowledge of prosodic functions and the communicative embedding of prosodic elements can advance our understanding of prosody.

 


5-1-3 Albert Di Cristo, 'La Prosodie de la Parole : Une Introduction', Editions de Boeck-Solal (296 p)

Albert Di Cristo: 'La Prosodie de la Parole : Une Introduction' (Speech Prosody: An Introduction), Editions de Boeck-Solal (296 p).
Contents:
Foreword; Introduction;
Ch. 1: Elements of definition;
Ch. 2: The place of prosody in the language sciences and in the study of communication;
Ch. 3: Prosody on the two sides of oral interpersonal communication (production and comprehension);
Ch. 4: Prosody and the brain;
Ch. 5: The physical substance of prosody;
Ch. 6: Levels of analysis and representation of prosody;
Ch. 7: Theories and models of prosody and their formal apparatus;
Ch. 8: The plural functionality of prosody;
Ch. 9: The relations of prosody to meaning;
Epilogue;
Suggested reading;
Index of terms;
Index of proper names.

5-1-4 Pierre-Yves Oudeyer, 'Aux sources de la parole: auto-organisation et évolution', Odile Jacob

Pierre-Yves Oudeyer, research director at Inria, has just published 'Aux sources de la parole: auto-organisation et évolution' (At the origins of speech: self-organisation and evolution) with Odile Jacob (Sept. 2013).

The book discusses the evolution and acquisition of speech, in children and in robots.

Bringing biology, linguistics, neuroscience and robotic experiments into dialogue, it studies in particular the self-organisation phenomena that allow new languages to form spontaneously in a population of individuals. It presents experiments in which a population of digital robots invents, shapes and negotiates its own speech system, and explains how such robotic experiments can help us understand humans better.

It also presents recent robotic experiments, drawing on new perspectives in artificial intelligence, in which curiosity mechanisms allow a robot to discover by itself its own body, the objects around it, and finally vocal interactions with its peers. In this way the robot's cognitive development self-organises, suggesting new hypotheses for understanding child development.

Book website: http://goo.gl/A6EwTJ

Pierre-Yves Oudeyer
Research Director, Inria
Head of the Flowers team
Inria Bordeaux Sud-Ouest and Ensta-ParisTech, France

5-1-5 Björn Schuller, Anton Batliner, Computational Paralinguistics: Emotion, Affect and Personality in Speech and Language Processing, Wiley, ISBN: 978-1-119-97136-8, 344 pages, November 2013
Björn Schuller, Anton Batliner, Computational Paralinguistics: Emotion, Affect and Personality in Speech and Language Processing, Wiley, ISBN: 978-1-119-97136-8, 344 pages, November 2013

Description:

This book presents the methods, tools and techniques that are currently being used to recognise (automatically) the affect, emotion, personality and everything else beyond linguistics ('paralinguistics') expressed by or embedded in human speech and language. It is the first book to provide such a systematic survey of paralinguistics in speech and language processing. The technology described has evolved mainly from automatic speech and speaker recognition and processing, but also takes into account recent developments within speech signal processing, machine intelligence and data mining. Moreover, the book offers a hands-on approach by integrating actual data sets, software, and open-source utilities, which makes it invaluable as a teaching tool and similarly useful for professionals already in the field.

Key features:

*Provides an integrated presentation of basic research (in phonetics/linguistics and humanities) with state-of-the-art engineering approaches for speech signal processing and machine intelligence.
*Explains the history and state of the art of all of the sub-fields which contribute to the topic of computational paralinguistics.
*Covers the signal processing and machine learning aspects of the actual computational modelling of emotion and personality, and explains the detection process from corpus collection to feature extraction and from model testing to system integration.
*Details aspects of real-world system integration including distribution, weakly supervised learning and confidence measures.
*Outlines machine learning approaches including static, dynamic and context-sensitive algorithms for classification and regression.
*Includes a tutorial on freely available toolkits, such as the open-source 'openEAR' toolkit for emotion and affect recognition co-developed by one of the authors, and a listing of standard databases and feature sets used in the field, allowing for immediate experimentation so that the reader can build an emotion detection model on an existing corpus.

Links:

The book: http://eu.wiley.com/WileyCDA/WileyTitle/productCd-1119971365.html
Table of Contents (pdf): http://media.wiley.com/product_data/excerpt/65/11199713/1119971365-16.pdf
Chapter 1 (pdf): http://media.wiley.com/product_data/excerpt/65/11199713/1119971365-14.pdf

 


5-1-6 Li Deng and Dong Yu, Deep Learning: Methods and Applications, Foundations and Trends in Signal Processing
Foundations and Trends in Signal Processing (www.nowpublishers.com/sig) has published the following issue:

Volume 7, Issue 3-4
Deep Learning: Methods and Applications
By Li Deng and Dong Yu (Microsoft Research, USA)
http://dx.doi.org/10.1561/2000000039

5-2 Databases
5-2-1 ELRA - Language Resources Catalogue - Update (2015-02)
We are happy to announce that 1 new Written Corpus and 3 Evaluation Packages are now available in our catalogue.

ELRA-W0082 88milSMS. A corpus of authentic text messages in French
ISLRN: 024-713-187-947-8
A pluridisciplinary team of linguists and computer scientists collected more than 88,000 authentic French text messages in Montpellier (2011), as part of the sud4science LR project. The text messages were semi-automatically anonymised, before being partially transcoded (into standardised French) and annotated.


ELRA-E0043 CLEFeHealth 2014 Task 3 Evaluation Package
ISLRN: 725-020-897-275-7
The CLEFeHealth 2014 Task 3 Evaluation Package contains data used for the User-centred health information retrieval Shared task at the CLEFeHealth Lab conducted in 2014. Task 3 aimed at evaluating information retrieval to address questions patients may have when reading clinical reports.

ELRA-E0044 REPERE Evaluation Package
ISLRN: 360-758-359-485-0
The REPERE Evaluation Package contains the visual annotation of 60 hours of French news TV shows, for the purpose of person recognition within TV programs. This annotation concerns both persons and written information appearing on screen.
Provided data consists of:
- video files with indexes and with manual transcriptions in XGTF format (Viper),
- audio files compressed in WAV format with transcriptions in TRS format (Transcriber).
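For readers who want to script against such transcriptions: TRS files produced by Transcriber are plain XML, so they can be inspected with a standard parser. The sketch below is a minimal Python illustration; the file name is hypothetical, and the element and attribute names (Turn, speaker, startTime, endTime) follow the Transcriber DTD as commonly documented, so treat them as an assumption to check against the actual files.

  import xml.etree.ElementTree as ET

  # Hypothetical file name; each <Turn> in a Transcriber TRS file carries
  # speaker and start/end time attributes (times in seconds).
  root = ET.parse('example.trs').getroot()
  for turn in root.iter('Turn'):
      print(turn.get('speaker'), turn.get('startTime'), turn.get('endTime'))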

ELRA-E0045 MAURDOR Evaluation Package
ISLRN: 364-018-517-901-2
The MAURDOR project consists of evaluating systems for the automatic processing of written documents. The collected documents are scanned documents (printed, typewritten or handwritten). This package contains 8,129 documents. Once collected, the documents were manually annotated. This package contains the material provided to the evaluation campaign participants:
- Consistent development and test data corresponding to the application concerned;
- Tools for the automatic measurement of system performance;
- A common assessment protocol applicable to each processing stage, along with a complete automatic processing chain for written documents.
The documents are provided in TIFF format and the annotations are provided in XML format.


For more information on the catalogue, please contact Valérie Mapelli (mapelli@elda.org)

Visit our On-line Catalogue: http://catalog.elra.info
Visit the Universal Catalogue: http://universal.elra.info
Archives of ELRA Language Resources Catalogue Updates: http://www.elra.info/en/catalogues/language-resources-announcements/
Follow us on Twitter @ELRANews

5-2-2 ELRA releases free Language Resources

ELRA releases free Language Resources
***************************************************

Anticipating users' expectations, ELRA has decided to offer a large number of resources for free for academic research use. This offer consists of several sets of speech, text and multimodal resources that are regularly released, for free, as soon as legal aspects are cleared. A first set was released in May 2012 on the occasion of LREC 2012. A second set is now being released.

Whenever this is permitted by our licences, please feel free to use these resources for deriving new resources and depositing them with the ELRA catalogue for community re-use.

Over the last decade, ELRA has compiled a large list of resources into its Catalogue of LRs. ELRA has negotiated distribution rights with the LR owners and made such resources available under fair conditions and within a clear legal framework. Following this initiative, ELRA has also worked on LR discovery and identification with a dedicated team which investigated and listed existing and valuable resources in its 'Universal Catalogue', a list of resources that could be negotiated on a case-by-case basis. At LREC 2010, ELRA introduced the LRE Map, an inventory of LRs, whether developed or used, that were described in LREC papers. This huge inventory listed by the authors themselves constitutes the first 'community-built' catalogue of existing or emerging resources, constantly enriched and updated at major conferences.

Considering the latest trends on easing the sharing of LRs, from both legal and commercial points of view, ELRA is taking a major role in META-SHARE, a large European open infrastructure for sharing LRs. This infrastructure will allow LR owners, providers and distributors to distribute their LRs through an additional and cost-effective channel.

To obtain the available sets of LRs, please visit the web page below and follow the instructions given online:
http://www.elra.info/Free-LRs,26.html


5-2-3 LDC Newsletter (February 2015)

 In this newsletter:

Only two weeks left to enjoy 2015 membership savings 

New publications:

Avocado Research Email Collection

GALE Chinese-English Word Alignment and Tagging -- Broadcast Training Part 3

RATS Speech Activity Detection



Only two weeks left to enjoy 2015 membership savings 

There's still time to save on 2015 membership fees. Now through March 2, all organizations will receive a 5% discount when they join for MY2015. MY2014 members are eligible for an additional 5% off the fee when they renew before March 2.

Don’t miss this savings opportunity. Secure your membership today for access to new corpora as well as discounts on our existing catalog of over 600 holdings. 2015 publications include the following:

• CIEMPIESS - Mexican Spanish radio broadcast audio and transcripts

• GALE Phase 3 and 4 data - all tasks and languages

• Mandarin Chinese Phonetic Segmentation and Tone Corpus - phonetic segmentation and tone labels

• RATS Speech Activity Detection - multilanguage audio for robust speech detection and language identification

• SEAME - Mandarin-English code-switching speech

To join, create or sign into your LDC user account, select your preferred membership type from the Catalog, add the item to your bin and follow the check-out process. The Membership Office will apply any discounts. Alternatively, if you have already received a renewal invoice from LDC, you can simply pay against that.

For more information on the benefits of membership, visit the Join LDC page.

 

New publications


(1) Avocado Research Email Collection consists of emails and attachments taken from 279 accounts of a defunct information technology company referred to as 'Avocado'. Most of the accounts are those of Avocado employees; the remainder represent shared accounts such as 'Leads', or system accounts such as 'Conference Room Upper Canada'.

The collection consists of the processed personal folders of these accounts with metadata describing folder structure, email characteristics and contacts, among others. It is expected to be useful for social network analysis, e-discovery and related fields.


The source data for the collection consisted of Personal Storage Table (PST) files for 282 accounts. A PST file is used by MS Outlook to store emails, calendar entries, contact details, and related information. Data was extracted from the PST files using libpst version 0.6.54. Three files produced no output and are not included in the collection. Each account is referred to as a 'custodian', although some of the accounts do not correspond to humans.

The collection is divided into metadata and text. The metadata is represented in XML, with a single top-level XML file listing the custodians, and then one XML file per custodian listing all items extracted from that custodian's PST files. The full XML tree can be read by loading the top-level file with an XML parser that handles directives. All XML metadata files are encoded in UTF-8. The text contains the extracted text of the items in the custodians' folders, with the extracted text for each item being held in a separate file. The text files are then zipped into a zip file per custodian.
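To make that layout concrete, here is a minimal Python sketch of how one might walk such a metadata tree and pull the extracted text for each custodian. All file, element and attribute names below (custodians.xml, custodian, item, textfile) are hypothetical illustrations, not the corpus's actual naming scheme, and the sketch parses the per-custodian files directly rather than relying on the top-level directives.

  import xml.etree.ElementTree as ET
  import zipfile

  # Hypothetical top-level file listing custodians (actual names differ).
  top = ET.parse('metadata/custodians.xml').getroot()
  for custodian in top.iter('custodian'):          # assumed element name
      name = custodian.get('name')
      # One metadata XML file per custodian, listing all extracted items.
      items = ET.parse('metadata/%s.xml' % name).getroot()
      # The extracted text of each item lives in a per-custodian zip file.
      with zipfile.ZipFile('text/%s.zip' % name) as zf:
          for item in items.iter('item'):          # assumed element name
              text = zf.read(item.get('textfile')).decode('utf-8')
              print(name, item.get('id'), len(text))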

Avocado Research Email Collection is distributed on 1 DVD-ROM. 2015 Subscription Members will automatically receive two copies of this corpus provided that they have completed the license agreement.  2015 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for US$1500.

*


(2) GALE Chinese-English Word Alignment and Tagging -- Broadcast Training Part 3 was developed by LDC and contains 242,020 tokens of word aligned Chinese and English parallel text enriched with linguistic tags. This material was used as training data in the DARPA GALE (Global Autonomous Language Exploitation) program. Some approaches to statistical machine translation include the incorporation of linguistic knowledge in word aligned text as a means to improve automatic word alignment and machine translation quality. This is accomplished with two annotation schemes: alignment and tagging. Alignment identifies minimum translation units and translation relations by using minimum-match and attachment annotation approaches. A set of word tags and alignment link tags are designed in the tagging scheme to describe these translation units and relations. Tagging adds contextual, syntactic and language-specific features to the alignment annotation.

This release consists of Chinese source broadcast conversation (BC) and broadcast news (BN) programming collected by LDC in 2008 and 2009. The distribution by genre, words, character tokens and segments appears below:

Language | Genre | Files | Words   | CharTokens | Segments
Chinese  | BC    |  92   |  67,354 | 101,032    | 2,714
Chinese  | BN    |  34   |  93,992 | 140,988    | 3,314
Total    |       | 126   | 161,346 | 242,020    | 6,028

Note that all token counts are based on the Chinese data only. One token is equivalent to one character and one word is equivalent to 1.5 characters (hence 161,346 words × 1.5 ≈ 242,020 character tokens).

The Chinese word alignment tasks consisted of the following components:

  • Identifying, aligning, and tagging eight different types of links
  • Identifying, attaching, and tagging local-level unmatched words
  • Identifying and tagging sentence/discourse-level unmatched words
  • Identifying and tagging all instances of Chinese 的 (DE) except when they were a part of a semantic link


GALE Chinese-English Word Alignment and Tagging -- Broadcast Training Part 3 is distributed via web download. 2015 Subscription Members will automatically receive two copies of this corpus.  2015 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for US$1750.

*


(3) RATS Speech Activity Detection was developed by LDC and is comprised of approximately 3,000 hours of Levantine Arabic, English, Farsi, Pashto, and Urdu conversational telephone speech with automatic and manual annotation of speech segments. The corpus was created to provide training, development and initial test sets for the Speech Activity Detection (SAD) task in the DARPA RATS (Robust Automatic Transcription of Speech) program.

The goal of the RATS program was to develop human language technology systems capable of performing speech detection, language identification, speaker identification and keyword spotting on the severely degraded audio signals that are typical of various radio communication channels, especially those employing various types of handheld portable transceiver systems. To support that goal, LDC assembled a system for the transmission, reception and digital capture of audio data that allowed a single source audio signal to be distributed and recorded over eight distinct transceiver configurations simultaneously. Those configurations included three frequencies -- high, very high and ultra high -- variously combined with amplitude modulation, frequency hopping spread spectrum, narrow-band frequency modulation, single-side-band or wide-band frequency modulation. Annotations on the clear source audio signal, e.g., time boundaries for the duration of speech activity, were projected onto the corresponding eight channels recorded from the radio receivers.

The source audio consists of conversational telephone speech recordings collected by LDC: (1) data collected for the RATS program from Levantine Arabic, Farsi, Pashto and Urdu speakers; and (2) material from the Fisher English (LDC2004S13, LDC2005S13) and Fisher Levantine Arabic (LDC2007S02) telephone studies, as well as from CALLFRIEND Farsi (LDC2014S01).

Annotation was performed in three steps. LDC's automatic speech activity detector was run against the audio data to produce a speech segmentation for each file. Manual first pass annotation was then performed as a quick correction of the automatic speech activity detection output. Finally, in a manual second pass annotation step, annotators reviewed first pass output and made adjustments to segments as needed.

All audio files are presented as single-channel, 16-bit PCM, 16000 samples per second; lossless FLAC compression is used on all files; when uncompressed, the files have typical 'MS-WAV' (RIFF) file headers.
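As a minimal illustration of handling this delivery format, the following Python sketch reads one FLAC file with the soundfile library (which decodes FLAC transparently) and checks the stated sample format; the file name is hypothetical.

  import soundfile as sf

  # Hypothetical file name; RATS audio is FLAC-compressed 16-bit PCM mono.
  audio, rate = sf.read('example_channel_A.flac', dtype='int16')
  assert rate == 16000        # 16000 samples per second
  assert audio.ndim == 1      # single channel
  print(len(audio) / rate, 'seconds of audio')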

RATS Speech Activity Detection is distributed on 1 hard drive.  2015 Subscription Members will automatically receive one copy of this corpus.  2015 Standard Members may request a copy as part of their 16 free membership corpora.  Non-members may license this data for US$7500.

 

 


5-2-4 Appen Butler Hill

 

Appen Butler Hill

A global leader in linguistic technology solutions

RECENT CATALOG ADDITIONS—MARCH 2012

1. Speech Databases

1.1 Telephony

Language | Database Type | Catalogue Code | Speakers | Status
Bahasa Indonesia | Conversational | BAH_ASR001 | 1,002 | Available
Bengali | Conversational | BEN_ASR001 | 1,000 | Available
Bulgarian | Conversational | BUL_ASR001 | 217 | Available shortly
Croatian | Conversational | CRO_ASR001 | 200 | Available shortly
Dari | Conversational | DAR_ASR001 | 500 | Available
Dutch | Conversational | NLD_ASR001 | 200 | Available
Eastern Algerian Arabic | Conversational | EAR_ASR001 | 496 | Available
English (UK) | Conversational | UKE_ASR001 | 1,150 | Available
Farsi/Persian | Scripted | FAR_ASR001 | 789 | Available
Farsi/Persian | Conversational | FAR_ASR002 | 1,000 | Available
French (EU) | Conversational | FRF_ASR001 | 563 | Available
French (EU) | Voicemail | FRF_ASR002 | 550 | Available
German | Voicemail | DEU_ASR002 | 890 | Available
Hebrew | Conversational | HEB_ASR001 | 200 | Available shortly
Italian | Conversational | ITA_ASR003 | 200 | Available shortly
Italian | Voicemail | ITA_ASR004 | 550 | Available
Kannada | Conversational | KAN_ASR001 | 1,000 | In development
Pashto | Conversational | PAS_ASR001 | 967 | Available
Portuguese (EU) | Conversational | PTP_ASR001 | 200 | Available shortly
Romanian | Conversational | ROM_ASR001 | 200 | Available shortly
Russian | Conversational | RUS_ASR001 | 200 | Available
Somali | Conversational | SOM_ASR001 | 1,000 | Available
Spanish (EU) | Voicemail | ESO_ASR002 | 500 | Available
Turkish | Conversational | TUR_ASR001 | 200 | Available
Urdu | Conversational | URD_ASR001 | 1,000 | Available

1.2 Wideband

Language | Database Type | Catalogue Code | Speakers | Status
English (US) | Studio | USE_ASR001 | 200 | Available
French (Canadian) | Home/Office | FRC_ASR002 | 120 | Available
German | Studio | DEU_ASR001 | 127 | Available
Thai | Home/Office | THA_ASR001 | 100 | Available
Korean | Home/Office | KOR_ASR001 | 100 | Available

2. Pronunciation Lexica

Appen Butler Hill has considerable experience in providing a variety of lexicon types. These include:

Pronunciation Lexica providing phonemic representation, syllabification, and stress (primary and secondary as appropriate)

Part-of-speech tagged Lexica providing grammatical and semantic labels

Other reference text-based materials including spelling/mis-spelling lists, spell-check dictionaries, mappings of colloquial language to standard forms, orthographic normalization lists.

Over a period of 15 years, Appen Butler Hill has generated a significant volume of licensable material for a wide range of languages. For holdings information in a given language or to discuss any customized development efforts, please contact: sales@appenbutlerhill.com

3. Named Entity Corpora

Language | Catalogue Code | Words
Arabic | ARB_NER001 | 500,000
English | ENI_NER001 | 500,000
Farsi/Persian | FAR_NER001 | 500,000
Korean | KOR_NER001 | 500,000
Japanese | JPY_NER001 | 500,000
Russian | RUS_NER001 | 500,000
Mandarin | MAN_NER001 | 500,000
Urdu | URD_NER001 | 500,000

These NER corpora contain text material from a variety of sources and are tagged for the following Named Entities: Person, Organization, Location, Nationality, Religion, Facility, Geo-Political Entity, Titles, Quantities.


4. Other Language Resources

Morphological Analyzers – Farsi/Persian & Urdu

Arabic Thesaurus

Language Analysis Documentation – multiple languages

 

For additional information on these resources, please contact: sales@appenbutlerhill.com

5. Customized Requests and Package Configurations

Appen Butler Hill is committed to providing a low-risk, high-quality, reliable solution and has worked in 130+ languages to date, supporting both large global corporations and Government organizations.

We would be glad to discuss any customized requests or package configurations and prepare a customized proposal to meet your needs.

6. Contact Information

Prithivi Pradeep

Business Development Manager

ppradeep@appenbutlerhill.com

+61 2 9468 6370

Tom Dibert

Vice President, Business Development, North America

tdibert@appenbutlerhill.com

+1-315-339-6165

www.appenbutlerhill.com


5-2-5 OFROM: the first corpus of French spoken in French-speaking Switzerland

We would like to announce the release of OFROM, the first corpus of French spoken in French-speaking Switzerland (Suisse romande). In its current version, the archive contains about 15 hours of speech, transcribed in standard orthography with the Praat software. A concordancer makes it possible to search the corpus and to download the sound extracts associated with the transcriptions.

To access the data and read a fuller description of the corpus, please visit: http://www.unine.ch/ofrom

5-2-6 Real-world 16-channel noise recordings

We are happy to announce the release of DEMAND, a set of real-world
16-channel noise recordings designed for the evaluation of microphone
array processing techniques.

http://www.irisa.fr/metiss/DEMAND/

1.5 h of noise data were recorded in 18 different indoor and outdoor
environments and are available under the terms of the Creative Commons Attribution-ShareAlike License.
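As a quick illustration of how such recordings might be used in a robustness experiment, here is a Python sketch that mixes one DEMAND channel with a speech signal at a chosen signal-to-noise ratio. The file names are hypothetical and the energy-based scaling is one common convention, not something prescribed by the DEMAND release.

  import numpy as np
  import soundfile as sf

  speech, fs = sf.read('speech.wav')          # hypothetical mono speech file
  noise, fs_n = sf.read('DKITCHEN_ch01.wav')  # hypothetical DEMAND channel file
  assert fs == fs_n
  noise = noise[:len(speech)]                 # truncate noise to speech length

  snr_db = 5.0                                # target signal-to-noise ratio
  gain = np.sqrt(np.sum(speech**2) / (np.sum(noise**2) * 10**(snr_db / 10)))
  sf.write('mixture.wav', speech + gain * noise, fs)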

Joachim Thiemann (CNRS - IRISA)
Nobutaka Ito (University of Tokyo)
Emmanuel Vincent (Inria Nancy - Grand Est)


5-2-7 Support for finalising oral or multimodal corpora for distribution, promotion and long-term archiving

Support for finalising oral or multimodal corpora for distribution, promotion and long-term archiving

The IRCOM consortium of the TGIR Corpus and the EquipEx ORTOLANG are joining forces to offer technical and financial support for finalising oral or multimodal data corpora with a view to their distribution and long-term preservation through the EquipEx ORTOLANG. This call does not concern the creation of new corpora, but the finalisation of existing corpora that are not available in electronic form. By finalisation we mean deposit with a public digital repository and entry into a long-term archiving circuit. In this way, the speech data that have been enriched by your research can in turn be reused, cited and enriched cumulatively to enable the development of new knowledge, under the conditions of use that you choose (a licence is selected for each deposited corpus).

This call is subject to several conditions (see below), and the financial support is limited to 3,000 euros per project. Requests will be processed in the order in which they are received by IRCOM. Requests from EA units or small teams without dedicated corpus engineering support will be given priority. Requests may be submitted from 1 September 2013 to 31 October 2013. Funding decisions rest with the IRCOM steering committee. Requests not processed in 2013 may be processed in 2014. If you have doubts about the eligibility of your project, do not hesitate to contact us so that we can examine your request and adapt our future offers.

To compensate for the wide disparity in the computing skills of the people and working groups producing corpora, IRCOM offers personalised support for corpus finalisation. This support will be provided by an IRCOM engineer according to the requests made, and adapted to the type of need, whether technical or financial.

The conditions for proposing a corpus for finalisation and obtaining IRCOM support are:

• Being able to take all decisions concerning the use and distribution of the corpus (intellectual property in particular).
• Having all the information concerning the sources of the corpus and the consent of the people recorded or filmed.
• Granting a right of free use of the data, or at a minimum free access for scientific research.

Requests may concern any type of processing: processing of nearly finalised corpora (conversion, anonymisation), alignment of corpora already transcribed, conversion from word-processing formats, digitisation of older media. For any request requiring substantial manual work, applicants must commit human or financial resources matching those provided by IRCOM and ORTOLANG.

IRCOM is aware of the exceptional and exploratory nature of this initiative. It should also be recalled that this funding is reserved for corpora that are already largely constituted and cannot go to creations ex nihilo. For these reasons of limited resources, the corpus proposals most advanced in their realisation may be processed first, in agreement with the IRCOM steering committee. There is, however, no 'theoretical' limit on the requests that may be made, since IRCOM can redirect requests outside its competence to other contacts.

Proposals in response to this call should be sent to ircom.appel.corpus@gmail.com. Proposals must use the two-page form below. In all cases, a personalised answer will be sent by IRCOM.

These proposals must present the proposed corpora, information on usage and ownership rights, and the nature of the formats or media used.

This call is organised under the responsibility of IRCOM, with joint financial participation from IRCOM and the EquipEx ORTOLANG.

For any further information, note that the IRCOM website (http://ircom.corpus-ir.fr) is open and offers resources to the community: a glossary, an inventory of units and corpora, software resources (tutorials, comparisons, conversion tools), working group activities, training news, etc.

IRCOM invites units to inventory their oral and multimodal corpora (70 projects already listed) in order to give better visibility to the resources already available, even if they are not all finalised.

The IRCOM steering committee

Please use this form to respond to the call. Thank you.

Response to the call for the finalisation of an oral or multimodal corpus

Corpus name:

Contact person:
Email address:
Telephone number:

Nature of the corpus data:

Are there recordings?
Which media? Audio, video, other...
What is the total length of the recordings? Number of cassettes, number of hours, etc.
What type of medium?
What format (if known)?

Are there transcriptions?
What format? (paper, word processor, transcription software)
What quantity (in hours, number of words, or number of transcriptions)?

Do you have metadata (statement of copyright and usage rights)?

Do you have a precise description of the people recorded?

Do you have evidence of informed consent from the people who were recorded? In (roughly) what year did the recordings take place?

What is the language of the recordings?

Does the corpus include recordings of children or of people with a language disorder or pathology?
If so, which population?

So that we can advise you efficiently and as quickly as possible, we need examples of the transcriptions or recordings in your possession. We will contact you about this, but you may already send us by email a sample of the data you have (transcriptions, metadata, the address of a web page containing the recordings).

We thank you in advance for your interest in our proposal. For any further information, please contact Martine Toda (martine.toda@ling.cnrs.fr) or ircom.appel.corpus@gmail.com.


5-2-8 Rhapsodie: a Prosodic and Syntactic Treebank for Spoken French

Rhapsodie: a Prosodic and Syntactic Treebank for Spoken French

We are pleased to announce that Rhapsodie, a syntactic and prosodic treebank of spoken French created with the aim of modeling the interface between prosody, syntax and discourse in spoken French is now available at   http://www.projet-rhapsodie.fr/

The Rhapsodie treebank is made up of 57 short samples of spoken French (5 minutes long on average, amounting to 3 hours of speech and a 33,000-word corpus), endowed with an orthographic transcription that is phoneme-aligned with the sound.

The corpus is representative of different genres (private and public speech; monologues and dialogues; face-to-face interviews and broadcasts; more or less interactive discourse; descriptive, argumentative and procedural samples, variations in planning type).

The corpus samples have mainly been drawn from existing corpora of spoken French, and partly created within the frame of the Rhapsodie project. We would especially like to thank the coordinators of the CFPP2000, PFC, ESLO and C-Prom projects, as well as Piet Mertens, Mathieu Avanzi, Anne Lacheret and Nicolas Obin.

The sound samples (waves, MP3, cleaned and stylized pitch), the orthographic transcriptions (txt), the macrosyntactic annotations (txt), the prosodic annotations (xml, textgrid), as well as the metadata (xml and html) can be freely downloaded under the terms of the Creative Commons licence Attribution - Noncommercial - Share Alike 3.0 France.

Microsyntactic annotation will be available soon.

The metadata are also searchable online through a browser.

The prosodic annotation can be explored online through the Rhapsodie Query Language.

Tutorials for transcription, annotation and the Rhapsodie Query Language are available on the site.

 

The Rhapsodie team (Modyco, Université Paris Ouest Nanterre):

Sylvain Kahane, Anne Lacheret, Paola Pietrandrea, Atanas Tchobanov, Arthur Truong.

Partners: IRCAM (Paris), LATTICE (Paris), LPL (Aix-en-Provence), CLLE-ERSS (Toulouse).


5-2-9 Annotation of “Hannah and her sisters” by Woody Allen

We have created and made publicly available a dense audio-visual person-oriented ground-truth annotation of a feature movie (100 minutes long): “Hannah and her sisters” by Woody Allen.

The annotation includes:

• Face tracks in video (densely annotated, i.e., in each frame, and person-labeled)
• Speech segments in audio (person-labeled)
• Shot boundaries in video

The annotation can be useful for evaluating:

• Person-oriented video-based tasks (e.g., face tracking, automatic character naming)
• Person-oriented audio-based tasks (e.g., speaker diarization or recognition)
• Person-oriented multimodal-based tasks (e.g., audio-visual character naming)



Details on the Hannah dataset, and access to it, can be found here:

https://research.technicolor.com/rennes/hannah-home/

https://research.technicolor.com/rennes/hannah-download/



Acknowledgments:

This work is supported by AXES EU project: http://www.axes-project.eu/

Alexey Ozerov (Alexey.Ozerov@technicolor.com)

Jean-Ronan Vigouroux,

Louis Chevallier

Patrick Pérez

Technicolor Research & Innovation



 


5-2-10 French TTS

Text-to-Speech Synthesis: over an hour of speech synthesis samples from 1968 to 2001, by 25 French, Canadian, US, Belgian, Swedish and Swiss systems.

'33 ans de synthèse de la parole à partir du texte: une promenade sonore (1968-2001)' (33 years of text-to-speech synthesis in French: an audio tour, 1968-2001), by Christophe d'Alessandro. Article published in Volume 42, No. 1/2001 of Traitement Automatique des Langues (TAL, Editions Hermes), pp. 297-321.

Posted at: http://groupeaa.limsi.fr/corpus:synthese:start


5-2-11 Google's Language Model benchmark
 Here is a brief description of the project.

'The purpose of the project is to make available a standard training and test setup for language modeling experiments.

The training/held-out data was produced from a download at statmt.org using a combination of Bash shell and Perl scripts distributed here.

This also means that your results on this data set are reproducible by the research community at large.

Besides the scripts needed to rebuild the training/held-out data, it also makes available log-probability values for each word in each of ten held-out data sets, for each of the following baseline models:

  • unpruned Katz (1.1B n-grams),
  • pruned Katz (~15M n-grams),
  • unpruned Interpolated Kneser-Ney (1.1B n-grams),
  • pruned Interpolated Kneser-Ney (~15M n-grams)

 

Happy benchmarking!'
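As a small worked example of how per-word log-probabilities of this kind are usually turned into a perplexity figure: perplexity is 10 to the power of the negative mean base-10 log-probability per word. The Python sketch below assumes a hypothetical text file with one log10 probability per line; that format is an assumption for illustration, not the benchmark's actual layout.

  # Hypothetical file: one base-10 log-probability per word, one per line.
  logprobs = [float(line) for line in open('heldout_logprobs.txt')]

  # Perplexity = 10 ** (negative mean log10 probability per word).
  ppl = 10 ** (-sum(logprobs) / len(logprobs))
  print('%d words, perplexity %.1f' % (len(logprobs), ppl))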


5-2-12 International Standard Language Resource Number (ISLRN) (ELRA Press release)

Press Release - Immediate - Paris, France, December 13, 2013

Establishing the International Standard Language Resource Number (ISLRN)

12 major NLP organisations announce the establishment of the ISLRN, a Persistent Unique Identifier, to be assigned to each Language Resource.

On November 18, 2013, 12 NLP organisations agreed to announce the establishment of the International Standard Language Resource Number (ISLRN), a Persistent Unique Identifier to be assigned to each Language Resource. Experiment replicability, an essential feature of scientific work, would be enhanced by such a unique identifier. Set up by ELRA, LDC and AFNLP/Oriental-COCOSDA, the ISLRN Portal will provide unique identifiers using a standardised nomenclature, as a service free of charge for all Language Resource providers. It will be supervised by a steering committee composed of representatives of participating organisations and enlarged whenever necessary.

For more information on ELRA and the ISLRN, please contact: Khalid Choukri (choukri@elda.org)

For more information on ELDA, please contact: Hélène Mazo (mazo@elda.org)

ELRA

55-57, rue Brillat Savarin

75013 Paris (France)

Tel.: +33 1 43 13 33 33

Fax: +33 1 43 13 33 30


5-2-13 ISLRN new portal
Opening of the ISLRN Portal
ELRA, LDC and AFNLP/Oriental-COCOSDA announce the opening of the ISLRN Portal at www.islrn.org.


Further to the establishment of the International Standard Language Resource Number (ISLRN) as a unique and universal identification schema for Language Resources on November 18, 2013, ELRA, LDC and AFNLP/Oriental-COCOSDA now announce the opening of the ISLRN Portal (www.islrn.org). As a service free of charge for all Language Resource providers and under the supervision of a steering committee composed of representatives of participating organisations, the ISLRN Portal provides unique identifiers using a standardised nomenclature.

Overview
The 13-digit ISLRN format is: XXX-XXX-XXX-XXX-X. It can be allocated to any Language Resource; its composition is neutral and does not include any semantics in reference to the type or nature of the Language Resource. The ISLRN is a randomly created number whose final check digit is validated by the Verhoeff algorithm.
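To make the check-digit scheme concrete, here is a short Python sketch that generates and validates a 13-digit ISLRN-style number using the standard Verhoeff tables; it illustrates the Verhoeff check itself and is not official portal code.

  # Standard Verhoeff tables: D is the dihedral-group D5 multiplication table,
  # P the position-dependent permutation, INV the inverse table; digits are
  # processed right to left.
  D = [[0,1,2,3,4,5,6,7,8,9],[1,2,3,4,0,6,7,8,9,5],[2,3,4,0,1,7,8,9,5,6],
       [3,4,0,1,2,8,9,5,6,7],[4,0,1,2,3,9,5,6,7,8],[5,9,8,7,6,0,4,3,2,1],
       [6,5,9,8,7,1,0,4,3,2],[7,6,5,9,8,2,1,0,4,3],[8,7,6,5,9,3,2,1,0,4],
       [9,8,7,6,5,4,3,2,1,0]]
  P = [[0,1,2,3,4,5,6,7,8,9],[1,5,7,6,2,8,3,0,9,4],[5,8,0,3,7,9,6,1,4,2],
       [8,9,1,6,0,4,3,5,2,7],[9,4,5,3,1,2,6,8,7,0],[4,2,8,6,5,7,3,9,0,1],
       [2,7,9,3,8,0,6,4,1,5],[7,0,4,6,9,1,3,2,5,8]]
  INV = [0,4,3,2,1,5,6,7,8,9]

  def verhoeff_checksum(digits):
      c = 0
      for i, ch in enumerate(reversed(digits)):
          c = D[c][P[i % 8][int(ch)]]
      return c

  def append_check_digit(payload):      # 12 payload digits -> 13 digits
      return payload + str(INV[verhoeff_checksum(payload + '0')])

  def is_valid_islrn(islrn):            # e.g. 'XXX-XXX-XXX-XXX-X'
      digits = islrn.replace('-', '')
      return len(digits) == 13 and digits.isdigit() and verhoeff_checksum(digits) == 0

  n = append_check_digit('123456789012')  # arbitrary 12-digit payload
  formatted = '-'.join([n[0:3], n[3:6], n[6:9], n[9:12], n[12]])
  print(formatted, is_valid_islrn(formatted))  # always prints ... True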

Two types of external players may interact with the ISLRN Portal: Visitors and Providers. Visitors may browse the web site and search for the ISLRN of a given Language Resource by its name or by its number if it exists. Providers are registered and own credentials. They can request a new ISLRN for a given Language Resource. A provider has the possibility to become certified, after moderation, in order to be able to import metadata in XML format.

The functionalities that can be accessed by Visitors are:

-          Identify a language resource according to its ISLRN
-          Identify an ISLRN by the name of a language resource
-          Get information about ISLRN, FAQ, Basic Metadata, Legal Information
-          View last 5 accepted resources (“What’s new” block on home page)
-          Sign up to become a provider

The functionalities that can be accessed by Providers, once they have signed up, are:

-          Log in
-          Request an ISLRN according to the metadata of a given resource
-          Request to become a certified provider so as to import XML files containing metadata
-          Import one or more metadata descriptions in XML to request ISLRN(s) (only for certified providers)
-          Edit pending requests
-          Access previous requests
-          Contact a Moderator or an Administrator
-          Edit Providers’ own profile

ISLRN requests are handled by moderators within 5 working days.
Contact: islrn@elda.org

Background
The International Standard Language Resource Number (ISLRN) is a unique and universal identification schema for Language Resources which provides Language Resources with a unique identifier using a standardised nomenclature. It also ensures that Language Resources are correctly identified, and consequently recognised with proper references for their usage in applications in R&D projects, product evaluation and benchmarking, as well as in documents and scientific papers. Moreover, it is a major step in the interconnected world that Human Language Technologies (HLT) has become: unique resources must be identified as they are, and meta-catalogues need a common identification format to manage data correctly.

The ISLRN does not intend to replace local and specific identifiers; it is not meant to be a legal deposit or an obligation, but rather an essential best practice. For instance, a resource that is distributed by several data centres will still have the “local” data-centre identifier but will have a unique ISLRN.

********************************************************************
About ELRA
The European Language Resources Association (ELRA) is a non-profit making organisation founded by the European Commission in 1995, with the mission of providing a clearing house for language resources and promoting Human Language Technologies (HLT). To find out more about ELRA, please visit www.elra.info.

About LDC
Founded in 1992, the Linguistic Data Consortium (LDC) is an open consortium of universities, companies and government research laboratories. It creates, collects and distributes speech and text databases, lexicons, and other resources for research and development purposes. The University of Pennsylvania is the LDC's host institution. To find out more about LDC, please visit www.ldc.upenn.edu.

About AFNLP
The mission of the Asian Federation of Natural Language Processing (AFNLP) is to promote and enhance R&D relating to the computational analysis and the automatic processing of all languages of importance to the Asian region by assisting and supporting like-minded organizations and institutions through information sharing, conference organization, research and publication co-ordination, and other forms of support. To find out more about AFNLP, please visit www.afnlp.org.

About Oriental-COCOSDA
The International Committee for the Co-ordination and Standardisation of Speech Databases and Assessment Techniques, Oriental-COCOSDA, has been established to encourage and promote international interaction and cooperation in the foundation areas of Spoken Language Processing, especially for Speech Input/Output. To find out more about Oriental-COCOSDA, please visit our web site: www.cocosda.org


5-2-14 Speechocean – update (February 2015)

Speechocean – update (Feb 2015):

 

Speechocean: A global language resources and data services supplier

 

Speechocean has over 500 large-scale databases available in 110+ languages and accents, covering desktop, in-car, telephony and tablet PC platforms. Our data repository is enormous and diversified, and includes ASR databases, TTS databases, lexica, text corpora, etc.

 

Speechocean is glad to announce more resources that have been released:

1. ASR Databases

Speechocean provides corpora in 110+ regional languages, available in a variety of formats, situational styles, scene environments and platform systems, covering in-car speech recognition corpora, mobile phone speech recognition corpora, fixed-line speech recognition corpora, desktop speech recognition corpora, etc. This month we are glad to introduce our most popular databases, which were made for tuning and testing speech recognition systems in ASR applications.

1.1 In-Car

Serial Number | Kingline Data Names | Sound Parameter
King-ASR-125 | Japanese Speech Recognition Database (In-Car), 300 Speakers | 48k, 16bit, four channels
King-ASR-120 | Chinese Mandarin Speech Recognition Database (In-Car), 1,200 Speakers | 16k, 16bit, four channels

1.2 Mobile

Serial Number | Kingline Data Names | Sound Parameter
King-ASR-216 | Chinese Mandarin Speech Recognition Database - Sentences (Mobile), 5,048 Speakers | 16k, 16bit, one channel
King-ASR-044 | Taiwanese Speech Recognition Database (Mobile), 654 Speakers | 16k, 16bit, one channel
King-ASR-113 | Chinese Mandarin Speech Recognition Database (Mobile), 4,000 Speakers | 16k, 16bit, one channel

1.3 Telephony

Serial Number | Kingline Data Names | Sound Parameter
King-ASR-222 | Japanese Speech Recognition Database - Spontaneous Dialog (Telephony), 200 Speakers | 8k, 16bit
King-ASR-027 | Chinese Mandarin Speech Recognition Database - Spontaneous Speech (Telephone), 649 Speakers | 8k, 16bit

1.4 Desktop

Serial Number | Kingline Data Names | Sound Parameter
King-ASR-087 | Taiwanese Speech Recognition Database - Sentences (Desktop), 200 Speakers | 44.1k, 16bit
King-ASR-111 | Mandarin Speech Recognition Database - Spontaneous Dialog (Desktop), 1,013 Speakers | 44.1k, 16bit
King-ASR-175 | Japanese Speech Recognition Database - Sentences (Desktop), 505 Speakers | 44.1k, 16bit

2. TTS Databases

Speechocean licenses a variety of databases in more than 40 languages for speech synthesis (broadcast speech, emotional speech, etc.), which can be used with different algorithms.

Serial No. | Kingline Data Names | Sound Parameter | Recording Hours
King-TTS-020 | Russian Speech Synthesis Database (Male) | 44.1k, 16bit, two channels | 12.3
King-TTS-027 | Taiwanese Speech Synthesis Database (Female) | 44.1k, 16bit, two channels | 9.8
King-TTS-030 | British English Speech Synthesis Database (Female) | 44.1k, 16bit, two channels | 12.6

 

 

3. Text Corpora

Speechocean licenses many kinds of text corpora in many languages, which are well suited for language model training.

ID | Kingline Data Names | Size
King-NLP-022 | Japanese Name Variants Corpus | 4,000,000 words
King-NLP-023 | Japanese Lexical Database | 290,000 words
King-NLP-034 | Japanese Organizations Names Corpus | 580,000 words

 

4. Lexica

Speechocean builds pronunciation lexica in many languages, which can be licensed to customers.

No. | Name | Phoneset
King-Lexicon-032 | Urdu Pronunciation Lexicon | SAMPA
King-Lexicon-033 | Vietnamese Pronunciation Lexicon | SO
King-Lexicon-037 | Brazilian Portuguese Pronunciation Lexicon | SAMPA

 

Contact Information

Xianfeng Cheng

Business Manager of Commercial Department

Tel: +86-10-62660928; +86-10-62660053 ext.8080

Mobile: +86 13681432590

Skype: xianfeng.cheng1

Email: chengxianfeng@speechocean.com; cxfxy0cxfxy0@gmail.com

Website: www.speechocean.com


5-2-15 kidLUCID: London UCL Children’s Clear Speech in Interaction Database

kidLUCID: London UCL Children’s Clear Speech in Interaction Database

We are delighted to announce the availability of a new corpus of spontaneous speech for children aged 9 to 14 years inclusive, produced as part of the ESRC-funded project on ‘Speaker-controlled Variability in Children's Speech in Interaction’ (PI: Valerie Hazan).

Speech recordings (a total of 288 conversations) are available for 96 child participants (46M, 50F, age range 9;0 to 15;0 years), all native southern British English speakers. Participants were recorded in pairs while completing the diapix spot-the-difference picture task, in which the pair verbally compared two scenes, only one of which was visible to each talker. High-quality digital recordings were made in sound-treated rooms. For each conversation, a stereo audio recording is provided with each speaker on a separate channel, together with a Praat TextGrid containing separate word- and phoneme-level segmentations for each speaker.
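As an illustrative sketch of how the word- and phoneme-level segmentations could be read programmatically, the Python snippet below uses the third-party 'textgrid' package; both the package choice and the file name are assumptions for illustration, and any TextGrid reader would do.

  import textgrid  # third-party package, e.g. 'pip install textgrid'

  # Hypothetical file name; one TextGrid accompanies each recording.
  tg = textgrid.TextGrid.fromFile('pair01_NOB.TextGrid')

  for tier in tg.tiers:          # e.g. word and phone tiers for each speaker
      labelled = [iv for iv in tier if iv.mark.strip()]  # assumes interval tiers
      print(tier.name, len(labelled), 'labelled intervals')
      for iv in labelled[:3]:    # first few intervals as a sample
          print('  %.3f-%.3f %s' % (iv.minTime, iv.maxTime, iv.mark))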

There are six recordings per speaker pair made in the following conditions:

  • NOB (No barrier): both speakers heard each other normally

  • VOC (Vocoder): one conversational partner heard the other's speech after it had been processed in real time through a noise-excited three channel vocoder

  • BAB (Babble): one conversational partner heard the other's speech in a background of adult multi-talker babble at an approximate SNR of 0 dB.

The kidLUCID corpus is available online within the OSCAAR (Online Speech/Corpora Archive and Analysis Resource) archive (https://oscaar.ci.northwestern.edu/). Free access can be requested for research purposes. Further information about the project can be found at: http://www.ucl.ac.uk/pals/research/shaps/research/shaps/research/clear-speech-strategies

This work was supported by Economic and Social Research Council Grant No. RES-062- 23-3106.


5-2-16 Robust speech datasets and ASR software tools


We are happy to announce the release of a table of 44 publicly available robust speech processing datasets and a table of 4 ASR software tools on the wiki of ISCA's Robust Speech Processing SIG:
https://wiki.inria.fr/rosp/Datasets#Speech_datasets
https://wiki.inria.fr/rosp/Software#Automatic_speech_recognition

We hope that these tables will promote wider dissemination of the datasets and software tools available in our community and help newcomers select the most suitable dataset or software for a given experiment. We plan to provide additional tables on, e.g., room impulse response datasets or speaker recognition software in the future.

We highly welcome your input, especially additional tables/entries and reproducible baselines for each dataset. It just takes a few minutes thanks to the simple wiki interface.

For more information about joining the SIG and contributing, see
https://wiki.inria.fr/rosp/

Jonathan Le Roux, Emmanuel Vincent, and Ramon Astudillo


5-2-17 International Standard Language Resource Number (ISLRN) implemented by ELRA and LDC

ELRA and LDC partner to implement ISLRN process and assign identifiers to all the Language Resources in their catalogues.

 

Following the meeting of the largest NLP organizations, the NLP12, and their endorsement of the International Standard Language Resource Number (ISLRN), ELRA and LDC partnered to implement the ISLRN process and to assign identifiers to all the Language Resources (LRs) in their catalogues. The ISLRN web portal was designed to enable the assignment of unique identifiers as a service free of charge for all Language Resource providers. To enhance the use of ISLRN, ELRA and LDC have collaborated to provide the ISLRN 13-digit ID to all the Language Resources distributed in their respective catalogues. Anyone who is searching the ELRA and LDC catalogues can see that each Language Resource is now identified by both the data centre ID and the ISLRN number. All providers and users of such LRs should refer to the latter in their own publications and whenever referring to the LR.

 

ELRA and LDC will continue their joint involvement in ISLRN through active participation in this web service.

 

Visit the ELRA and LDC catalogues, respectively at http://catalogue.elra.info and https://catalog.ldc.upenn.edu

 

Background

The International Standard Language Resource Number (ISLRN) aims to provide unique identifiers using a standardised nomenclature, thus ensuring that LRs are correctly identified, and consequently, recognised with proper references for their usage in applications within R&D projects, product evaluation and benchmarking, as well as in documents and scientific papers. Moreover, this is a major step in the networked and shared world that Human Language Technologies (HLT) has become: unique resources must be identified as such and meta-catalogues need a common identification format to manage data correctly.

The ISLRN portal can be accessed at http://www.islrn.org.

 

***About NLP12***

Representatives of the major Natural Language Processing and Computational Linguistics organizations met in Paris on 18 November 2013 to harmonize and coordinate their activities within the field.
The results of this coordination are expressed in the Paris Declaration: http://www.elra.info/NLP12-Paris-Declaration.html.

 

*** About ELRA ***
The European Language Resources Association (ELRA) is a non-profit making organisation founded by the European Commission in 1995, with the mission of providing a clearing house for language resources and promoting Human Language Technologies (HLT).
To find out more about ELRA, please visit our web site: http://www.elra.info

*** About LDC ***

The Linguistic Data Consortium (LDC) is an open consortium of universities, libraries, corporations and research laboratories that creates and distributes linguistic resources for language-related education, research and technology development.

To find out more about LDC, please visit our web site: https://www.ldc.upenn.edu


For more information, please contact: admin@islrn.org

Top

5-2-18ISLRN adopted by Joint Research Center (JRC) of the European Commission

JRC, the EC's Joint Research Centre, an important LR player: First to adopt the ISLRN initiative

 

The Joint Research Centre (JRC), the European Commission's in-house science service, is the first organisation to adopt the International Standard Language Resource Number (ISLRN) initiative and has requested 13-digit ISLRN unique identifiers for its Language Resources (LRs).
Thus, anyone who is using JRC LRs may now refer to this number in their own publications.

 

The current JRC LRs with an ISLRN ID can be downloaded from https://ec.europa.eu/jrc/en/language-technologies.

Background

The International Standard Language Resource Number (ISLRN) provides unique identifiers for Language Resources using a standardised nomenclature; see the Background paragraph in Section 5-2-17 above for details. The ISLRN portal can be accessed at http://www.islrn.org.

 

*** About the JRC ***

As the Commission's in-house science service, the Joint Research Centre's mission is to provide EU policies with independent, evidence-based scientific and technical support throughout the whole policy cycle.
Within its research in the field of global security and crisis management, the JRC develops open-source intelligence and analysis systems that can automatically harvest and analyse huge amounts of multilingual information from internet-based sources. In this context, the JRC has developed Language Technology resources and tools that can be used for highly multilingual text analysis and cross-lingual applications.
To find out more about JRC's research in open source information monitoring, please visit https://ec.europa.eu/jrc/en/research-topic/internet-surveillance-systems. To access media monitoring applications directly, go to http://emm.newsbrief.eu/overview.html.

 

*** About ELRA ***
The European Language Resources Association (ELRA) is a non-profit making organisation founded by the European Commission in 1995, with the mission of providing a clearing house for language resources and promoting Human Language Technologies (HLT).
To find out more about ELRA, please visit our web site: http://www.elra.info

For more information, contact admin@islrn.org
 

Top

5-2-19ELRA News

We are happy to announce that 1 new Written Corpus and 1 new Terminological Resource are now available in our catalogue.

ELRA-W0081 Khresmoi manually annotated reference corpus
ISLRN: 764-036-829-417-7
This corpus is a collection of Khresmoi English web documents annotated with key entities (such as disease, drug). The corpus is divided into two parts:
1. The initial corpus: 625 documents from the Genetics Home Reference data set, automatically annotated with anatomical locations and diseases, and manually corrected by 3-4 annotators. Size of documents: between 26 and 8,306 tokens each.
2. The main corpus: 6,950 English documents from the Khresmoi crawl and 5,518 English Wikipedia pages, automatically annotated through the GATE Platform for Anatomy, Disease, Drug and Investigation. Size of documents: between 200 and 2,000 tokens each.
The corpus uses the GATE XML format.
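As an aside, GATE's standoff XML can be inspected with standard tools. The following minimal Python sketch lists entity annotations; the file name and annotation types are illustrative assumptions, not part of the corpus documentation:

import xml.etree.ElementTree as ET

root = ET.parse("khresmoi_doc_0001.xml").getroot()  # hypothetical file name
for ann_set in root.iter("AnnotationSet"):
    for ann in ann_set.iter("Annotation"):
        # GATE standoff annotations carry Type, StartNode and EndNode attributes
        if ann.get("Type") in {"Disease", "Drug"}:
            print(ann.get("Type"), ann.get("StartNode"), ann.get("EndNode"))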
For more information, see: http://catalog.elra.info/product_info.php?products_id=1237

 

ELRA-T0375 ACL RD-TEC: A Reference Dataset for Terminology Extraction and Classification Research in Computational Linguistics
ISLRN: 699-305-362-089-6
This reference dataset for terminology extraction and classification research consists of manually annotated English terms extracted from the ACL Anthology Reference Corpus (ACL ARC). The dataset, called ACL RD-TEC, comprises more than 69,000 candidate terms, each manually annotated as valid or invalid; valid terms are further classified as technology or non-technology terms.
For more information, see: http://catalog.elra.info/product_info.php?products_id=1236

 

 

For more information on the catalogue, please contact Valérie Mapelli mailto:mapelli@elda.org

 

 

Visit our On-line Catalogue: http://catalog.elra.info
Visit the Universal Catalogue: http://universal.elra.info
Archives of ELRA Language Resources Catalogue Updates: http://www.elra.info/en/catalogues/language-resources-announcements/


Follow us on Twitter:
@ELRANews

Top

5-2-20Forensic database of voice recordings of 500+ Australian English speakers

Forensic database of voice recordings of 500+ Australian English speakers

We are pleased to announce that the forensic database of voice recordings of 500+ Australian English speakers is now published.

The database was collected by the Forensic Voice Comparison Laboratory, School of Electrical Engineering & Telecommunications, University of New South Wales, as part of an Australian Research Council-funded Linkage Project on making demonstrably valid and reliable forensic voice comparison a practical everyday reality in Australia. The project was conducted in partnership with: Australian Federal Police, New South Wales Police, Queensland Police, National Institute of Forensic Sciences, Australasian Speech Sciences and Technology Association, Guardia Civil, and Universidad Autónoma de Madrid.

The database includes multiple non-contemporaneous recordings of most speakers. Each speaker is recorded in three different speaking styles representative of common styles found in forensic casework. Recordings were made under high-quality conditions, and extraneous noises and crosstalk have been manually removed. The high-quality audio can be processed to reflect recording conditions found in forensic casework.
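For illustration, one plausible degradation is a landline-style band-limit. The Python sketch below uses a hypothetical file name, and the database itself prescribes no particular processing chain; it simply restricts a recording to the traditional telephone band:

import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

rate, audio = wavfile.read("speaker042_interview.wav")  # hypothetical file name
audio = audio.astype(np.float64)

# 4th-order Butterworth band-pass over the 300-3400 Hz telephone band
sos = butter(4, [300.0, 3400.0], btype="bandpass", fs=rate, output="sos")
degraded = sosfilt(sos, audio)

# rescale and save as 16-bit PCM
degraded *= 32767.0 / np.max(np.abs(degraded))
wavfile.write("speaker042_telephone.wav", rate, degraded.astype(np.int16))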

The database can be accessed at: http://databases.forensic-voice-comparison.net/

Top

5-2-21Audio and Electroglottographic speech recordings

 

Audio and Electroglottographic speech recordings from several languages

We are happy to announce the public availability of speech recordings made as part of the UCLA project 'Production and Perception of Linguistic Voice Quality'.

http://www.phonetics.ucla.edu/voiceproject/voice.html

Audio and EGG recordings are available for Bo, Gujarati, Hmong, Mandarin, Black Miao, Southern Yi, Santiago Matatlan/San Juan Guelavia Zapotec; audio recordings (no EGG) are available for English and Mandarin. Recordings of Jalapa Mazatec extracted from the UCLA Phonetic Archive are also posted. All recordings are accompanied by explanatory notes and wordlists, and most are accompanied by Praat textgrids that locate target segments of interest to our project.
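For example, the labelled target segments can be pulled out of a textgrid programmatically. This hedged Python sketch assumes the third-party textgrid package (pip install textgrid); the file name and tier layout are hypothetical, so consult the notes accompanying each language:

import textgrid

tg = textgrid.TextGrid.fromFile("gujarati_speaker01.TextGrid")  # hypothetical name
for tier in tg.tiers:
    if not isinstance(tier, textgrid.IntervalTier):
        continue  # skip point tiers
    for interval in tier:
        if interval.mark.strip():  # keep only labelled intervals
            print(tier.name, interval.mark, interval.minTime, interval.maxTime)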

Analysis software developed as part of the project – VoiceSauce for audio analysis and EggWorks for EGG analysis – and all project publications are also available from this site. All preliminary analyses of the recordings using these tools (i.e. acoustic and EGG parameter values extracted from the recordings) are posted on the site in large data spreadsheets.

All of these materials are made freely available under a Creative Commons Attribution-NonCommercial-ShareAlike-3.0 Unported License.

This project was funded by NSF grant BCS-0720304 to Pat Keating, Abeer Alwan and Jody Kreiman of UCLA, and Christina Esposito of Macalester College.

Pat Keating (UCLA)

Top

5-3 Software
5-3-1ROCme!: a free tool for audio corpora recording and management

ROCme!: a new free tool for recording and managing audio corpora.

The ROCme! software enables streamlined, autonomous and paperless management of the recording of read-speech corpora.

Key features:
- free
- compatible with Windows and Mac
- configurable interface for collecting speaker metadata
- speakers scroll through the sentences on screen and record themselves autonomously
- configurable audio format

Available for download at:
www.ddl.ish-lyon.cnrs.fr/rocme

 
Top

5-3-2VocalTractLab 2.0 : A tool for articulatory speech synthesis

VocalTractLab 2.0 : A tool for articulatory speech synthesis

It is my pleasure to announce the release of the new major version 2.0 of VocalTractLab. VocalTractLab is an articulatory speech synthesizer and a tool to visualize and explore the mechanism of speech production with regard to articulation, acoustics, and control. It is available from http://www.vocaltractlab.de/index.php?page=vocaltractlab-download .
Compared to version 1.0, the new version brings many improvements in terms of the implemented models of the vocal tract, the vocal folds, the acoustic simulation, and articulatory control, as well as in terms of the user interface. Most importantly, the new version comes together with a manual.

If you like, give it a try. Reports on bugs and any other feedback are welcome.

Peter Birkholz

Top

5-3-3Bob signal-processing and machine learning toolbox (v.1.2.0)


The release 1.2.0 of the Bob signal-processing and machine learning toolbox is available.

Bob provides both efficient implementations of several machine learning algorithms and a framework to help researchers publish reproducible research. It is developed by the Biometrics Group at Idiap in Switzerland.

The previous release of Bob provided:
* image, video and audio I/O interfaces (e.g. jpg, avi, wav),
* database accessors (e.g. FRGC, Labeled Faces in the Wild, and many others),
* image processing: Local Binary Patterns (LBPs), Gabor jets, SIFT,
* machines and trainers such as Support Vector Machines (SVMs), k-means, Gaussian Mixture Models (GMMs), Inter-Session Variability modelling (ISV), Joint Factor Analysis (JFA), Probabilistic Linear Discriminant Analysis (PLDA) and the Bayesian intra/extra-personal classifier.

The new release brings the following features and improvements:
* unified implementation of Local Binary Patterns (LBPs),
* implementation of Histograms of Oriented Gradients (HOG),
* total variability (i-vector) implementation,
* conjugate-gradient-based implementation of logistic regression,
* improved multi-layer perceptron implementation (back-propagation can now easily be combined with any optimizer, e.g. L-BFGS),
* pseudo-inverse-based method for Linear Discriminant Analysis,
* covariance-based method for Principal Component Analysis,
* whitening and within-class covariance normalization techniques,
* a module for object detection and keypoint localization (bob.visioner),
* a module for audio processing, including feature extraction such as LFCC and MFCC,
* improved extensions (satellite packages) that now support both Python and C++ code within an easy-to-use framework,
* improved documentation and new tutorials,
* support for Intel's MKL (in addition to ATLAS),
* extended platform support (Arch Linux).

This release represents a major milestone in Bob, with plenty of functionality improvements (>640 commits in total) and plenty of bug fixes.
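As a flavour of what one of these building blocks computes, here is a plain-numpy sketch of the basic 8-neighbour Local Binary Pattern operator. It is illustrative only and does not use Bob's API, which implements LBPs (and many variants) far more efficiently:

import numpy as np

def lbp8(image):
    """LBP code for each interior pixel of a 2-D grayscale array."""
    h, w = image.shape
    c = image[1:-1, 1:-1]  # centre pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]  # 8 neighbours, clockwise
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = image[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= c).astype(np.uint8) << bit
    return codes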

    • Sources and documentation
    • Binary packages:
    • Ubuntu: 10.04, 12.04, 12.10 and 13.04
    • Mac OS X: works with 10.6 (Snow Leopard), 10.7 (Lion) and 10.8 (Mountain Lion)

For instructions on how to install the pre-packaged version on Ubuntu or OS X, consult our quick installation instructions (N.B. the OS X MacPorts package has not yet been upgraded; this will be done very soon, cf. https://trac.macports.org/ticket/39831).

Best regards,
Elie Khoury (on behalf of the Biometrics Group at Idiap, led by Sebastien Marcel)

--
Dr. Elie Khoury
Postdoctoral researcher, Biometric Person Recognition Group
Idiap Research Institute (Switzerland)
Tel: +41 27 721 77 23
Top

5-3-4COVAREP: A Cooperative Voice Analysis Repository for Speech Technologies
======================
CALL for contributions
======================
 
We are pleased to announce the creation of an open-source repository of advanced speech processing algorithms called COVAREP (A Cooperative Voice Analysis Repository for Speech Technologies). COVAREP has been created as a GitHub project (https://github.com/covarep/covarep) where researchers in speech processing can store original implementations of published algorithms.
 
Over the past few decades a vast array of advanced speech processing algorithms has been developed, often offering significant improvements over the existing state-of-the-art. Such algorithms can have a reasonably high degree of complexity and, hence, can be difficult to accurately re-implement based on article descriptions. Another issue is the so-called 'bug magnet effect', with re-implementations frequently having significant differences from the original. The consequence of all this has been that many promising developments have been under-exploited or discarded, with researchers tending to stick to conventional analysis methods.
 
With the COVAREP repository we hope to address this by encouraging authors to share original implementations of their algorithms, thus producing a single de facto version for the speech community to refer to.
 
We envisage a range of benefits to the repository:
1) Reproducible research: COVAREP will allow fairer comparison of algorithms in published articles.
2) Encouraged usage: the free availability of these algorithms will encourage researchers from a wide range of speech-related disciplines (both in academia and industry) to exploit them for their own applications.
3) Feedback: as a GitHub project users will be able to offer comments on algorithms, report bugs, suggest improvements etc.
 
SCOPE
We welcome contributions from a wide range of speech processing areas, including (but not limited to): Speech analysis, synthesis, conversion, transformation, enhancement, speech quality, glottal source/voice quality analysis, etc.
 
REQUIREMENTS
In order to achieve a reasonable standard of consistency and homogeneity across algorithms, we have compiled a list of requirements for prospective contributors to the repository. However, the list is not intended to be so strict as to discourage contributions.
  • Only published work can be added to the repository.
  • The code must be available as open source.
  • Algorithms should be coded in Matlab; however, we strongly encourage authors to make the code compatible with Octave in order to maximize usability.
  • Contributions have to comply with a coding convention (see the GitHub site for the convention and a template). The convention only normalizes the inputs/outputs and the documentation; there is no restriction on the content of the functions (though comments are obviously encouraged).
 
LICENCE
Getting contributing institutions to agree to a homogeneous IP policy would be close to impossible. As a result, COVAREP is a repository and not a toolbox, and each algorithm will have its own licence associated with it. Though flexible regarding licence types, contributions will need a licence which is compatible with the repository, i.e. {GPL, LGPL, X11, Apache, MIT} or similar. We encourage contributors to try to obtain LGPL licences from their institutions in order to be more industry-friendly.
 
CONTRIBUTE!
We believe that the COVAREP repository has great potential benefit for the speech research community, and we hope that you will consider contributing your published algorithms to it. If you have any questions, comments, issues, etc. regarding COVAREP, please contact us at one of the email addresses below. Please forward this email to others who may be interested.
 
Existing contributions include: algorithms for spectral envelope modelling, adaptive sinusoidal modelling, fundamental frequency/voicing decision/glottal closure instant detection, and methods for detecting non-modal phonation types.
 
Gilles Degottex <degottex@csd.uoc.gr>, John Kane <kanejo@tcd.ie>, Thomas Drugman <thomas.drugman@umons.ac.be>, Tuomo Raitio <tuomo.raitio@aalto.fi>, Stefan Scherer <scherer@ict.usc.edu>
 
 
Top

5-3-5Release of version 2 of FASST (Flexible Audio Source Separation Toolbox)
Release of version 2 of FASST (Flexible Audio Source Separation Toolbox): http://bass-db.gforge.inria.fr/fasst/

This toolbox is intended to speed up the design and to automate the implementation of new model-based audio source separation algorithms. It has the following additions compared to version 1:
* core in C++
* user scripts in MATLAB or Python
* speedup
* multichannel audio input

We provide two examples:
1. two-channel instantaneous NMF
2. real-world speech enhancement (2nd CHiME Challenge, Track 1)
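For readers new to the model, the core NMF building block is easy to sketch: multiplicative updates factorizing a nonnegative magnitude spectrogram V into spectral templates W and activations H. This is a generic textbook sketch in Python (Lee & Seung-style updates), not FASST's actual algorithm or API:

import numpy as np

def nmf(V, n_components, n_iter=200, eps=1e-12):
    """Factorize V (freq x time, nonnegative) as W @ H."""
    rng = np.random.default_rng(0)
    F, T = V.shape
    W = rng.random((F, n_components)) + eps
    H = rng.random((n_components, T)) + eps
    for _ in range(n_iter):
        # Euclidean-distance multiplicative updates
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H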
Top

5-3-6Cantor Digitalis, an open-source real-time singing synthesizer controlled by hand gestures.

We are glad to announce the public release of Cantor Digitalis, an open-source real-time singing synthesizer controlled by hand gestures.


It can be used e.g. for making music or for singing voice pedagogy.

A wide variety of voices is available, from the classic vocal quartet (soprano, alto, tenor, bass) to the extreme colors of childish, breathy, roaring, etc. voices. All the features of vocal sounds are entirely under control, as the synthesis method is based on a mathematical model of voice production rather than on prerecorded segments.
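The source-filter idea behind such model-based synthesis can be caricatured in a few lines: a glottal-like pulse train passed through formant resonators. This Python sketch is purely illustrative, with rough /a/ formant values; Cantor Digitalis's actual voice model is far more sophisticated:

import numpy as np
from scipy.signal import lfilter

fs = 44100
f0 = 220.0                                     # sung pitch (A3)
n = fs // 2                                    # half a second of samples
source = (np.arange(n) % int(fs / f0) == 0).astype(float)  # crude glottal pulses

voice = source
for freq, bw in [(800.0, 80.0), (1150.0, 90.0), (2900.0, 120.0)]:
    # one second-order resonator per formant (cascade formant synthesis)
    r = np.exp(-np.pi * bw / fs)
    theta = 2 * np.pi * freq / fs
    b, a = [1.0 - r], [1.0, -2.0 * r * np.cos(theta), r**2]
    voice = lfilter(b, a, voice)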

The instrument is controlled using chironomy, i.e. hand gestures, with the help of interfaces like stylus or fingers on a graphic tablet, or computer mouse. Vocal dimensions such as the melody, vocal effort, vowel, voice tension, vocal tract size, breathiness etc. can easily and continuously be controlled during performance, and special voices can be prepared in advance or using presets.

Check out the capabilities of Cantor Digitalis through performance extracts from the ensemble Chorus Digitalis:
http://youtu.be/_LTjM3Lihis?t=13s.

In practice, this release provides:
  • the synthesizer application
  • the source code in the form of a Max package (GPL-like license)
  • documentation for the musician and another for the developer
What do you need?
  • a Mac running OS X
  • ideally a Wacom graphic tablet, but it also works with your computer mouse
  • for developers, the Max software
Interested?
  • To download Cantor Digitalis, click here
  • To subscribe to the Cantor Digitalis newsletter and/or the forum list, or to contact the developers, click here
  • To learn about Chorus Digitalis, the Cantor Digitalis ensemble, and watch videos of performances, click here
  • For more details about Cantor Digitalis, click here
 
Regards,
 
The Cantor Digitalis team (who loves feedback — cantordigitalis@limsi.fr)
Christophe d'Alessandro, Lionel Feugère, Olivier Perrotin
http://cantordigitalis.limsi.fr/
Top


