ISCA - International Speech Communication Association


ISCApad #295

Monday, January 09, 2023 by Chris Wellekens

5-2 Database
5-2-1 Linguistic Data Consortium (LDC) update (December 2022)


In this newsletter:
LDC 2023 membership discounts now available 
Approaching deadline for Spring 2023 data scholarship applications
30th Anniversary Highlight: AMR 

New publications:
CAMIO Transcription Languages
Global TIMIT Thai
Third DIHARD Challenge Evaluation

LDC 2023 membership discounts now available 
Now through March 1, 2023, current 2022 members receive a 10% discount for renewing their membership, and new or returning organizations receive a 5% discount. Membership remains the most economical way to access current and past LDC releases. Consult Join LDC for details on membership options and benefits. 

Approaching deadline for Spring 2023 data scholarship applications
Attention students: don’t miss out on the chance to receive no-cost access to LDC data for your research. Applications for Spring 2023 data scholarships are due January 15, 2023. For more information on requirements and program rules, see LDC Data Scholarships.

30th Anniversary Highlight: AMR 
Abstract Meaning Representation (AMR) annotation was developed by LDC, SDL/Language Weaver, Inc., the University of Colorado's Computational Language and Educational Research group, and the Information Sciences Institute at the University of Southern California. It is a semantic representation language that captures 'who is doing what to whom' in a sentence. Each sentence is paired with a graph that represents its whole-sentence meaning in a tree-structure. AMR utilizes PropBank frames, non-core semantic roles, within-sentence coreference, named entity annotation, modality, negation, questions, quantities, and so on to represent the semantic structure of a sentence largely independent of its syntax.
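As an illustration, the sentence 'The boy wants to go' (the standard example from the AMR literature) is paired with a rooted graph, conventionally written in PENMAN notation; want-01 and go-01 are PropBank frames, and the reused variable b marks the within-sentence coreference (the boy is both the wanter and the goer):

```
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-01
            :ARG0 b))
```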

LDC’s Catalog contains three cumulative English AMR publications: Release 1.0 (LDC2014T12), Release 2.0 (LDC2017T10), and Release 3.0 (LDC2020T02). The cumulative result, AMR 3.0, is a semantic treebank of 59,255 English natural language sentences from broadcast conversations, newswire, weblogs, web discussion forums, fiction, and web text, and includes multi-sentence annotations.

LDC has also published Chinese Abstract Meaning Representation 1.0 (LDC2019T07) and 2.0 (LDC2021T13), developed by Brandeis University and Nanjing Normal University. These corpora contain AMR annotations for approximately 20,000 sentences from Chinese Treebank 8.0 (LDC2013T21). Chinese AMR follows the basic principles developed for English, making adaptations where necessary to accommodate Chinese phenomena.

Abstract Meaning Representation 2.0 - Four Translations (LDC2020T07), developed by the University of Edinburgh, School of Informatics, consists of Spanish, German, Italian, and Chinese Mandarin translations of a subset of sentences from AMR 2.0.

Visit LDC’s Catalog for more details about these publications.  

New publications:
CAMIO Transcription Languages was developed by LDC and contains nearly 70,000 images of machine printed text with corresponding annotations and transcripts in 13 languages: Arabic, Chinese, English, Farsi, Hindi, Japanese, Kannada, Korean, Russian, Tamil, Thai, Urdu, and Vietnamese. This corpus is a subset of data created for a broader effort to support the development and evaluation of optical character recognition and related technologies for 35 languages across 24 unique script types.

Most images were annotated for text localization, resulting in over 2.3M line-level bounding boxes; 1,250 images per language were also annotated with orthographic transcriptions of each line plus specification of reading order, yielding over 2.4M tokens of transcribed text. The resulting annotations are represented in an XML output format defined for this corpus. Data for each language is partitioned into train, validation, and test sets.
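To give a feel for consuming such line-level annotations, here is a sketch that parses a hypothetical XML fragment; the element and attribute names below are invented for illustration and are not the actual schema defined for this corpus:

```python
import xml.etree.ElementTree as ET

# Hypothetical annotation fragment (invented schema, not CAMIO's).
doc = """
<image id="img_0001" language="tha">
  <line order="1" x="120" y="80" w="640" h="42">
    <text>example transcription</text>
  </line>
</image>
"""

root = ET.fromstring(doc)
for line in root.iter("line"):
    # Bounding box of the text line, plus its reading order and transcript.
    box = tuple(int(line.get(k)) for k in ("x", "y", "w", "h"))
    print(line.get("order"), box, line.findtext("text").strip())
```

A real pipeline would iterate over one such file per image and group lines by the declared reading order.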

2022 members can access this corpus through their LDC accounts. Non-members may license this data for $2000.


Global TIMIT Thai consists of 12 hours of read speech and time-aligned transcripts in Standard Thai from 50 speakers (33 female, 17 male) reading 120 sentences selected from the Thai National Corpus, the Thai Junior Encyclopedia, and Thai Wikipedia, for a total of 6000 utterances. Data was collected in 2016. Speakers were recruited in the Bangkok metropolitan area; they were native Thais, fluent in Standard Thai, and literate.

This data set was developed as part of LDC’s Global TIMIT project which aims to create a series of corpora in a variety of languages with a similar set of key features as in the original TIMIT Acoustic-Phonetic Continuous Speech Corpus (LDC93S1) which was designed for acoustic-phonetic studies and for the development and evaluation of automatic speech recognition systems.

2022 members can access this corpus through their LDC accounts. Non-members may license this data for $750.


Third DIHARD Challenge Evaluation was developed by LDC and contains 33 hours of English and Chinese speech data along with corresponding annotations used in support of the Third DIHARD Challenge.

The DIHARD third development and evaluation sets were drawn from diverse sources including monologues, map task dialogues, broadcast interviews, sociolinguistic interviews, meeting speech, speech in restaurants, clinical recordings, and amateur web videos. Annotations include diarization and segmentation.

2022 members can access this corpus through their LDC accounts. Non-members may license this data for $300.

To unsubscribe from this newsletter, log in to your LDC account and uncheck the box next to “Receive Newsletter” under Account Options; or contact LDC for assistance.

Membership Coordinator

Linguistic Data Consortium

University of Pennsylvania

T: +1-215-573-1275


M: 3600 Market St. Suite 810

      Philadelphia, PA 19104



5-2-2 ELRA - Language Resources Catalogue - Update (November 2022)

We are happy to announce that 1 new written corpus is now available in our catalogue. Moreover, 4 speech resources are now available at reduced fees.


1) New Language Resource:

German Political Speeches Corpus

ISLRN: 381-445-879-769-5

This corpus consists of a collection of political speeches in German crawled from the online archives of the German Presidency (Bundespräsident) and the Chancellery (Bundesregierung). For the German Presidency, speeches are available from July 1, 1984 to February 17, 2012, and the corpus contains a total of 1,442 texts comprising 2,392,074 tokens. For the German Chancellery, the corpus contains a total of 1,831 texts comprising 3,891,588 tokens, covering the period from December 11, 1998 to December 6, 2011. This part contains speeches from the Chancellor but also from other politicians.


2) Reduced fees for the following speech resources:

For more information on the catalogue or if you would like to enquire about having your resources distributed by ELRA, please contact us.

Visit the ELRA Catalogue of Language Resources
Visit the Universal Catalogue 
Archives of ELRA Language Resources Catalogue Updates











5-2-3 Speechocean – update (August 2019)


English Speech Recognition Corpus - Speechocean


At present, Speechocean has produced more than 24,000 hours of English speech recognition corpora, including some rare corpora recorded by children. These corpora were recorded by 23,000 speakers in total. Please check the list below:





  • American English
  • Indian English
  • British English
  • Australian English
  • Chinese (Mainland) English
  • Canadian English
  • Japanese English
  • Singapore English
  • Russian English
  • Romanian English
  • French English
  • Chinese (Hong Kong) English
  • Italian English
  • Portuguese English
  • Spanish English
  • German English
  • Korean English
  • Indonesian English


If you have any further inquiries, please do not hesitate to contact us.













5-2-4 Google's Language Model benchmark
 Here is a brief description of the project.

'The purpose of the project is to make available a standard training and test setup for language modeling experiments.

The training/held-out data was produced from a download using a combination of Bash shell and Perl scripts distributed here.

This also means that your results on this data set are reproducible by the research community at large.

Besides the scripts needed to rebuild the training/held-out data, it also makes available log-probability values for each word in each of ten held-out data sets, for each of the following baseline models:

  • unpruned Katz (1.1B n-grams),
  • pruned Katz (~15M n-grams),
  • unpruned Interpolated Kneser-Ney (1.1B n-grams),
  • pruned Interpolated Kneser-Ney (~15M n-grams)


Happy benchmarking!'
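These per-word log-probabilities make it possible, for example, to recompute a baseline model's perplexity on a held-out set. A minimal sketch (the log-probability values below are invented, and base-10 logarithms are assumed):

```python
# Hypothetical per-word log10-probabilities for one held-out segment
# (invented values, not actual benchmark output).
logprobs = [-2.1, -1.3, -3.0, -0.7, -2.4]

# Perplexity is the inverse geometric mean of the word probabilities:
# PPL = 10 ** (-(1/N) * sum(log10 p_i))
ppl = 10 ** (-sum(logprobs) / len(logprobs))
print(round(ppl, 2))
```

Summing each model's published per-word values this way lets results be compared directly across sites.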


5-2-5 Forensic database of voice recordings of 500+ Australian English speakers


We are pleased to announce that the forensic database of voice recordings of 500+ Australian English speakers is now published.

The database was collected by the Forensic Voice Comparison Laboratory, School of Electrical Engineering & Telecommunications, University of New South Wales, as part of an Australian Research Council funded Linkage Project on making demonstrably valid and reliable forensic voice comparison a practical everyday reality in Australia. The project was conducted in partnership with the Australian Federal Police, New South Wales Police, Queensland Police, the National Institute of Forensic Sciences, the Australasian Speech Science and Technology Association, Guardia Civil, and Universidad Autónoma de Madrid.

The database includes multiple non-contemporaneous recordings of most speakers. Each speaker was recorded in three different speaking styles representative of some common styles found in forensic casework. Recordings were made under high-quality conditions, and extraneous noises and crosstalk have been manually removed. The high-quality audio can be processed to reflect recording conditions found in forensic casework.

The database can be accessed at:


5-2-6 Audio and Electroglottographic speech recordings


Audio and Electroglottographic speech recordings from several languages

We are happy to announce the public availability of speech recordings made as part of the UCLA project 'Production and Perception of Linguistic Voice Quality'.

Audio and EGG recordings are available for Bo, Gujarati, Hmong, Mandarin, Black Miao, Southern Yi, Santiago Matatlan/ San Juan Guelavia Zapotec; audio recordings (no EGG) are available for English and Mandarin. Recordings of Jalapa Mazatec extracted from the UCLA Phonetic Archive are also posted. All recordings are accompanied by explanatory notes and wordlists, and most are accompanied by Praat textgrids that locate target segments of interest to our project.

Analysis software developed as part of the project – VoiceSauce for audio analysis and EggWorks for EGG analysis – and all project publications are also available from this site. All preliminary analyses of the recordings using these tools (i.e. acoustic and EGG parameter values extracted from the recordings) are posted on the site in large data spreadsheets.

All of these materials are made freely available under a Creative Commons Attribution-NonCommercial-ShareAlike-3.0 Unported License.

This project was funded by NSF grant BCS-0720304 to Pat Keating, Abeer Alwan and Jody Kreiman of UCLA, and Christina Esposito of Macalester College.

Pat Keating (UCLA)


5-2-7 EEG, face tracking and audio: 24 GB data set Kara One, Toronto, Canada

We are making 24 GB of a new dataset, called Kara One, freely available. This database combines 3 modalities (EEG, face tracking, and audio) during imagined and articulated speech using phonologically-relevant phonemic and single-word prompts. It is the result of a collaboration between the Toronto Rehabilitation Institute (in the University Health Network) and the Department of Computer Science at the University of Toronto.


In the associated paper (abstract below), we show how to accurately classify imagined phonological categories solely from EEG data. Specifically, we obtain up to 90% accuracy in classifying imagined consonants from imagined vowels and up to 95% accuracy in classifying stimulus from active imagination states using advanced deep-belief networks.


Data from 14 participants are available here:


If you have any questions, please contact Frank Rudzicz at


Best regards,




PAPER: Shunan Zhao and Frank Rudzicz (2015) Classifying phonological categories in imagined and articulated speech. In Proceedings of ICASSP 2015, Brisbane, Australia.

ABSTRACT: This paper presents a new dataset combining 3 modalities (EEG, facial, and audio) during imagined and vocalized phonemic and single-word prompts. We pre-process the EEG data, compute features for all 3 modalities, and perform binary classification of phonological categories using a combination of these modalities. For example, a deep-belief network obtains accuracies over 90% on identifying consonants, which is significantly more accurate than two baseline support vector machines. We also classify between the different states (resting, stimuli, active thinking) of the recording, achieving accuracies of 95%. These data may be used to learn multimodal relationships, and to develop silent-speech and brain-computer interfaces.



5-2-8 TORGO database free for academic use

In the spirit of the season, I would like to announce the immediate availability of the TORGO database, free in perpetuity for academic use. This database combines acoustics and electromagnetic articulography from 8 individuals with speech disorders and 7 without, and totals over 18 GB. These data can be used for multimodal models (e.g., for acoustic-articulatory inversion), models of pathology, and augmented speech recognition, for example. More information (and the database itself) can be found here:



5-2-9 Datatang – update (2019)

Datatang is a leading global data provider specializing in customized data solutions, focusing on a variety of speech, image, and text data collection, annotation, and crowdsourcing services.


Summary of the new datasets (2018) and a brief plan for 2019.




  • Speech data (with annotation) completed in 2018: British English; Spanish (Mexico/Colombia); Brazilian Portuguese; European Portuguese


  • Ongoing 2019 speech projects:



  • Europeans Speak English: 1000 hours of English spoken by Spanish speakers; 1000 hours by French speakers; 1000 hours by German speakers
  • Call Center Speech: 1000 hours of call center speech
  • Off-the-shelf data expansion: 1000 hours of English spoken by Chinese speakers; 1500 hours of mixed Chinese and English speech data




On top of the above, there are more planned speech data collections, such as Japanese speech data, children's speech data, dialect speech data, and so on.


Moreover, we will continue to provide these data at a competitive price while maintaining a high accuracy rate.




If you have any questions or need more details, do not hesitate to contact us.


We would be happy to send you a sample or a specification of the data.





5-2-10 SIWIS French Speech Synthesis Database
The SIWIS French Speech Synthesis Database includes high-quality French speech recordings and associated text files, aimed at building TTS systems and investigating multiple styles and emphasis. A total of 9750 utterances from various sources, such as parliament debates and novels, were uttered by a professional French voice talent. A subset of the database contains emphasised words in many different contexts. The database includes more than ten hours of speech data and is freely available.

5-2-11 JLCorpus - Emotional Speech corpus with primary and secondary emotions

To further the understanding of the wide array of emotions embedded in human speech, we introduce an emotional speech corpus. In contrast to existing speech corpora, this corpus was constructed to maintain an equal distribution of four long vowels in New Zealand English, a balance that facilitates comparison studies of emotion-related formant and glottal source features. The corpus also contains five secondary emotions along with five primary emotions. Secondary emotions are important in human-robot interaction (HRI), where the aim is to model natural conversations between humans and robots, but very few existing speech resources cover them; this work adds a speech corpus containing some secondary emotions.

Please use the corpus for emotional speech related studies. When you use it, please cite:

Jesin James, Li Tian, Catherine Watson, 'An Open Source Emotional Speech Corpus for Human Robot Interaction Applications', in Proc. Interspeech, 2018.

To access the whole corpus, including the recording supporting files, click the following link: (if you have already installed the Kaggle API, you can download it with: kaggle datasets download -d tli725/jl-corpus)

Or if you simply want the raw audio+txt files, click the following link:

The corpus was evaluated in a large-scale human perception test with 120 participants. The links to the surveys are here. For the primary emotion corpus:

For Secondary emotion corpus:

These surveys will give an overall idea about the type of recordings in the corpus.

The perceptually verified and annotated JL corpus will be given public access soon.


5-2-12 OPENGLOT – An open environment for the evaluation of glottal inverse filtering



OPENGLOT is a publicly available database designed primarily for the evaluation of glottal inverse filtering algorithms. In addition, the database can be used to evaluate formant estimation methods. OPENGLOT consists of four repositories. Repository I contains synthetic glottal flow waveforms and speech signals generated by using the Liljencrants–Fant (LF) waveform as an excitation and an all-pole vocal tract model. Repository II contains glottal flow and speech pressure signals generated using physical modelling of human speech production. Repository III contains pairs of glottal excitation and speech pressure signals generated by exciting 3D-printed plastic vocal tract replicas with LF excitations via a loudspeaker. Finally, Repository IV contains multichannel recordings (speech pressure signal, EGG, high-speed video of the vocal folds) of natural speech production.
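As a minimal illustration of the task OPENGLOT is designed to evaluate, the sketch below (plain Python, all coefficients invented) drives a known all-pole 'vocal tract' filter with noise, estimates order-2 LPC coefficients from the output, and inverse filters to recover an excitation estimate. This is a toy stand-in for glottal inverse filtering, not OPENGLOT's own evaluation code, which uses LF-type excitations and real speech:

```python
import random

# Synthesize a toy "speech" signal: a known 2-pole (all-pole) filter,
# standing in for the vocal tract, driven by Gaussian noise standing in
# for the glottal excitation. Coefficients are invented for this demo.
random.seed(0)
a1_true, a2_true = -1.2, 0.8
excitation = [random.gauss(0.0, 1.0) for _ in range(4000)]
speech = []
for n, e in enumerate(excitation):
    s = e
    if n >= 1:
        s -= a1_true * speech[n - 1]
    if n >= 2:
        s -= a2_true * speech[n - 2]
    speech.append(s)

def autocorr(x, lag):
    """Raw autocorrelation sum at a given lag."""
    return sum(x[n] * x[n - lag] for n in range(lag, len(x)))

# Order-2 LPC via the autocorrelation method: solve the 2x2 normal equations.
r0, r1, r2 = (autocorr(speech, k) for k in range(3))
det = r0 * r0 - r1 * r1
a1 = -(r0 * r1 - r1 * r2) / det
a2 = -(r0 * r2 - r1 * r1) / det

# Inverse filter A(z) = 1 + a1*z^-1 + a2*z^-2 to estimate the excitation.
residual = [speech[n] + a1 * speech[n - 1] + a2 * speech[n - 2]
            for n in range(2, len(speech))]
```

With enough samples, the estimated (a1, a2) land close to the true (-1.2, 0.8), and the residual approximates the original excitation; OPENGLOT provides the reference glottal flows needed to score such estimates on speech-like signals.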


OPENGLOT is available at:


5-2-13 Corpus Rhapsodie

We are pleased to announce the publication of a book devoted to the Rhapsodie treebank, a 33,000-word corpus of spoken French finely annotated for prosody and syntax.

Access to the publication: (see flyer)

Access to the treebank:
The freely accessible data are distributed under a Creative Commons licence.
The site also provides access to the annotation guides.


5-2-14 The My Science Tutor Children's Conversational Speech Corpus (MyST Corpus), Boulder Learning Inc.

The My Science Tutor Children's Conversational Speech Corpus (MyST Corpus) is the world's largest English children's speech corpus. It is freely available to the research community for research use. Companies can acquire the corpus for $10,000. The MyST Corpus was collected over a 10-year period, with support from over $9 million in grants from the US National Science Foundation and Department of Education, awarded to Boulder Learning Inc. (Wayne Ward, Principal Investigator).

The MyST corpus contains speech collected from 1,374 third, fourth and fifth grade students. The students engaged in spoken dialogs with a virtual science tutor in 8 areas of science. A total of 11,398 student sessions of 15 to 20 minutes produced a total of 244,069 utterances. 42% of the utterances have been transcribed at the word level. The corpus is partitioned into training and test sets to support comparison of research results across labs. All parents and students signed consent forms, approved by the University of Colorado's Institutional Review Board, that authorize distribution of the corpus for research and commercial use.

The MyST children's speech corpus contains approximately ten times as many spoken utterances as all other English children's speech corpora combined.

Additional information about the corpus, and instructions for how to acquire the corpus (and samples of the speech data) can be found on the Boulder Learning Web site at   


5-2-15 HARVARD speech corpus - native British English speaker
  • HARVARD speech corpus - native British English speaker, digital re-recording

5-2-16 Magic Data Technology Kid Voice TTS Corpus in Mandarin Chinese (November 2019)



Magic Data Technology is one of the leading artificial intelligence data service providers in the world. The company is committed to providing a wide range of customized data services in the fields of speech recognition, intelligent imaging and natural language understanding.


This corpus was recorded by a four-year-old girl born in Beijing, China. We have published 15 minutes of speech data from the corpus for non-commercial use.


The contents and corresponding descriptions of the corpus:

  • The corpus contains 15 minutes of speech data, recorded in an NC-20 acoustic studio.

  • The speaker is four years old and was born in Beijing.

  • Detailed information, such as speech data coding and speaker information, is preserved in the metadata file.

  • The speaking style is natural child speech.

  • Annotation includes four parts: pronunciation proofreading, prosody labeling, phone boundary labeling and POS tagging.

  • The annotation accuracy is higher than 99%.

  • For phone labeling, the database annotates not only phoneme boundaries but also the boundaries of silence segments.


The corpus aims to help researchers in the TTS field. It is part of a much bigger dataset (the 2.3-hour MAGICDATA Kid Voice TTS Corpus in Mandarin Chinese) recorded in the same environment. This is the first time this voice has been published!


Please note that this corpus was published with the authorization of the speaker and her parents.


Samples are available.

Do not hesitate to contact us for any questions.




5-2-17 FlauBERT: a French LM
Here is FlauBERT: a French language model trained (on the CNRS Jean Zay supercomputer) on a large and heterogeneous corpus. Along with it comes FLUE, an evaluation setup for French NLP. FlauBERT was successfully applied to complex tasks (NLI, WSD, parsing). More on
More details on this online paper: 

5-2-18 ELRA-S0408 SpeechTera Pronunciation Dictionary


ISLRN: 645-563-102-594-8
The SpeechTera Pronunciation Dictionary is a machine-readable pronunciation dictionary for Brazilian Portuguese comprising 737,347 entries. Its phonetic transcription is based on 13 linguistic varieties spoken in Brazil and contains the pronunciation of the frequent word forms found in the transcription data of SpeechTera's speech and text database (literary, newspaper, movies, miscellaneous). Each of the thirteen dialects comprises 56,719 entries.
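A machine-readable dictionary of this kind is typically loaded into a word-to-pronunciations mapping before use in ASR or TTS lexica. A minimal sketch, assuming a simple tab-separated layout (both the layout and the sample entries are invented, not SpeechTera's actual format):

```python
# Invented sample entries: word <TAB> space-separated phones, one per line.
sample = """\
casa\tk a z a
casa\tk a s a
fala\tf a l a
"""

# Build word -> list of pronunciation variants, so that dialectal variants
# of the same word form accumulate under one key.
lexicon = {}
for line in sample.splitlines():
    word, pron = line.split("\t")
    lexicon.setdefault(word, []).append(pron.split())
```

With thirteen varieties per word form, a real loader over this dictionary would accumulate up to thirteen variant pronunciations per key.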
For more information, see:

For more information on the catalogue, please contact Valérie Mapelli

If you would like to enquire about having your resources distributed by ELRA, please do not hesitate to contact us.

Visit our On-line Catalogue:
Visit the Universal Catalogue:
Archives of ELRA Language Resources Catalogue Updates:


5-2-19 Resources of ELRC Network

Paris, France, April 23, 2020

ELRA is happy to announce that Language Resources collected within the ELRC Network, funded by the European Commission, are now available from the ELRA Catalogue of Language Resources.

In total, 180 written corpora, 5 multilingual lexicons and 2 terminological resources are freely available under open licences and can be downloaded directly from the catalogue. Type 'ELRC' in the catalogue search engine to access and download the resources.

All these Language Resources can be used to support your Machine Translation solutions developments. They cover the official languages of the European Union and CEF associated countries.

More LRs coming from ELRC will be added as they become available.

About ELRC
The ELRC (European Language Resources Coordination) Network raises awareness and promotes the acquisition and the continued identification and collection of language resources in all official languages of the EU and CEF associated countries. These activities aim to help improve the quality, coverage and performance of automated translation solutions in the context of current and future CEF digital services.

To find out more about ELRC, please visit the website:

About ELRA
The European Language Resources Association (ELRA) is a non-profit-making organisation founded by the European Commission in 1995, with the mission of providing a clearing house for Language Resources and promoting Human Language Technologies. Language Resources covering various fields of HLT (including Multimodal, Speech, Written, Terminology) and a great number of languages are available from the ELRA catalogue. ELRA's strong involvement in the fields of Language Resources and Language Technologies is also emphasized at the LREC conference, organized every other year since 1998.

To find out more about ELRA, please visit the website:

For more information on the catalogue, please contact Valérie Mapelli
If you would like to enquire about having your resources distributed by ELRA, please do not hesitate to contact us.

Visit our On-line Catalogue:
Visit the Universal Catalogue:
Archives of ELRA Language Resources Catalogue Updates:


5-2-20 ELRA announces that MEDIA data are now available for free for academic research


Further to the request of the HLT French community to foster evaluation activities for man-machine dialogue systems for French language, ELRA has decided to provide a free access to the MEDIA speech corpora and evaluation package for academic research purposes.

The MEDIA data can be found in the ELRA Catalogue under the following references:

Data available from the ELRA Catalogue can be obtained easily by contacting ELRA.  

The MEDIA project was carried out within the framework of Technolangue, the French national research programme funded by the French Ministry of Research and New Technologies (MRNT) with the objective of running a campaign for the evaluation of man-machine dialogue systems for French. The campaign was distributed over two actions: an evaluation taking into account the dialogue context and an evaluation not taking into account the dialogue context.

PortMedia was a follow-up project supported by the French Research Agency (ANR). The French and Italian corpus was produced by ELDA, with the same paradigm and specifications as the MEDIA speech database but in a different domain.

For more information and/or questions, please write to

 *** About ELRA ***
The European Language Resources Association (ELRA) is a non-profit making organisation founded by the European Commission in 1995, with the mission of providing a clearing house for language resources and promoting Human Language Technologies (HLT).

To find out more about ELRA and its respective catalogue, please visit: and


5-2-21 ELRA/ELDA Communication: LT4All

Out of the 7,000+ languages spoken around the world, only a few have associated language technologies. The majority of languages can be considered 'under-resourced' or 'not supported'. This situation, very detrimental to the speakers of many languages, and specifically to speakers of indigenous languages, creates a digital divide and places many languages in danger of digital extinction, if not complete extinction.

Organized as part of the 2019 International Year of Indigenous Languages, the 1st edition of LT4All (Language Technologies for All: Enabling Linguistic Diversity and Multilingualism Worldwide) took place in Paris at the UNESCO Headquarters on December 4-6, 2019 and gathered 400 participants from various backgrounds (including language science and technology researchers, linguists, industrials, indigenous peoples, language policy and decision makers) from all over the world.

The LT4All Programme and Editorial Committees are very happy to announce that the set of research papers and posters collected on the occasion of LT4All is now available online at:

LT4All has been made possible thanks to the close cooperation between UNESCO, the Government of the Khanty-Mansiysk Autonomous Okrug-Ugra (Russian Federation), the European Language Resources Association (ELRA) and its Special Interest Group on Under-resourced Languages (SIGUL), in partnership with the UNESCO Intergovernmental Information for All Programme (IFAP) and the Interregional Library Cooperation Centre, as well as with the support of other public organizations and sponsors.

More information including the list of all the sponsors and supporters @


5-2-22 Search and Find ELRA LRs on Google Dataset Search and ELG LT Platform


ELRA is happy to announce that all the Language Resources from its Catalogue can now be searched and found on Google Dataset Search and on the ELG Language Technology platform developed within the European Language Grid project.

In order to allow indexing by Google Dataset Search, ELRA has updated the code generating the catalogue pages. The code follows the schema.org standard, and the metadata are publicly available in JSON format so that they can be used for other harvesting purposes.
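For reference, Google Dataset Search discovers datasets through schema.org Dataset metadata embedded in catalogue pages, typically as JSON-LD; a minimal sketch of such a record (all field values invented, not an actual ELRA entry) might look like:

```json
{
  "@context": "https://schema.org/",
  "@type": "Dataset",
  "name": "Example Speech Corpus",
  "description": "Illustrative record only; not an actual ELRA catalogue entry.",
  "license": "https://example.org/licence",
  "identifier": "ISLRN 000-000-000-000-0"
}
```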

The ELRA Catalogue is already indexed and harvested by well-known repositories and archives such as OLAC (Open Language Archives Community), the CLARIN Virtual Language Observatory and META-SHARE.

For 25 years now, ELRA has been distributing language resources to support research and development in various fields of Human Language Technology. The indexing on both Google Dataset Search and the ELG LT Platform is increasing the ELRA Catalogue's visibility, making the LRs known to new visitors from the Human Language Technologies, Artificial Intelligence and other related fields.

*** About ELRA ***

The European Language Resources Association (ELRA) is a non-profit making organisation founded by the European Commission in 1995, with the mission of providing a clearing house for language resources and promoting Human Language Technologies (HLT).

ELRA Catalogue of Language Resources:

More about ELRA, please visit:

*** About Google Dataset Search ***

Google Dataset Search is a search engine for datasets that enables users to search through a list of data repositories indexed through a standardised schema.

More about Google Dataset Search:

*** About European Language Grid ***

The European Language Grid (ELG) is a project funded by the European Union through the Horizon 2020 research and innovation programme. It aims to be a primary platform for Language Technology in Europe.

More about the European Language Grid project:


5-2-23 Sharing language resources (ELRA)

ELRA recognises the importance of sharing Language Resources (LRs) and making them available to the community.

Since the 2014 edition of LREC, the Language Resources and Evaluation Conference, participants have been offered the possibility to share their LRs (data, tools, web-services, etc.) when submitting a paper, uploading them in a special LREC repository set up by ELRA.

This effort of sharing LRs, linked to the LRE Map initiative for their description, contributes to creating a common repository where everyone can deposit and share data.

Despite the cancellation of LREC 2020 in Marseille, a large number of contributions was submitted, and the LREC 'Share your LRs' initiative could be carried through to the end successfully.

Repositories corresponding to each edition of the conference are available here:

For more info and questions, please write to

5-2-24The UCLA Variability Speaker Database

With NSF support, our interdisciplinary voice research team at UCLA recently put together a public database that we believe will be of interest to many members of the ISCA community. On behalf of my co-authors (Patricia Keating, Jody Kreiman, Abeer Alwan, Adam Chong), I'm writing to ask if we could advertise our database in the ISCA newsletter. We'd really appreciate your help with this.

The database, the UCLA Variability Speaker Database, is freely available through UCLA's Box cloud, which can be accessed from our lab website: I should mention that the database will also be available from the Linguistic Data Consortium (LDC) as of October 2021.
Here's a brief description of the database.
The UCLA Variability Speaker Database comprises high-quality audio recordings from 202 speakers, 101 men and 101 women, performing 12 brief speech tasks in English over three recording sessions (total amount of speech: 300-450 sec per speaker). This public database was designed to sample variability in speaking within individual speakers and across a large number of speakers. The large set of speakers (similar in age) sampled from the current university community is gender-balanced and has a variety of language backgrounds. The database can serve as a testing ground for research questions involving between-speaker variability, within-speaker variability, and text-dependent variability.
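As a quick back-of-the-envelope check on corpus size (using only the 202-speaker and 300-450 sec figures quoted above; the totals are estimates, not official statistics):

```python
# Rough size estimate for the UCLA Variability Speaker Database,
# based solely on the figures quoted above: 202 speakers, each
# contributing 300-450 seconds of speech.
n_speakers = 202
low_s, high_s = 300, 450           # seconds of speech per speaker

total_low_h = n_speakers * low_s / 3600    # lower bound, in hours
total_high_h = n_speakers * high_s / 3600  # upper bound, in hours

print(f"roughly {total_low_h:.1f} to {total_high_h:.1f} hours of speech")
```

That is on the order of 17-25 hours of read and spontaneous speech, balanced across male and female speakers.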
More details about the database are available in a readme file that can be sent on request.

--Cynthia Yoonjeong Lee
Postdoctoral Scholar, Department of Linguistics, UCLA

5-2-25Free databases in Catalan, Spanish and Arabic (ELRA and UPC Spain)

We are pleased to announce that Language Resources entrusted to ELRA for distribution and shared by the Universitat Politecnica de Catalunya (UPC), in Spain, are now available for free for academic research purposes (for ELRA institutional members) and at substantially decreased costs for commercial purposes. All data have been developed to enhance Speech technologies in Catalan, Spanish and Arabic.


The Language Resources can be found in the ELRA Catalogue under the following references:

ELRA-S0101 Spanish SpeechDat(II) FDB-1000 (ISLRN: 415-072-153-167-5)
For more information, see:
ELRA-S0102 Spanish SpeechDat(II) FDB-4000 (ISLRN: 295-399-069-106-4)
For more information, see:
ELRA-S0140 Spanish SpeechDat-Car database (ISLRN: 937-459-364-430-3)
For more information, see:
ELRA-S0141 SALA Spanish Venezuelan Database (ISLRN: 894-744-522-508-8)
For more information, see:
ELRA-S0173 SALA Spanish Mexican Database (ISLRN: 077-043-759-782-3)
For more information, see:
ELRA-S0183 OrienTel Morocco MCA (Modern Colloquial Arabic) database (ISLRN: 613-578-868-832-2)
For more information, see:
ELRA-S0184 OrienTel Morocco MSA (Modern Standard Arabic) database (ISLRN: 978-839-138-181-8)
For more information, see:
ELRA-S0185 OrienTel French as spoken in Morocco database (ISLRN: 299-422-451-969-8)
For more information, see:
ELRA-S0186 OrienTel Tunisia MCA (Modern Colloquial Arabic) database (ISLRN: 297-705-745-294-4)
For more information, see:
ELRA-S0187 OrienTel Tunisia MSA (Modern Standard Arabic) database (ISLRN: 926-401-827-806-5)
For more information, see:
ELRA-S0188 OrienTel French as spoken in Tunisia database (ISLRN:
For more information, see:
ELRA-S0207 LC-STAR Catalan phonetic lexicon (ISLRN: 102-856-174-704-7)
For more information, see:
ELRA-S0208 LC-STAR Spanish phonetic lexicon (ISLRN: 826-939-678-247-5)
For more information, see:
ELRA-S0243 SpeechDat Catalan FDB database (ISLRN:
For more information, see:
ELRA-S0306 TC-STAR Transcriptions of Spanish Parliamentary Speech (ISLRN: 972-398-693-247-4 )
For more information, see:
ELRA-S0309 TC-STAR Spanish Baseline Female Speech Database (ISLRN: 682-113-241-701-0)
For more information, see:
ELRA-S0310 TC-STAR Spanish Baseline Male Speech Database (ISLRN: 736-021-086-598-0)
For more information, see:
ELRA-S0311 TC-STAR Bilingual Voice-Conversion Spanish Speech Database (ISLRN: 254-311-004-570-0)
For more information, see:
ELRA-S0312 TC-STAR Bilingual Voice-Conversion English Speech Database (ISLRN: 522-613-023-181-1)
For more information, see:
ELRA-S0313 TC-STAR Bilingual Expressive Speech Database (ISLRN:
For more information, see:
ELRA-S0336 Spanish Festival voice male (ISLRN: 868-352-143-949-9)
For more information, see:
ELRA-S0337 Spanish Festival voice female (ISLRN: 396-262-481-019-0)
For more information, see:

For more information on the catalogue, please contact Valérie Mapelli

If you would like to enquire about having your resources distributed by ELRA, please do not hesitate to contact us.

Visit our On-line Catalogue:
Visit the Universal Catalogue:
Archives of ELRA Language Resources Catalogue Updates:


5-2-26USC 75-Speaker Speech MRI Database. Multispeaker speech production articulatory datasets of vocal tract MRI video

The 'USC 75-Speaker Speech MRI Database', a USC multispeaker speech production articulatory dataset of vocal tract MRI video, is a new freely available speech production data set with accompanying software tools:

These data contain 2D sagittal-view RT-MRI videos along with synchronized audio for 75 subjects performing linguistically motivated speech tasks, alongside the corresponding raw RT-MRI data. The dataset also includes 3D volumetric vocal tract MRI during sustained speech sounds and high-resolution static anatomical T2-weighted upper airway MRI for each subject.

Other freely available speech production datasets of articulatory data include a TIMIT articulatory data corpus and emotional speech production data, all available from:


5-2-27LR Agreement between ELRA and Datatang

Paris, France, October 24, 2022/

*LR Agreement with Datatang*


ELRA and Datatang signed a Language Resources distribution agreement to release a total
of 67 Speech Resources distributed by ELRA. With this agreement, ELRA is strengthening
its position as the leading worldwide distribution centre and Datatang is getting more
visibility on the European market.

These resources were designed and collected to boost Speech Recognition in particular.
They cover the following languages:

 * Cantonese,
 * Chinese Mandarin,
 * Various dialects from China: Changsha, Kunming, Shanghai, Sichuan,
 * Several variants of English (English from Australia, Canada, China,
   France, Germany, India, Italy, Japan, Korea, Latin America,
   Portugal, Russia, Singapore, Spain, United Kingdom, USA),
 * French,
 * German,
 * Hindi,
 * Indonesian,
 * Italian,
 * Japanese,
 * Korean,
 * Malay,
 * Portuguese (Brazilian),
 * Russian,
 * Spanish (including non-Hispanic Spanish),
 * Thai,
 * Uyghur,
 * Vietnamese.

The detailed list of all 67 Language Resources from Datatang can be found here <>.

*About Datatang*

Founded in 2011, Datatang (Stock code: 831428) is a global AI data asset and data solution provider. Datatang offers solutions for R&D needs with over 500 prepared AI datasets covering ASR, TTS, CV and NLP. Relying on its own data resources, technical advantages and intensive data processing experience, Datatang provides data services to 1,000+ companies and institutions worldwide.

To find out more about Datatang, please visit the website:

*About ELRA*

The European Language Resources Association (ELRA) is a non-profit-making organisation
founded by the European Commission in 1995, with the mission of providing a clearing
house for Language Resources and promoting Human Language Technologies. Language
Resources covering various fields of HLT (including Multimodal, Speech, Written,
Terminology) and a great number of languages are available from the ELRA catalogue.
ELRA's strong involvement in the fields of Language Resources and Language Technologies
is also emphasized at the LREC conference, organized every other year since 1998.

To find out more about ELRA, please visit the website:

For more information on the catalogue or if you would like to enquire about having your
resources distributed by ELRA, please contact us <>.

