
ISCApad #253

Tuesday, July 09, 2019 by Chris Wellekens

5-2 Database
5-2-1 Linguistic Data Consortium (LDC) update (June 2019)

 

In this newsletter:

 

New Publications:

 

DEFT Spanish Committed Belief Annotation

 

USC-SFI MALACH Interviews and Transcripts English – Speech Recognition Edition

 

 


New publications:

 

 

 

(1) DEFT Spanish Committed Belief Annotation was developed by LDC and consists of approximately 67,000 tokens of Spanish discussion forum text annotated for 'committed belief,' which marks the level of commitment displayed by the author to the truth of the propositions expressed in the text.

 

DARPA's Deep Exploration and Filtering of Text (DEFT) program aimed to address remaining capability gaps in state-of-the-art natural language processing technologies related to inference, causal relationships, and anomaly detection. LDC supported the DEFT program by collecting, creating, and annotating a variety of data sources.

 

DEFT Spanish Committed Belief Annotation is distributed via web download.

 

2019 Subscription Members will automatically receive copies of this corpus. 2019 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for $1000.

 

*

 

(2) USC-SFI MALACH Interviews and Transcripts English – Speech Recognition Edition was developed by IBM as part of the MALACH (Multilingual Access to Large Spoken ArCHives) Project and contains approximately 168 hours of interviews from 682 Holocaust witnesses along with transcripts, a lexicon and other documentation. This release augments USC-SFI MALACH Interviews and Transcripts English (LDC2012S05) by modifying and updating a subset of the original corpus for use with speech recognition systems, such as the Kaldi toolkit.

 

Specifically, the audio data has been converted from unsegmented mpeg files to a segmented flac compressed format. The speaker-turn, time-stamped transcripts have been updated to an utterance-by-utterance format. A lexicon mapping words to phonemes is provided, and the data is divided into development and training sets.
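
A minimal illustrative sketch (Python) of reading such a word-to-phoneme lexicon, assuming one 'WORD phone1 phone2 ...' entry per line as in typical Kaldi-style recipes; the actual file layout shipped with this release may differ:

    from collections import defaultdict

    def load_lexicon(path):
        """Map each word to its list of pronunciations (phoneme sequences)."""
        lexicon = defaultdict(list)
        with open(path, encoding="utf-8") as f:
            for line in f:
                parts = line.split()
                if len(parts) >= 2:
                    lexicon[parts[0]].append(parts[1:])
        return lexicon

    # Hypothetical usage: load_lexicon("lexicon.txt")["interview"]
    # might return [['IH', 'N', 'T', 'ER', 'V', 'Y', 'UW']].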

 

The goal of the MALACH project was to develop methods for improved access to large multinational spoken archives in order to advance the state of the art of automatic speech recognition and information retrieval. The characteristics of the USC-SFI collection -- unconstrained, natural speech filled with disfluencies, heavy accents, age-related coarticulations, un-cued speaker and language switching, and emotional speech -- were considered well-suited for that task.

 

USC-SFI MALACH Interviews and Transcripts English – Speech Recognition Edition is distributed via web download.

 

2019 Subscription Members will automatically receive copies of this corpus provided they have submitted a completed copy of the special license agreement. 2019 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data at no cost.

 

*

 

(3) First DIHARD Challenge Development - Eight Sources was developed by LDC and contains approximately 17 hours of English and Chinese speech data along with corresponding annotations used in support of the First DIHARD Challenge. This release, when combined with First DIHARD Challenge Development - SEEDLingS (LDC2019S10), contains the development set audio data and annotation (diarization, segmentation) as well as the official scoring tool.

 

The First DIHARD Challenge was an attempt to reinvigorate work on diarization through a shared task focusing on 'hard' diarization; that is, speech diarization for challenging corpora where there was an expectation that existing state-of-the-art systems would fare poorly. As such, it included speech from a wide sampling of domains representing diversity in number of speakers, speaker demographics, interaction style, recording quality, and environmental conditions as follows (all sources are in English unless otherwise indicated):

 

  • Autism Diagnostic Observation Schedule (ADOS) interviews
  • DCIEM/HCRC map task (LDC96S38)
  • Audiobook recordings from LibriVox
  • Meeting speech from 2004 Spring NIST Rich Transcription (RT-04S) Development (LDC2007S11) and Evaluation (LDC2007S12) releases.
  • 2001 U.S. Supreme Court oral arguments
  • Sociolinguistic interviews from SLX Corpus of Classic Sociolinguistic Interviews (LDC2003T15)
  • Chinese video collected by LDC as part of the Video Annotation for Speech Technologies (VAST) project
  • YouthPoint radio interviews
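
As a hedged illustration of working with diarization annotation of this kind, the sketch below (Python) parses NIST RTTM-style speaker segments, the format commonly used for diarization references; the exact annotation files in this release may differ:

    from collections import namedtuple

    Segment = namedtuple("Segment", "recording onset duration speaker")

    def read_rttm(path):
        """Collect SPEAKER segments from an RTTM file."""
        segments = []
        with open(path, encoding="utf-8") as f:
            for line in f:
                fields = line.split()
                if fields and fields[0] == "SPEAKER":
                    segments.append(Segment(recording=fields[1],
                                            onset=float(fields[3]),
                                            duration=float(fields[4]),
                                            speaker=fields[7]))
        return segments

    # Hypothetical usage: total speech time per speaker in one recording.
    # from collections import Counter
    # talk_time = Counter()
    # for seg in read_rttm("dev.rttm"):
    #     talk_time[seg.speaker] += seg.duration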

 

 

 

First DIHARD Challenge Development - Eight Sources is distributed via web download.

 

2019 Subscription Members will automatically receive copies of this corpus.  2019 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for $300.

 

*

 

(4) First DIHARD Challenge Development - SEEDLingS was developed by Duke University and LDC and contains approximately two hours of English child language recordings along with corresponding annotations used in support of the First DIHARD Challenge. This release, when combined with First DIHARD Challenge Development - Eight Sources (LDC2019S09), contains the development set audio data and annotation (diarization, segmentation) as well as the official scoring tool.

 

The source data was drawn from the SEEDLingS (The Study of Environmental Effects on Developing Linguistic Skills) corpus, designed to investigate how infants' early linguistic and environmental input plays a role in their learning. Recordings for SEEDLingS were generated in the home environment of 44 infants from 6-18 months of age in the Rochester, New York, area. A subset of that data was annotated by LDC for use in the First DIHARD Challenge.

 

The First DIHARD Challenge was an attempt to reinvigorate work on diarization through a shared task focusing on 'hard' diarization; that is, speech diarization for challenging corpora where there was an expectation that existing state-of-the-art systems would fare poorly. As such, it included speech from a wide sampling of domains representing diversity in number of speakers, speaker demographics, interaction style, recording quality, and environmental conditions.

 

First DIHARD Challenge Development – SEEDLingS is distributed via web download.

 

2019 Subscription Members will receive copies of this corpus provided they have submitted a completed copy of the special license agreement. 2019 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for $50.

 

*

 

 

 

Membership Office

 

Linguistic Data Consortium

 

University of Pennsylvania

 

T: +1-215-573-1275

 

E: ldc@ldc.upenn.edu

 

M: 3600 Market St. Suite 810

 

      Philadelphia, PA 19104

 

 

 

 

 


5-2-2 ELRA - Language Resources Catalogue - Update (July 2019)
We are happy to announce that 2 new Speech resources and 3 new Terminological Resources are now available in our catalogue.

ELRA-S0406 Glissando-sp
ISLRN: 024-286-962-247-6
Glissando-sp includes more than 12 hours of speech in Spanish, recorded under optimal acoustic conditions, orthographically transcribed, phonetically aligned and annotated with prosodic information (location of the stressed syllables and prosodic phrasing). The corpus was recorded by 8 professional speakers and 20 non-professional speakers: 4 'news broadcaster' professional speakers (2 male and 2 female), 4 'advertising' professional speakers (2 male and 2 female), and 20 non-professional speakers (10 male and 10 female). Glissando-sp is made of three subcorpora: readings of real news texts (provided by the 'Cadena Ser' radio station), interactions between two speakers oriented to a specific goal in the domain of information requests, and conversations between people who have some degree of familiarity with each other.
For more information, see: http://catalog.elra.info/en-us/repository/browse/ELRA-S0406/

ELRA-S0407 Glissando-ca
ISLRN: 780-617-066-913-1
Glissando-ca includes more than 12 hours of speech in Catalan, recorded under optimal acoustic conditions, orthographically transcribed, phonetically aligned and annotated with prosodic information (location of the stressed syllables and prosodic phrasing). The corpus was recorded by 8 professional speakers and 20 non-professional speakers: 4 'news broadcaster' professional speakers (2 male and 2 female), 4 'advertising' professional speakers (2 male and 2 female), and 20 non-professional speakers (10 male and 10 female). Glissando-ca is made of three subcorpora: readings of real news texts (provided by the 'Cadena Ser' radio station), interactions between two speakers oriented to a specific goal in the domain of information requests, and conversations between people who have some degree of familiarity with each other.
For more information, see: http://catalog.elra.info/en-us/repository/browse/ELRA-S0407/

ELRA-T0378 English-Persian database of idioms and expressions

ISLRN: 387-435-142-983-6
This database consists of about 30,000 bilingual parallel sentences and phrases in English and Persian (15,000 in each language). It comes with software through which users can search for a word, phrase or chunk and retrieve all idioms and expressions related to the query. The database is presented in Access format and the software runs on Windows systems.
For more information, see: http://catalog.elra.info/en-us/repository/browse/ELRA-T0378/

ELRA-T0379 English-Persian terminology database of computer and IT

ISLRN: 760-940-374-770-6
This bilingual terminology database consists of around 25,000 terms in the fields of computer engineering, computer science and information technology. It comes with software through which users can search for a word, phrase or chunk and retrieve all entries related to the query. The database is presented in Access format and the software runs on Windows systems.
For more information, see: http://catalog.elra.info/en-us/repository/browse/ELRA-T0379/

ELRA-T0380 English-Persian terminology database of management and economics
ISLRN: 188-448-142-468-5
This bilingual terminology database consists of around 15,000 terms in the fields of management and economics. It comes with software through which users can search for a word, phrase or chunk and retrieve all entries related to the query. The main database is presented in Access format and the software itself runs on Windows systems.
For more information, see: http://catalog.elra.info/en-us/repository/browse/ELRA-T0380/


For more information on the catalogue, please contact Valérie Mapelli mailto:mapelli@elda.org

If you would like to enquire about having your resources distributed by ELRA, please do not hesitate to contact us.

Visit our On-line Catalogue: http://catalog.elra.info
Visit the Universal Catalogue: http://universal.elra.info
Archives of ELRA Language Resources Catalogue Updates: http://www.elra.info/en/catalogues/language-resources-announcements/


5-2-3 Speechocean – update (June 2019)

 

TTS Corpora Overview --- Speechocean

 

Speechocean: AI Data Resource and Data Service Provider 

 

To date, more than 30 TTS corpora covering 13 languages have been released. Please see the table below:

 

Serial Number     Language            Gender   Hours
King-TTS-003-1    Mandarin            Female    9.1
King-TTS-003-2    Mandarin            Female    5.96
King-TTS-003      Mandarin            Female   15.06
King-TTS-004      Arabic              Male     11.74
King-TTS-005      Arabic              Male     12.01
King-TTS-006      British English     Female   12.7
King-TTS-007      British English     Male     10.5
King-TTS-008      Spanish             Female   10.44
King-TTS-010      French              Female   10.69
King-TTS-011-1    Mandarin            Female   17.2
King-TTS-011      Mandarin            Female   29.08
King-TTS-013      American English    Female   10.56
King-TTS-014      American English    Male     12.18
King-TTS-015      Italian             Female    9.81
King-TTS-016      Italian             Male      9.79
King-TTS-017      Portuguese          Female   13.44
King-TTS-019      Russian             Female   15.88
King-TTS-020      Russian             Male     13.69
King-TTS-024      Japanese            Male      8.1
King-TTS-025      Mandarin            Male     17.46
King-TTS-026      Hongkong Cantonese  Female    9.37
King-TTS-027      Hongkong Cantonese  Male      9.56
King-TTS-030      British English     Female   12.6
King-TTS-031      Mandarin            Female   45.8
King-TTS-033      American English    Female   19.86
King-TTS-035      American English    Female   10.75
King-TTS-036      Taiwanese Mandarin  Female   27.5
King-TTS-037      Taiwanese Mandarin  Male     29.55
King-TTS-039      American English    Female   13.4
King-TTS-040      Mandarin            Female    1.15
King-TTS-042      Mandarin            Female   34.43
King-TTS-028      Korean              Male     10.9
King-TTS-034      Korean              Female   10.8

 

If you have any further inquiries, please do not hesitate to contact us.

Web: www.speechocean.com

Email: contact@speechocean.com

 

 

 

 


 


 

 


5-2-4 Google's Language Model benchmark
 Here is a brief description of the project.

'The purpose of the project is to make available a standard training and test setup for language modeling experiments.

The training/held-out data was produced from a download at statmt.org using a combination of Bash shell and Perl scripts distributed here.

This also means that your results on this data set are reproducible by the research community at large.

Besides the scripts needed to rebuild the training/held-out data, it also makes available log-probability values for each word in each of ten held-out data sets, for each of the following baseline models:

  • unpruned Katz (1.1B n-grams),
  • pruned Katz (~15M n-grams),
  • unpruned Interpolated Kneser-Ney (1.1B n-grams),
  • pruned Interpolated Kneser-Ney (~15M n-grams)

 

Happy benchmarking!'
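
As a hedged illustration, per-word log-probabilities of this kind can be aggregated into a perplexity figure as follows (the log base and file conventions are assumptions; consult the benchmark's documentation for the actual ones):

    def perplexity(logprobs, base=10.0):
        """Perplexity from per-word log-probabilities (assumed base-10)."""
        avg = sum(logprobs) / len(logprobs)
        return base ** (-avg)

    # Example: three words with log10 probabilities -1.2, -0.8 and -2.3
    # give a perplexity of about 27.
    print(perplexity([-1.2, -0.8, -2.3]))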


5-2-5 Forensic database of voice recordings of 500+ Australian English speakers

Forensic database of voice recordings of 500+ Australian English speakers

We are pleased to announce that the forensic database of voice recordings of 500+ Australian English speakers is now published.

The database was collected by the Forensic Voice Comparison Laboratory, School of Electrical Engineering & Telecommunications, University of New South Wales as part of the Australian Research Council funded Linkage Project on making demonstrably valid and reliable forensic voice comparison a practical everyday reality in Australia. The project was conducted in partnership with: Australian Federal Police,  New South Wales Police,  Queensland Police, National Institute of Forensic Sciences, Australasian Speech Sciences and Technology Association, Guardia Civil, Universidad Autónoma de Madrid.

The database includes multiple non-contemporaneous recordings of most speakers. Each speaker is recorded in three different speaking styles representative of some common styles found in forensic casework. Recordings were made under high-quality conditions, and extraneous noises and crosstalk have been manually removed. The high-quality audio can be processed to reflect recording conditions found in forensic casework.
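
For instance, a minimal sketch of such processing (Python, using the soundfile and scipy packages; the target sample rate and SNR below are illustrative assumptions, not values prescribed by the database):

    import numpy as np
    import soundfile as sf
    from scipy.signal import resample_poly

    def degrade(in_path, out_path, target_sr=8000, snr_db=20.0):
        """Band-limit a high-quality recording and add noise at a given SNR."""
        audio, sr = sf.read(in_path)
        if audio.ndim > 1:                        # mix multi-channel to mono
            audio = audio.mean(axis=1)
        audio = resample_poly(audio, target_sr, sr)
        noise = np.random.randn(len(audio))
        noise *= np.sqrt(audio.var() / (noise.var() * 10 ** (snr_db / 10.0)))
        sf.write(out_path, audio + noise, target_sr)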

The database can be accessed at: http://databases.forensic-voice-comparison.net/


5-2-6 Audio and Electroglottographic speech recordings

 

Audio and Electroglottographic speech recordings from several languages

We are happy to announce the public availability of speech recordings made as part of the UCLA project 'Production and Perception of Linguistic Voice Quality'.

http://www.phonetics.ucla.edu/voiceproject/voice.html

Audio and EGG recordings are available for Bo, Gujarati, Hmong, Mandarin, Black Miao, Southern Yi, Santiago Matatlan/ San Juan Guelavia Zapotec; audio recordings (no EGG) are available for English and Mandarin. Recordings of Jalapa Mazatec extracted from the UCLA Phonetic Archive are also posted. All recordings are accompanied by explanatory notes and wordlists, and most are accompanied by Praat textgrids that locate target segments of interest to our project.

Analysis software developed as part of the project – VoiceSauce for audio analysis and EggWorks for EGG analysis – and all project publications are also available from this site. All preliminary analyses of the recordings using these tools (i.e. acoustic and EGG parameter values extracted from the recordings) are posted on the site in large data spreadsheets.

All of these materials are made freely available under a Creative Commons Attribution-NonCommercial-ShareAlike-3.0 Unported License.

This project was funded by NSF grant BCS-0720304 to Pat Keating, Abeer Alwan and Jody Kreiman of UCLA, and Christina Esposito of Macalester College.

Pat Keating (UCLA)


5-2-7 EEG, face tracking and audio 24 GB data set Kara One, Toronto, Canada

We are making 24 GB of a new dataset, called Kara One, freely available. This database combines 3 modalities (EEG, face tracking, and audio) during imagined and articulated speech using phonologically-relevant phonemic and single-word prompts. It is the result of a collaboration between the Toronto Rehabilitation Institute (in the University Health Network) and the Department of Computer Science at the University of Toronto.

 

In the associated paper (abstract below), we show how to accurately classify imagined phonological categories solely from EEG data. Specifically, we obtain up to 90% accuracy in classifying imagined consonants from imagined vowels and up to 95% accuracy in classifying stimulus from active imagination states using advanced deep-belief networks.
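
A minimal sketch of the kind of baseline binary classification referred to (a scikit-learn SVM on pre-computed feature vectors); the feature extraction and deep-belief network used in the paper are not reproduced here, and the arrays below are random placeholders standing in for real EEG features:

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # X: one feature vector per trial, y: binary labels (e.g. consonant vs vowel).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 64))
    y = rng.integers(0, 2, size=200)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    print(cross_val_score(clf, X, y, cv=5).mean())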

 

Data from 14 participants are available here: http://www.cs.toronto.edu/~complingweb/data/karaOne/karaOne.html.

 

If you have any questions, please contact Frank Rudzicz at frank@cs.toronto.edu.

 

Best regards,

Frank

 

 

PAPER: Shunan Zhao and Frank Rudzicz (2015) Classifying phonological categories in imagined and articulated speech. In Proceedings of ICASSP 2015, Brisbane, Australia.

ABSTRACT: This paper presents a new dataset combining 3 modalities (EEG, facial, and audio) during imagined and vocalized phonemic and single-word prompts. We pre-process the EEG data, compute features for all 3 modalities, and perform binary classification of phonological categories using a combination of these modalities. For example, a deep-belief network obtains accuracies over 90% on identifying consonants, which is significantly more accurate than two baseline support vector machines. We also classify between the different states (resting, stimuli, active thinking) of the recording, achieving accuracies of 95%. These data may be used to learn multimodal relationships, and to develop silent-speech and brain-computer interfaces.

 


5-2-8 TORGO database free for academic use.

In the spirit of the season, I would like to announce the immediate availability of the TORGO database free, in perpetuity for academic use. This database combines acoustics and electromagnetic articulography from 8 individuals with speech disorders and 7 without, and totals over 18 GB. These data can be used for multimodal models (e.g., for acoustic-articulatory inversion), models of pathology, and augmented speech recognition, for example. More information (and the database itself) can be found here: http://www.cs.toronto.edu/~complingweb/data/TORGO/torgo.html.


5-2-9 Datatang

Datatang is a leading global data provider specializing in customized data solutions, focusing on speech, image and text data collection, annotation and crowdsourcing services.

 

Summary of the new datasets (2018) and a brief plan for 2019.

 

 

 

Speech data (with annotation) completed in 2018:

 

Language                     Dataset Length (Hours)
French                         794
British English                800
Spanish                        435
Italian                      1,440
German                       1,800
Spanish (Mexico/Colombia)      700
Brazilian Portuguese         1,000
European Portuguese          1,000
Russian                      1,000

 

2019 ongoing speech projects:

 

Type                            Project Name
Europeans speaking English      1,000 hours - Spanish speakers of English
                                1,000 hours - French speakers of English
                                1,000 hours - German speakers of English
Call center speech              1,000 hours - Call center speech
Off-the-shelf data expansion    1,000 hours - Chinese speakers of English
                                1,500 hours - Mixed Chinese and English speech data

 

 

 

In addition to the above, more speech data collections are planned, such as Japanese speech data, children's speech data, dialect speech data and so on.

 

Moreover, we will continue to provide these data at competitive prices while maintaining a high accuracy rate.

 

 

 

If you have any questions or need more details, do not hesitate to contact us at jessy@datatang.com.

 

We would be happy to send you a sample or a specification of the data.

 

 

 



5-2-10 Fearless Steps Corpus (University of Texas, Dallas)

Fearless Steps Corpus

John H.L. Hansen, Abhijeet Sangwan, Lakshmish Kaushik, Chengzhu Yu Center for Robust Speech Systems (CRSS), Eric Jonsson School of Engineering, The University of Texas at Dallas (UTD), Richardson, Texas, U.S.A.


NASA's Apollo program is one of mankind's great achievements of the 20th century. CRSS at UT-Dallas has undertaken an enormous Apollo data digitization initiative in which we proposed to digitize the Apollo mission speech data (~100,000 hours) and to develop spoken language technology (SLT) algorithms to analyze and understand various aspects of the conversational speech. Towards this goal, a new 30-track analog audio decoder was designed to decode the 30-track Apollo analog tapes and mounted onto the NASA Soundscriber analog audio decoder (in place of the single-channel decoder). Using the new decoder, all 30 channels of data can be decoded simultaneously, reducing the digitization time significantly.
We have digitized 19,000 hours of data from the Apollo missions (including the entire Apollo-11 mission, most of Apollo-13, Apollo-1, and Gemini-8). This audio archive is named the 'Fearless Steps Corpus' and is a singularly large naturalistic audio corpus. Automated transcripts are generated with an Apollo-specific Deep Neural Network (DNN) based Automatic Speech Recognition (ASR) system together with Apollo-specific language models. A Speaker Identification (SID) system has been designed to identify the speakers, and a complete diarization pipeline has been established to study and develop various SLT tasks.
We will release this corpus for public use as part of our outreach, and we encourage the SLT community to use this opportunity to build naturalistic spoken language technology systems. The data provide ample opportunity to set up challenging tasks in various SLT areas. As part of this outreach, we will be setting up the 'Fearless Challenge' at the upcoming INTERSPEECH 2018. We will define and propose 5 tasks as part of this challenge. The guidelines and challenge data will be released in spring 2018 and will be available for download free of charge. The five challenges are: (1) Automatic Speech Recognition, (2) Speaker Identification, (3) Speech Activity Detection, (4) Speaker Diarization, and (5) Keyword Spotting and Joint Topic/Sentiment Detection.
We look forward to your participation (John.Hansen@utdallas.edu).


5-2-11 SIWIS French Speech Synthesis Database
The SIWIS French Speech Synthesis Database includes high-quality French speech recordings and associated text files, aimed at building TTS systems and at investigating multiple speaking styles and emphasis. A total of 9,750 utterances from various sources, such as parliament debates and novels, were uttered by a professional French voice talent. A subset of the database contains emphasised words in many different contexts. The database includes more than ten hours of speech data and is freely available.
 

5-2-12 JLCorpus - Emotional Speech corpus with primary and secondary emotions
JLCorpus - Emotional Speech corpus with primary and secondary emotions:
 

To further the understanding of the wide array of emotions embedded in human speech, we are introducing an emotional speech corpus. In contrast to existing speech corpora, this corpus was constructed by maintaining an equal distribution of 4 long vowels in New Zealand English. This balance is intended to facilitate emotion-related formant and glottal source feature comparison studies. The corpus also contains 5 secondary emotions along with 5 primary emotions. Secondary emotions are important in Human-Robot Interaction (HRI), where the aim is to model natural conversations between humans and robots, but very few existing speech resources allow them to be studied; this work adds a speech corpus containing some secondary emotions.

Please use the corpus for emotional speech related studies. When you use it please include the citation as:

Jesin James, Li Tian, Catherine Watson, 'An Open Source Emotional Speech Corpus for Human Robot Interaction Applications', in Proc. Interspeech, 2018.

To access the whole corpus including the recording supporting files, click the following link: https://www.kaggle.com/tli725/jl-corpus, (if you have already installed the Kaggle API, you can type the following command to download: kaggle datasets download -d tli725/jl-corpus)

Or if you simply want the raw audio+txt files, click the following link: https://www.kaggle.com/tli725/jl-corpus/downloads/Raw%20JL%20corpus%20(unchecked%20and%20unannotated).rar/4

The corpus was evaluated by a large-scale human perception test with 120 participants. The links to the surveys are here. For the primary emotion corpus: https://auckland.au1.qualtrics.com/jfe/form/SV_8ewmOCgOFCHpAj3

For the secondary emotion corpus: https://auckland.au1.qualtrics.com/jfe/form/SV_eVDINp8WkKpsPsh

These surveys will give an overall idea about the type of recordings in the corpus.

The perceptually verified and annotated JL corpus will be given public access soon.


5-2-13 OPENGLOT – An open environment for the evaluation of glottal inverse filtering

OPENGLOT – An open environment for the evaluation of glottal inverse filtering

 

OPENGLOT is a publicly available database that was designed primarily for the evaluation of glottal inverse filtering algorithms. In addition, the database can be used to evaluate formant estimation methods. OPENGLOT consists of four repositories. Repository I contains synthetic glottal flow waveforms and speech signals generated by using the Liljencrants–Fant (LF) waveform as an excitation and an all-pole vocal tract model. Repository II contains glottal flow and speech pressure signals generated using physical modelling of human speech production. Repository III contains pairs of glottal excitation and speech pressure signals generated by exciting 3D-printed plastic vocal tract replicas with LF excitations via a loudspeaker. Finally, Repository IV contains multichannel recordings (speech pressure signal, EGG, high-speed video of the vocal folds) of natural speech production.
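
A hedged sketch of the kind of method OPENGLOT is meant to evaluate: a toy LPC-based glottal inverse filter (Python, assuming the librosa and scipy packages; the order rule of thumb and file name are illustrative only, not part of OPENGLOT):

    import librosa
    from scipy.signal import lfilter

    def inverse_filter(speech, sr, lpc_order=None):
        """Fit an all-pole vocal tract model and inverse-filter the speech."""
        if lpc_order is None:
            lpc_order = 2 + sr // 1000            # common rule of thumb
        a = librosa.lpc(speech, order=lpc_order)  # prediction filter, a[0] == 1
        return lfilter(a, [1.0], speech)          # approximate glottal source

    # Hypothetical usage:
    # y, sr = librosa.load("vowel.wav", sr=None)
    # residual = inverse_filter(y, sr)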

 

OPENGLOT is available at:

http://research.spa.aalto.fi/projects/openglot/


5-2-14 Corpus Rhapsodie

We are pleased to announce the publication of a book devoted to the Rhapsodie treebank, a 33,000-word corpus of spoken French finely annotated for prosody and syntax.

Access to the publication: https://benjamins.com/catalog/scl.89 (see attached flyer)

Access to the treebank: https://www.projet-rhapsodie.fr/
The freely accessible data are distributed under a Creative Commons licence.
The site also provides access to the annotation guides.



