ISCApad #264
Wednesday, June 10, 2020, by Chris Wellekens
5-2-1 | Linguistic Data Consortium (LDC) update (May 2020)
In this newsletter:
New publications:
(2) LORELEI Entity Detection and Linking Knowledge Base was developed by LDC and contains the full LORELEI Entity Detection and Linking (EDL) Knowledge Base (KB) used for all LORELEI Representative Language and Incident Language Pack entity linking annotation. The LORELEI (Low Resource Languages for Emergent Incidents) Program was concerned with building human language technology for low resource languages in the context of emergent situations like natural disasters or disease outbreaks.
The KB in this release supported the EDL task in LORELEI for four entity types -- geo-political entities (GPE), locations (LOC), persons (PER), and organizations (ORG) -- and contains a total of 10,216,832 entities. There are four inputs to the KB, each designated by a unique 'origin' code in the KB, as follows: GPE and LOC entities from a snapshot of GeoNames, PER entities from the CIA World Leaders List, ORG entities from Appendix B of the CIA World Factbook, and additional entities manually created by LDC for each of the representative and incident languages in the LORELEI Program.
2020 Subscription Members will automatically receive copies of this corpus. 2020 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for $250.
(3) BOLT English Translation Treebank - Chinese Discussion Forum was developed by LDC and consists of 147,432 tokens of web discussion forum data translated from Chinese to English and annotated for part-of-speech and syntactic structure.
The source data is Chinese discussion forum web text collected by LDC in 2011 and 2012, translated into English and released in BOLT Chinese Discussion Forum Parallel Training Data (LDC2017T05). A subset of the translated text -- 148 files representing 147,432 tokens -- was selected for the treebank and annotated for word-level tokenization, part-of-speech, and syntactic structure. Only the translated English text is included in the source data for this release.
Part-of-speech and treebank annotation conformed to Penn Treebank II style, incorporating changes to those guidelines that were developed under the GALE (Global Autonomous Language Exploitation) program. Supplementary guidelines for English treebanks and web text are included with this release.
BOLT English Translation Treebank - Chinese Discussion Forum is distributed via web download.
2020 Subscription Members will automatically receive copies of this corpus. 2020 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for $1750.
(4) Multi-Language Conversational Telephone Speech 2011 -- Mandarin Chinese was developed by LDC and comprises approximately 25 hours of telephone speech in Mandarin Chinese.
The data were collected primarily to support research and technology evaluation in automatic language identification, and portions of these telephone calls were used in the NIST 2011 Language Recognition Evaluation (LRE). Participants were recruited by native speakers who contacted acquaintances in their social network. Those native speakers made one call, of up to 15 minutes, to each acquaintance. The data were collected using LDC's telephone collection infrastructure, comprised of three computer telephony systems. Human auditors labeled calls for callee gender, dialect type, and noise.
LDC has also released other volumes as part of the Multi-Language Conversational Telephone Speech 2011 series.
Multi-Language Conversational Telephone Speech 2011 -- Mandarin Chinese is distributed via web download. 2020 Subscription Members will automatically receive copies of this corpus. 2020 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for $1500.
University of Pennsylvania
T: +1-215-573-1275
E: ldc@ldc.upenn.edu
M: 3600 Market St. Suite 810
Philadelphia, PA 19104
5-2-2 | ELRA - Language Resources Catalogue - Update (May 2020)
We are happy to announce that one new speech resource is now available in our catalogue.
This dataset consists of 4.98 hours of transcribed conversational speech in Mandarin Chinese: 30 conversations uttered by 32 speakers (16 male and 16 female). The audio is sampled at 16 kHz and quantized at 16 bits.
For each conversation there are two close-talking channels, one recorded via a microphone for each speaker, as well as three far-field channels recorded by an iPhone, an Android phone, and a voice recorder, respectively. This corpus may be obtained as a complete set or by selecting specific channels (the two close-talking channels count as one single channel):
ELRA-S0409-01 MDT Mandarin Chinese Conversational Recognition Corpus - complete set
ISLRN: 559-956-475-937-1. For more information, see: http://catalog.elra.info/en-us/repository/browse/ELRA-S0409_01
ELRA-S0409-02 MDT Mandarin Chinese Conversational Recognition Corpus - 1 channel
ISLRN: 234-140-315-272-4. For more information, see: http://catalog.elra.info/en-us/repository/browse/ELRA-S0409_02
ELRA-S0409-03 MDT Mandarin Chinese Conversational Recognition Corpus - 2 channels
ISLRN: 383-054-806-637-3. For more information, see: http://catalog.elra.info/en-us/repository/browse/ELRA-S0409_03
ELRA-S0409-04 MDT Mandarin Chinese Conversational Recognition Corpus - 3 channels
ISLRN: 235-882-638-211-2. For more information, see: http://catalog.elra.info/en-us/repository/browse/ELRA-S0409_04
For more information on the catalogue, please contact Valérie Mapelli: mailto:mapelli@elda.org
If you would like to enquire about having your resources distributed by ELRA, please do not hesitate to contact us.
Visit the Universal Catalogue: http://universal.elra.info
Archives of ELRA Language Resources Catalogue Updates: http://www.elra.info/en/catalogues/language-resources-announcements
5-2-3 | Speechocean – update (August 2019)
5-2-4 | Google's Language Model benchmark
An LM benchmark is available at: https://github.com/ciprian-chelba/1-billion-word-language-modeling-benchmark
Here is a brief description of the project.
'The purpose of the project is to make available a standard training and test setup for language modeling experiments. The training/held-out data was produced from a download at statmt.org using a combination of Bash shell and Perl scripts distributed here. This also means that your results on this data set are reproducible by the research community at large. Besides the scripts needed to rebuild the training/held-out data, it also makes available log-probability values for each word in each of ten held-out data sets, for each of the following baseline models:
ArXiv paper: http://arxiv.org/abs/1312.3005
Happy benchmarking!'
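Since the release exposes per-word log-probabilities for the held-out sets, corpus perplexity follows directly from them. Below is a minimal Python sketch under the assumption that the values are base-10 log-probabilities stored one or more per line in a plain text file; the file name is hypothetical:

    # Compute corpus-level perplexity from per-word log10 probabilities.
    def perplexity(logprob_file):
        total_logprob = 0.0
        word_count = 0
        with open(logprob_file) as f:
            for line in f:
                for value in line.split():
                    total_logprob += float(value)
                    word_count += 1
        # Perplexity is the inverse geometric mean of the word probabilities:
        # PPL = 10 ** (-(1/N) * sum(log10 p(w_i))).
        return 10 ** (-total_logprob / word_count)

    print(perplexity('heldout-000.logprobs'))  # hypothetical file name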
5-2-5 | Forensic database of voice recordings of 500+ Australian English speakers
5-2-6 | Audio and Electroglottographic speech recordings
We are happy to announce the public availability of speech recordings made as part of the UCLA project 'Production and Perception of Linguistic Voice Quality': http://www.phonetics.ucla.edu/voiceproject/voice.html

Audio and EGG recordings are available for Bo, Gujarati, Hmong, Mandarin, Black Miao, Southern Yi, and Santiago Matatlan/San Juan Guelavia Zapotec; audio recordings (no EGG) are available for English and Mandarin. Recordings of Jalapa Mazatec extracted from the UCLA Phonetic Archive are also posted. All recordings are accompanied by explanatory notes and wordlists, and most are accompanied by Praat textgrids that locate target segments of interest to our project.

Analysis software developed as part of the project (VoiceSauce for audio analysis and EggWorks for EGG analysis) and all project publications are also available from this site. All preliminary analyses of the recordings using these tools (i.e., acoustic and EGG parameter values extracted from the recordings) are posted on the site in large data spreadsheets.

All of these materials are made freely available under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. This project was funded by NSF grant BCS-0720304 to Pat Keating, Abeer Alwan and Jody Kreiman of UCLA, and Christina Esposito of Macalester College.

Pat Keating (UCLA)
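Since most recordings come with Praat textgrids locating the target segments, here is a minimal sketch of reading the labeled intervals from one such file. It assumes the third-party Python package textgrid (pip install textgrid) and interval tiers; the file name is hypothetical:

    from textgrid import TextGrid

    # Parse a Praat TextGrid and print every labeled interval, tier by tier.
    tg = TextGrid.fromFile('speaker01_vowels.TextGrid')  # hypothetical file name
    for tier in tg.tiers:
        print('Tier:', tier.name)
        for interval in tier:
            if interval.mark:  # skip unlabeled stretches
                print('  %.3f-%.3f  %s' % (interval.minTime, interval.maxTime, interval.mark))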
5-2-7 | EEG, face tracking, and audio: the 24 GB Kara One data set (Toronto, Canada)
We are making 24 GB of a new dataset, called Kara One, freely available. This database combines 3 modalities (EEG, face tracking, and audio) during imagined and articulated speech using phonologically relevant phonemic and single-word prompts. It is the result of a collaboration between the Toronto Rehabilitation Institute (in the University Health Network) and the Department of Computer Science at the University of Toronto.
In the associated paper (abstract below), we show how to accurately classify imagined phonological categories solely from EEG data. Specifically, we obtain up to 90% accuracy in classifying imagined consonants from imagined vowels and up to 95% accuracy in classifying stimulus from active imagination states using advanced deep-belief networks.
Data from 14 participants are available here: http://www.cs.toronto.edu/~complingweb/data/karaOne/karaOne.html.
If you have any questions, please contact Frank Rudzicz at frank@cs.toronto.edu.
Best regards, Frank
PAPER: Shunan Zhao and Frank Rudzicz (2015) Classifying phonological categories in imagined and articulated speech. In Proceedings of ICASSP 2015, Brisbane, Australia.
ABSTRACT: This paper presents a new dataset combining 3 modalities (EEG, facial, and audio) during imagined and vocalized phonemic and single-word prompts. We pre-process the EEG data, compute features for all 3 modalities, and perform binary classification of phonological categories using a combination of these modalities. For example, a deep-belief network obtains accuracies over 90% on identifying consonants, which is significantly more accurate than two baseline support vector machines. We also classify between the different states (resting, stimuli, active thinking) of the recording, achieving accuracies of 95%. These data may be used to learn multimodal relationships, and to develop silent-speech and brain-computer interfaces.
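As a rough illustration of the binary classification setup described in the abstract above (not the authors' deep-belief-network pipeline, and with random placeholder features instead of real EEG data), a consonant-versus-vowel support vector machine baseline might look like this in Python with scikit-learn:

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    # X: one row of EEG features per imagined-speech trial; y: 0 = consonant, 1 = vowel.
    # Random placeholders stand in for features actually computed from Kara One.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 64))
    y = rng.integers(0, 2, size=200)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel='rbf'))
    clf.fit(X_train, y_train)
    print('held-out accuracy:', clf.score(X_test, y_test))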
5-2-8 | TORGO database free for academic use
In the spirit of the season, I would like to announce the immediate availability of the TORGO database, free in perpetuity for academic use. This database combines acoustics and electromagnetic articulography from 8 individuals with speech disorders and 7 without, and totals over 18 GB. These data can be used for multimodal models (e.g., for acoustic-articulatory inversion), models of pathology, and augmented speech recognition, for example. More information (and the database itself) can be found here: http://www.cs.toronto.edu/~complingweb/data/TORGO/torgo.html.
5-2-9 | Datatang
Datatang is a leading global data provider specializing in customized data solutions, focusing on a variety of speech, image, and text data collection, annotation, and crowdsourcing services.
Summary of the new datasets (2018) and a brief plan for 2019.
* Speech data (with annotation) that we finished in 2018
* Ongoing speech projects for 2019
On top of the above, there are more planned speech data collections, such as Japanese speech data, children's speech data, dialect speech data, and so on.
What is more, we will continue to provide these data at competitive prices while maintaining a high accuracy rate.
If you have any questions or need more details, do not hesitate to contact us at jessy@datatang.com.
We would be happy to send you a sample or a specification of the data.
5-2-10 | Fearless Steps Corpus (University of Texas at Dallas)
John H.L. Hansen, Abhijeet Sangwan, Lakshmish Kaushik, Chengzhu Yu
Center for Robust Speech Systems (CRSS), Erik Jonsson School of Engineering, The University of Texas at Dallas (UTD), Richardson, Texas, U.S.A.
5-2-11 | SIWIS French Speech Synthesis Database
The SIWIS French Speech Synthesis Database includes high-quality French speech recordings and associated text files, aimed at building TTS systems and at investigating multiple styles and emphasis. A total of 9750 utterances from various sources, such as parliament debates and novels, were uttered by a professional French voice talent. A subset of the database contains emphasised words in many different contexts. The database includes more than ten hours of speech data and is freely available.
5-2-12 | JL Corpus - Emotional speech corpus with primary and secondary emotions
To further our understanding of the wide array of emotions embedded in human speech, we are introducing an emotional speech corpus. In contrast to existing speech corpora, this corpus was constructed by maintaining an equal distribution of 4 long vowels in New Zealand English. This balance is intended to facilitate studies comparing emotion-related formant and glottal source features. The corpus also has 5 secondary emotions along with 5 primary emotions. Secondary emotions are important in Human-Robot Interaction (HRI), where the aim is to model natural conversations among humans and robots, but there are very few existing speech resources for studying them; this work adds a speech corpus containing some secondary emotions.

Please use the corpus for emotional speech related studies. When you use it, please cite: Jesin James, Li Tian, Catherine Watson, 'An Open Source Emotional Speech Corpus for Human Robot Interaction Applications', in Proc. Interspeech, 2018.

To access the whole corpus, including the recording supporting files, follow this link: https://www.kaggle.com/tli725/jl-corpus (if you have already installed the Kaggle API, you can download it with: kaggle datasets download -d tli725/jl-corpus; see the sketch below). If you simply want the raw audio+txt files, follow this link: https://www.kaggle.com/tli725/jl-corpus/downloads/Raw%20JL%20corpus%20(unchecked%20and%20unannotated).rar/4

The corpus was evaluated by a large-scale human perception test with 120 participants. The links to the surveys are here:
For the primary emotion corpus: https://auckland.au1.qualtrics.com/jfe/form/SV_8ewmOCgOFCHpAj3
For the secondary emotion corpus: https://auckland.au1.qualtrics.com/jfe/form/SV_eVDINp8WkKpsPsh
These surveys give an overall idea of the type of recordings in the corpus. The perceptually verified and annotated JL corpus will be given public access soon.
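The Kaggle CLI call quoted above can also be scripted. Here is a minimal Python sketch assuming the kaggle command-line tool is installed and configured with an API token; the output directory name is hypothetical:

    import subprocess
    import zipfile
    from pathlib import Path

    # Same call as the CLI command quoted above, run from Python.
    subprocess.run(
        ['kaggle', 'datasets', 'download', '-d', 'tli725/jl-corpus', '-p', 'jl_corpus'],
        check=True,
    )

    # The Kaggle CLI delivers a zip archive; unpack it in place.
    for archive in Path('jl_corpus').glob('*.zip'):
        with zipfile.ZipFile(archive) as zf:
            zf.extractall('jl_corpus')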
5-2-13 | OPENGLOT - An open environment for the evaluation of glottal inverse filtering
OPENGLOT is a publicly available database that was designed primarily for the evaluation of glottal inverse filtering algorithms. In addition, the database can be used to evaluate formant estimation methods. OPENGLOT consists of four repositories. Repository I contains synthetic glottal flow waveforms and speech signals generated by using the Liljencrants-Fant (LF) waveform as an excitation and an all-pole vocal tract model. Repository II contains glottal flow and speech pressure signals generated using physical modelling of human speech production. Repository III contains pairs of glottal excitation and speech pressure signals generated by exciting a 3D-printed plastic vocal tract replica with LF excitations via a loudspeaker. Finally, Repository IV contains multichannel recordings (speech pressure signal, EGG, high-speed video of the vocal folds) of natural speech production.
OPENGLOT is available at: http://research.spa.aalto.fi/projects/openglot/
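As a toy illustration of the synthesis recipe behind Repository I (a glottal excitation driving an all-pole vocal tract filter), here is a sketch in Python with NumPy/SciPy. It uses a simplified Rosenberg-style pulse rather than a true LF waveform, and all parameter values are illustrative:

    import numpy as np
    from scipy.signal import lfilter

    fs, f0, dur = 16000, 120, 0.5  # sample rate (Hz), pitch (Hz), duration (s)

    # Simplified Rosenberg-style glottal pulse: a stand-in for the LF model.
    period = int(fs / f0)
    n_open, n_close = int(0.6 * period), int(0.3 * period)
    pulse = np.concatenate([
        0.5 * (1 - np.cos(np.pi * np.arange(n_open) / n_open)),  # opening phase
        np.cos(0.5 * np.pi * np.arange(n_close) / n_close),      # closing phase
    ])
    excitation = np.zeros(int(dur * f0) * period)
    for k in range(int(dur * f0)):
        excitation[k * period : k * period + len(pulse)] = pulse

    # All-pole vocal tract: one pole pair per formant (roughly /a/-like values).
    a = np.array([1.0])
    for f, bw in [(700, 80), (1150, 90), (2900, 120)]:
        r = np.exp(-np.pi * bw / fs)
        a = np.convolve(a, [1.0, -2 * r * np.cos(2 * np.pi * f / fs), r * r])

    speech = lfilter([1.0], a, excitation)  # synthetic speech pressure signal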
5-2-14 | Corpus Rhapsodie
We are happy to announce the publication of a book devoted to the Rhapsodie corpus.
5-2-15 | The My Science Tutor Children's Conversational Speech Corpus (MyST Corpus), Boulder Learning Inc.
The My Science Tutor Children's Conversational Speech Corpus (MyST Corpus) is the world's largest English children's speech corpus. It is freely available to the research community for research use. Companies can acquire the corpus for $10,000.

The MyST Corpus was collected over a 10-year period, with support from over $9 million in grants from the US National Science Foundation and Department of Education, awarded to Boulder Learning Inc. (Wayne Ward, Principal Investigator). The MyST corpus contains speech collected from 1,374 third, fourth and fifth grade students. The students engaged in spoken dialogs with a virtual science tutor in 8 areas of science. A total of 11,398 student sessions of 15 to 20 minutes produced a total of 244,069 utterances. 42% of the utterances have been transcribed at the word level. The corpus is partitioned into training and test sets to support comparison of research results across labs. All parents and students signed consent forms, approved by the University of Colorado's Institutional Review Board, that authorize distribution of the corpus for research and commercial use.

The MyST children's speech corpus contains approximately ten times as many spoken utterances as all other English children's speech corpora combined (see https://en.wikipedia.org/wiki/List_of_children%27s_speech_corpora). Additional information about the corpus, and instructions for how to acquire the corpus (and samples of the speech data), can be found on the Boulder Learning Web site at http://boulderlearning.com/request-the-myst-corpus/.
5-2-16 | HARVARD speech corpus - native British English speaker
5-2-17 | Magic Data Technology Kid Voice TTS Corpus in Mandarin Chinese (November 2019)
Magic Data Technology is one of the leading artificial intelligence data service providers in the world. The company is committed to providing a wide range of customized data services in the fields of speech recognition, intelligent imaging, and natural language understanding.
This corpus was recorded by a four-year-old Chinese girl born in Beijing, China. We are publishing 15 minutes of speech data from the corpus for non-commercial use.
The contents and the corresponding descriptions of the corpus:
The corpus aims to help researchers in the TTS field. It is part of a much larger dataset (the 2.3-hour MAGICDATA Kid Voice TTS Corpus in Mandarin Chinese) recorded in the same environment. This is the first release of this voice!
Please note that this corpus has been released with the authorization of the speaker and her parents.
Samples are available. Do not hesitate to contact us with any questions.
Website: http://www.imagicdatatech.com/index.php/home/dataopensource/data_info/id/360
E-mail: business@magicdatatech.com
5-2-18 | FlauBERT: a French LM
Here is FlauBERT: a French LM trained (on the CNRS Jean Zay supercomputer) on a large and heterogeneous corpus. Along with it comes FLUE, an evaluation setup for French NLP. FlauBERT has been successfully applied to complex tasks (NLI, WSD, parsing). More at https://github.com/getalp/Flaubert
More details in this paper: https://arxiv.org/abs/1912.05372
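For readers who want to try it, the project's GitHub page documents usage through the Hugging Face transformers library. A minimal sketch under that assumption (the checkpoint name flaubert/flaubert_base_cased is taken to be available on the model hub):

    import torch
    from transformers import FlaubertModel, FlaubertTokenizer

    # Load the pretrained French LM and its matching tokenizer.
    model_name = 'flaubert/flaubert_base_cased'
    tokenizer = FlaubertTokenizer.from_pretrained(model_name)
    model = FlaubertModel.from_pretrained(model_name)

    # Encode one French sentence and extract its contextual embeddings.
    token_ids = torch.tensor([tokenizer.encode('Le chat mange une pomme.')])
    with torch.no_grad():
        last_layer = model(token_ids)[0]  # shape: (1, sequence_length, hidden_size)
    print(last_layer.shape)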
5-2-19 | ELRA-S0408 Speechtera Pronunciation Dictionary
ISLRN: 645-563-102-594-8
The SpeechTera Pronunciation Dictionary is a machine-readable pronunciation dictionary for Brazilian Portuguese comprising 737,347 entries. Its phonetic transcription is based on 13 linguistic varieties spoken in Brazil and covers the pronunciation of the frequent word forms found in the transcription data of SpeechTera's speech and text database (literary, newspaper, movies, miscellaneous). Each of the thirteen dialects comprises 56,719 entries.
For more information, see: http://catalog.elra.info/en-us/repository/browse/ELRA-S0408/
For more information on the catalogue, please contact Valérie Mapelli: mailto:mapelli@elda.org
If you would like to enquire about having your resources distributed by ELRA, please do not hesitate to contact us. Visit our On-line Catalogue: http://catalog.elra.info
5-2-20 | Resources of the ELRC Network
Paris, France, April 23, 2020. ELRA is happy to announce that Language Resources collected within the ELRC Network, funded by the European Commission, are now available from the ELRA Catalogue of Language Resources. For more information on the catalogue, please contact Valérie Mapelli.
5-2-21 | Language Resources distribution agreement between ELRA and SpeechOcean
Press Release - Immediate
ELRA and SpeechOcean have signed a new Language Resources distribution agreement. On behalf of ELRA, ELDA has acted as the distribution agency for SpeechOcean since 2007 and has now incorporated 46 new speech resources into the ELRA Catalogue of Language Resources. These resources were designed and collected to boost speech recognition. They cover the following languages:
To find out more about SpeechOcean, please visit the website: http://www.speechocean.com To find out more about ELRA, please visit the website: http://www.elra.info
5-2-22 | Sharing Language Resources via ELRA
ELRA recognises the importance of sharing Language Resources (LRs) and making them available to the community. Since the 2014 edition of LREC, the Language Resources and Evaluation Conference, participants have been offered the possibility to share their LRs (data, tools, web services, etc.) when submitting a paper, uploading them to a special LREC repository set up by ELRA. This effort of sharing LRs, linked to the LRE Map initiative (https://lremap.elra.info) for their description, contributes to creating a common repository where everyone can deposit and share data. The LREC initiative 'Share your LRs' was launched in 2014 in Reykjavik and successfully continued in 2016 in Portorož and 2018 in Miyazaki. The corresponding repositories are available here:
For more information and/or questions, please write to contact@elda.org.