ISCApad #251 |
Sunday, May 12, 2019 by Chris Wellekens |
5-2-1 | Linguistic Data Consortium (LDC) update (April 2019)
In this newsletter:
New publications:
(1) BOLT Egyptian-English Word Alignment -- Discussion Forum Training
(2) Chinese Abstract Meaning Representation 1.0
(3) HAVIC MED Progress Test -- Videos, Metadata and Annotation
LDC at ICASSP 2019
LDC will post conference updates via our Twitter feed and Facebook page. We hope to see you there!
LDC data and commercial technology development
For-profit organizations are reminded that an LDC membership is a prerequisite for obtaining a commercial license to almost all LDC databases. Non-member organizations, including non-member for-profit organizations, cannot use LDC data to develop or test products for commercialization, nor can they use LDC data in any commercial product or for any commercial purpose. LDC data users should consult corpus-specific license agreements for limitations on the use of certain corpora. Visit the Licensing page for further information.
(1) BOLT Egyptian-English Word Alignment -- Discussion Forum Training was developed by LDC and consists of 400,448 words of Egyptian Arabic and English parallel text enhanced with linguistic tags to indicate word relations. The source data in this release consists of discussion forum threads harvested from the Internet by LDC using a combination of manual and automatic processes and is released as BOLT Arabic Discussion Forums (LDC2018T10). The BOLT word alignment task was built on treebank annotation. Egyptian source tree tokens for word alignment were automatically extracted from tree files of BOLT Egyptian Arabic Treebank annotation on the discussion forum data. Human annotators then followed LDC guidelines to link words and phrases in Arabic to those in English. BOLT Egyptian-English Word Alignment -- Discussion Forum Training is distributed via web download. 2019 Subscription Members will automatically receive copies of this corpus. 2019 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for $1,750.

(2) Chinese Abstract Meaning Representation 1.0 was developed by Brandeis University and Nanjing Normal University and comprises semantic representations of a set of Chinese sentences from the weblog and discussion forum portions of Chinese Treebank 8.0 (LDC2013T21). Annotations were applied to 10,149 sentences, with 176 sentences left unannotated. Abstract Meaning Representation (AMR) captures 'who is doing what to whom' in a sentence: each sentence is paired with a graph that represents its whole-sentence meaning in a tree structure (an illustrative example of the notation appears at the end of this LDC update). Chinese AMR is based on the annotation methodology developed for English, with adaptations for handling specific Chinese phenomena. The goal of the Chinese AMR project is to create a large aligned AMR corpus, of which this data set is the first release. For more information about the project, see the Chinese AMR homepage. Chinese Abstract Meaning Representation 1.0 is distributed via web download. 2019 Subscription Members will automatically receive copies of this corpus. 2019 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for $200.

(3) HAVIC MED Progress Test -- Videos, Metadata and Annotation was developed by LDC and comprises approximately 3,650 hours of user-generated videos with annotation and metadata. In a collaboration with NIST (the National Institute of Standards and Technology) to advance multimodal event detection and related technologies, LDC developed a large, heterogeneous, annotated multimodal corpus for HAVIC (the Heterogeneous Audio Visual Internet Collection) that was used in the NIST-sponsored MED (Multimedia Event Detection) task for several years. HAVIC MED Progress Test is a subset of that corpus, specifically a collection of event and background videos originally released to support the 2012-2015 MED tasks. This release consists of videos of various events (event videos) and videos completely unrelated to events (background videos) harvested by a large team of human annotators. Each event video was manually annotated with a set of judgments describing its event properties and other salient features. Background videos were labeled with topic and genre categories. HAVIC MED Progress Test -- Videos, Metadata and Annotation is distributed via hard drive. 2019 Subscription Members will automatically receive copies of this corpus.
2019 Standard Members may request a copy as part of their 16 free membership corpora. This corpus is a members-only release and is not available for non-member licensing. Contact ldc@ldc.upenn.edu for information about membership.
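For readers unfamiliar with the formalism, here is a minimal illustration of AMR's PENMAN notation, using the standard English example 'The boy wants to go'. This example is not taken from the Chinese corpus, and the small Python snippet and its regex are for demonstration only.

import re

# Illustrative AMR (PENMAN notation) for "The boy wants to go".
# Each node is "variable / concept"; :ARGn edges encode who does what to whom;
# re-using the variable "b" makes "boy" both the wanter and the goer.
amr = """
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-01
            :ARG0 b))
"""

# List the concept nodes in the graph (a crude regex, for demonstration only).
print(re.findall(r"/ ([\w-]+)", amr))   # ['want-01', 'boy', 'go-01']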
| |||||||||||||||||||||||||||||||||
5-2-2 | ELRA - Language Resources Catalogue - Update (April 2019) ELRA is happy to announce that 4 new Speech resources, 1 new Written Corpus and 1 new Multilingual Lexicon are now available in our catalogue.
ELRA-S0399 GlobalPhone Multilingual Model Package
ISLRN: 204-945-263-927-6
The GlobalPhone Multilingual Model Package contains about 22 hours of transcribed read speech spoken by native speakers in 22 languages (Arabic, Bulgarian, Chinese-Mandarin, Chinese-Shanghai, Croatian, Czech, French, German, Hausa, Japanese, Korean, Polish, Portuguese (Brazilian), Russian, Spanish (Latin America), Swahili, Swedish, Tamil, Thai, Turkish, Ukrainian, and Vietnamese). The package covers about 1 hour of transcribed speech from 10 speakers (5 male, 5 female) for each of the 22 languages listed above. For more information, see: http://catalog.elra.info/en-us/repository/browse/ELRA-S0399/

ELRA-S0400 GlobalPhone 2000 Speaker Package
ISLRN: 331-592-378-424-7
The GlobalPhone 2000 Speaker Package contains transcribed read speech spoken by 2000 native speakers in 22 languages (Arabic, Bulgarian, Chinese-Mandarin, Chinese-Shanghai, Croatian, Czech, French, German, Hausa, Japanese, Korean, Polish, Portuguese (Brazilian), Russian, Spanish (Latin America), Swahili, Swedish, Tamil, Thai, Turkish, Ukrainian, and Vietnamese). The package covers about 9,000 randomly selected utterances read by 2000 native speakers in 22 languages, i.e. on average 4.5 utterances (corresponding to about 40 seconds of speech) per speaker, amounting to a total of 22 hours of speech. For more information, see: http://catalog.elra.info/en-us/repository/browse/ELRA-S0400/

ELRA-S0402 Speaking atlas of the regional languages of France
ISLRN: 112-393-061-014-3
The Speaking atlas of the regional languages of France offers the same Aesop's fable read in French and in a number of varieties of the languages of France. This work, which has a scientific and heritage dimension, highlights the linguistic diversity of Metropolitan France and the Overseas Territories through recordings collected in the field and presented via an interactive map, together with their orthographic transcriptions. As far as Occitan is concerned, about sixty varieties were collected in Gascony, Languedoc, Provence, northern Occitania and the Linguistic Crescent. Varieties of Basque, Breton, Franconian, West Flemish, Alsatian, Corsican, Catalan, Francoprovençal and Oïl languages are also provided, as well as about fifty languages of the French overseas territories and non-territorial languages such as Rromani and French Sign Language. For more information, see: http://catalog.elra.info/en-us/repository/browse/ELRA-S0402/

ELRA-S0403 CLE Pakistan Urdu Speech Corpus
ISLRN: 572-070-066-634-8
This corpus consists of phonetically rich Urdu sentences plus additional sentences covering telephone numbers, addresses and personal names. The speech was recorded with a variety of microphone types, at a sampling rate of 16 kHz. Each utterance is stored in a separate file and is accompanied by its orthographic transcription file in Unicode. For more information, see: http://catalog.elra.info/en-us/repository/browse/ELRA-S0403/

ELRA-W0128 ECPC Corpus (European Comparable and Parallel Corpora of Parliamentary Speeches Archive) - set 1
ISLRN: 036-939-425-010-1
This corpus is a collection of XML metatextually tagged corpora containing speeches from European chambers. It is a bilingual, bidirectional written corpus in English and Spanish. This first set (ECPC_EP-05) consists of (1) a 'clean' XML version of the European Parliament's 2005 daily sessions; (2) a POS-tagged version of the 2005 daily sessions; and (3) a sentence-aligned version of the 2005 daily sessions. In its raw format, ECPC_EP-05 contains 3,668,476 tokens/words (excluding tagging) in English distributed over 60 UTF-8 files and 3,993,867 tokens/words (excluding tagging) in Spanish distributed over 60 UTF-8 files. For more information, see: http://catalog.elra.info/en-us/repository/browse/ELRA-W0128/

ELRA-M0051 EnToSSLNE - a Lexicon of Parallel Named Entities from English to South Slavic Languages
ISLRN: 690-348-503-270-1
This lexicon consists of 26,155 parallel named entities in seven languages: English and six South Slavic languages (Bosnian, Bulgarian, Croatian, Macedonian, Serbian and Slovenian). The lexicon also contains multiword entries which are not strictly named entities but contain a word which is. Slovenian, Croatian and Bosnian are written in Latin script, Macedonian and Bulgarian in Cyrillic. Serbian is specific in that it may be written in two scripts (Cyrillic and Latin) and two dialects (ekavica and ijekavica); this lexicon uses the Serbian ekavica variant in Cyrillic script. The lexicon comes in two formats: CSV and XML. For more information, see: http://catalog.elra.info/en-us/repository/browse/ELRA-M0051/

For more information on the catalogue, please contact Valérie Mapelli mailto:mapelli@elda.org
If you would like to enquire about having your resources distributed by ELRA, please do not hesitate to contact us.
Visit our On-line Catalogue: http://catalog.elra.info Visit the Universal Catalogue: http://universal.elra.info Archives of ELRA Language Resources Catalogue Updates: http://www.elra.info/en/catalogues/language-resources-announcements/
| |||||||||||||||||||||||||||||||||
5-2-3 | Speechocean – update (April 2019)
Cantonese Speech Recognition Corpus --- Speechocean
Speechocean: A.I. Data Resource & Service Supplier
At present, we can provide around 8,000 hours of Cantonese speech recognition data, covering both Mainland Cantonese and Hong Kong Cantonese. Please see the details here: http://kingline.speechocean.com
More Information
If you have any further inquiries, please do not hesitate to contact us. Web: http://en.speechocean.com/ Email: marketing@speechocean.com
| |||||||||||||||||||||||||||||||||
5-2-4 | Google's Language Model benchmark An LM benchmark is available at: https://github.com/ciprian-chelba/1-billion-word-language-modeling-benchmark
Here is a brief description of the project.
'The purpose of the project is to make available a standard training and test setup for language modeling experiments. The training/held-out data was produced from a download at statmt.org using a combination of Bash shell and Perl scripts distributed here. This also means that your results on this data set are reproducible by the research community at large. Besides the scripts needed to rebuild the training/held-out data, it also makes available log-probability values for each word in each of ten held-out data sets, for each of several baseline models.
ArXiv paper: http://arxiv.org/abs/1312.3005
Happy benchmarking!'
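As a rough illustration of how such per-word log-probability files can be used, the sketch below computes corpus perplexity; the one-value-per-line file format, the filename, and the log base are assumptions here, not the benchmark's documented conventions.

def perplexity(logprob_file, log_base=10.0):
    """Perplexity = log_base ** (-average per-word log-probability)."""
    logprobs = []
    with open(logprob_file, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                logprobs.append(float(line))
    avg = sum(logprobs) / len(logprobs)
    return log_base ** (-avg)

# Hypothetical usage -- the filename is a placeholder:
# print(perplexity("heldout-00000.logprobs"))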
| |||||||||||||||||||||||||||||||||
5-2-5 | Forensic database of voice recordings of 500+ Australian English speakers
| |||||||||||||||||||||||||||||||||
5-2-6 | Audio and Electroglottographic speech recordings
Audio and Electroglottographic speech recordings from several languages
We are happy to announce the public availability of speech recordings made as part of the UCLA project 'Production and Perception of Linguistic Voice Quality'. http://www.phonetics.ucla.edu/voiceproject/voice.html Audio and EGG recordings are available for Bo, Gujarati, Hmong, Mandarin, Black Miao, Southern Yi, Santiago Matatlan / San Juan Guelavia Zapotec; audio recordings (no EGG) are available for English and Mandarin. Recordings of Jalapa Mazatec extracted from the UCLA Phonetic Archive are also posted. All recordings are accompanied by explanatory notes and wordlists, and most are accompanied by Praat textgrids that locate target segments of interest to our project. Analysis software developed as part of the project – VoiceSauce for audio analysis and EggWorks for EGG analysis – and all project publications are also available from this site. All preliminary analyses of the recordings using these tools (i.e. acoustic and EGG parameter values extracted from the recordings) are posted on the site in large data spreadsheets. All of these materials are made freely available under a Creative Commons Attribution-NonCommercial-ShareAlike-3.0 Unported License. This project was funded by NSF grant BCS-0720304 to Pat Keating, Abeer Alwan and Jody Kreiman of UCLA, and Christina Esposito of Macalester College. Pat Keating (UCLA)
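VoiceSauce and EggWorks are the project's own analysis tools; purely as an illustration of the kind of acoustic measurement involved, the sketch below extracts an F0 track from an audio recording using the parselmouth Python interface to Praat. The package and the filename are assumptions, not part of this release.

import parselmouth

snd = parselmouth.Sound("speaker01_word03.wav")   # placeholder filename
pitch = snd.to_pitch(time_step=0.005)
f0 = pitch.selected_array["frequency"]            # 0.0 where unvoiced
times = pitch.xs()
for t, hz in zip(times[:10], f0[:10]):
    print(f"{t:.3f} s  {hz:.1f} Hz")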
| |||||||||||||||||||||||||||||||||
5-2-7 | EEG, face tracking and audio: a 24 GB data set (Kara One, Toronto, Canada)
We are making 24 GB of a new dataset, called Kara One, freely available. This database combines 3 modalities (EEG, face tracking, and audio) during imagined and articulated speech, using phonologically relevant phonemic and single-word prompts. It is the result of a collaboration between the Toronto Rehabilitation Institute (in the University Health Network) and the Department of Computer Science at the University of Toronto.
In the associated paper (abstract below), we show how to accurately classify imagined phonological categories solely from EEG data. Specifically, we obtain up to 90% accuracy in classifying imagined consonants from imagined vowels and up to 95% accuracy in classifying stimulus from active imagination states using advanced deep-belief networks.
Data from 14 participants are available here: http://www.cs.toronto.edu/~complingweb/data/karaOne/karaOne.html.
If you have any questions, please contact Frank Rudzicz at frank@cs.toronto.edu.
Best regards, Frank
PAPER: Shunan Zhao and Frank Rudzicz (2015) Classifying phonological categories in imagined and articulated speech. In Proceedings of ICASSP 2015, Brisbane, Australia.
ABSTRACT: This paper presents a new dataset combining 3 modalities (EEG, facial, and audio) during imagined and vocalized phonemic and single-word prompts. We pre-process the EEG data, compute features for all 3 modalities, and perform binary classification of phonological categories using a combination of these modalities. For example, a deep-belief network obtains accuracies over 90% on identifying consonants, which is significantly more accurate than two baseline support vector machines. We also classify between the different states (resting, stimuli, active thinking) of the recording, achieving accuracies of 95%. These data may be used to learn multimodal relationships, and to develop silent-speech and brain-computer interfaces.
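As a rough sketch in the spirit of the SVM baselines mentioned in the abstract, the Python snippet below runs a cross-validated binary classifier on pre-computed EEG feature vectors. The feature matrix, labels, and dimensions are placeholders, not the features actually computed in the paper.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))      # placeholder EEG feature matrix (trials x features)
y = rng.integers(0, 2, size=200)    # placeholder binary labels (e.g. consonant vs. vowel prompt)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))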
| |||||||||||||||||||||||||||||||||
5-2-8 | TORGO database free for academic use. In the spirit of the season, I would like to announce the immediate availability of the TORGO database, free in perpetuity for academic use. This database combines acoustics and electromagnetic articulography from 8 individuals with speech disorders and 7 without, and totals over 18 GB. These data can be used for multimodal models (e.g., for acoustic-articulatory inversion), models of pathology, and augmented speech recognition, for example. More information (and the database itself) can be found here: http://www.cs.toronto.edu/~complingweb/data/TORGO/torgo.html.
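As a rough, hypothetical illustration of what acoustic-articulatory inversion involves (none of this is from the TORGO distribution itself; the arrays and feature dimensions below are placeholders), a frame-wise regression sketch might look like this:

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
acoustic = rng.normal(size=(5000, 39))       # placeholder MFCC-like acoustic frames
articulatory = rng.normal(size=(5000, 12))   # placeholder EMA coordinates (x/y of 6 coils)

# With the real data, acoustic frames would be extracted from the audio and
# time-aligned with the articulography channels before fitting.
X_tr, X_te, y_tr, y_te = train_test_split(acoustic, articulatory, test_size=0.2, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
print("held-out R^2:", model.score(X_te, y_te))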
| |||||||||||||||||||||||||||||||||
5-2-9 | Datatang Datatang is a leading global data provider specializing in customized data solutions, with a focus on speech, image, and text data collection, annotation, and crowdsourcing services.
Summary of the new datasets (2018) and a brief plan for 2019.
- Speech data (with annotation) completed in 2018
- Speech projects ongoing in 2019
On top of the above, more speech data collections are planned, such as Japanese speech data, children's speech data, dialect speech data and so on.
What is more, we will continue to provide these data at a competitive price while maintaining a high accuracy rate.
If you have any questions or need more details, do not hesitate to contact us at jessy@datatang.com.
We would be happy to send you a sample or a specification of the data.
| |||||||||||||||||||||||||||||||||
5-2-10 | Fearless Steps Corpus (University of Texas at Dallas) Fearless Steps Corpus
John H.L. Hansen, Abhijeet Sangwan, Lakshmish Kaushik, Chengzhu Yu
Center for Robust Speech Systems (CRSS), Erik Jonsson School of Engineering and Computer Science, The University of Texas at Dallas (UTD), Richardson, Texas, U.S.A.
| |||||||||||||||||||||||||||||||||
5-2-11 | SIWIS French Speech Synthesis Database The SIWIS French Speech Synthesis Database includes high-quality French speech recordings and associated text files, aimed at building TTS systems and at investigating multiple speaking styles and emphasis. A total of 9,750 utterances from various sources, such as parliament debates and novels, were uttered by a professional French voice talent. A subset of the database contains emphasised words in many different contexts. The database includes more than ten hours of speech data and is freely available.
| |||||||||||||||||||||||||||||||||
5-2-12 | JLCorpus - Emotional Speech corpus with primary and secondary emotions
To further understand the wide array of emotions embedded in human speech, we are introducing an emotional speech corpus. In contrast to existing speech corpora, this corpus was constructed by maintaining an equal distribution of 4 long vowels in New Zealand English. This balance is intended to facilitate emotion-related formant and glottal source feature comparison studies. The corpus also has 5 secondary emotions along with 5 primary emotions. Secondary emotions are important in Human-Robot Interaction (HRI), where the aim is to model natural conversations among humans and robots, but there are very few existing speech resources for studying them; this work adds a speech corpus containing some secondary emotions. Please use the corpus for emotional speech related studies. When you use it, please include the citation: Jesin James, Li Tian, Catherine Watson, 'An Open Source Emotional Speech Corpus for Human Robot Interaction Applications', in Proc. Interspeech, 2018.
To access the whole corpus, including the recording supporting files, click the following link: https://www.kaggle.com/tli725/jl-corpus (if you have already installed the Kaggle API, you can type the following command to download: kaggle datasets download -d tli725/jl-corpus)
Or if you simply want the raw audio+txt files, click the following link: https://www.kaggle.com/tli725/jl-corpus/downloads/Raw%20JL%20corpus%20(unchecked%20and%20unannotated).rar/4
The corpus was evaluated by a large-scale human perception test with 120 participants. The links to the surveys are here:
For the primary emotion corpus: https://auckland.au1.qualtrics.com/jfe/form/SV_8ewmOCgOFCHpAj3
For the secondary emotion corpus: https://auckland.au1.qualtrics.com/jfe/form/SV_eVDINp8WkKpsPsh
These surveys will give an overall idea about the type of recordings in the corpus. The perceptually verified and annotated JL corpus will be given public access soon.
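As a small, assumed workflow for browsing the raw audio+txt download once extracted (the directory name and the wav/txt pairing convention are guesses; check the supporting files in the Kaggle release for the actual layout), one could pair each recording with its transcription like this:

from pathlib import Path
import soundfile as sf

corpus_dir = Path("jl-corpus-raw")          # placeholder extraction directory
for wav in sorted(corpus_dir.glob("*.wav")):
    txt = wav.with_suffix(".txt")           # assumed same-named transcription file
    transcript = txt.read_text(encoding="utf-8").strip() if txt.exists() else "(no txt found)"
    audio, sr = sf.read(str(wav))
    print(f"{wav.name}: {len(audio)/sr:.2f} s  ->  {transcript}")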
|