ISCApad #249 | Monday, March 11, 2019, by Chris Wellekens
5-1-1 | Pejman Mowlaee et al., 'Phase-Aware Signal Processing in Speech Communication: Theory and Practice', Wiley, 2016.
http://eu.wiley.com/WileyCDA/WileyTitle/productCd-1119238811.html
5-1-2 | Jean Caelen, Anne Xuereb, 'Dialogue : altérité, interaction, énaction', Editions universitaires européennes.
5-1-3 | Tom Bäckström (with Guillaume Fuchs, Sascha Disch, Christian Uhle and Jeremie Lecomte), 'Speech Coding with Code-Excited Linear Prediction', Springer.
5-1-4 | Shinji Watanabe, Marc Delcroix, Florian Metze, John R. Hershey (Eds.), 'New Era for Robust Speech Recognition', Springer. https://link.springer.com/book/10.1007%2F978-3-319-64680-0
5-1-5 | Fabrice Marsac, Rudolph Sock, 'CONSÉCUTIVITÉ ET SIMULTANÉITÉ en Linguistique, Langues et Parole', L'Harmattan, France. We are pleased to announce the publication of the thematic volume 'CONSÉCUTIVITÉ ET SIMULTANÉITÉ en Linguistique, Langues et Parole' in the Collection Dixit Grammatica (L'Harmattan, France).
5-1-6 | Emmanuel Vincent, Tuomas Virtanen, Sharon Gannot (Eds.), 'Audio Source Separation and Speech Enhancement', Wiley. ISBN: 978-1-119-27989-1, October 2018, 504 pages.
5-1-7 | Jen-Tzung Chien, 'Source Separation and Machine Learning', Academic Press.
5-1-8 | Ingo Feldhausen, 'Methods in prosody: A Romance language perspective', Language Science Press (open access). We are pleased to announce the publication of a peer-reviewed volume on research methods in prosody, entitled 'Methods in prosody: A Romance language perspective'. It is published by Language Science Press, an open-access publisher, and can be downloaded free of charge at: http://langsci-press.org/catalog/book/183
The table of contents is as follows:
Introduction
Foreword
Part I: Large corpora and spontaneous speech
1) Using large corpora and computational tools to describe prosody: An ...
2) Intonation of pronominal subjects in Porteño Spanish: Analysis of ...
Part II: Approaches to prosodic analysis
3) Multimodal analyses of audio-visual information: Some methods and ...
4) The realizational coefficient: Devising a method for empirically ...
5) On the role of prosody in disambiguating wh-exclamatives and ...
Part III: Elicitation methods
6) The Discourse Completion Task in Romance prosody research: Status ...
7) Describing the intonation of speech acts in Brazilian Portuguese: ...
Indexes
Please feel free to share this publication with colleagues who may be interested in it. Best regards, Ingo Feldhausen
5-2-1 | Linguistic Data Consortium (LDC) update (February 2019). In this newsletter:
Only two weeks left to enjoy 2019 membership discounts
Spring 2019 LDC Data Scholarship recipients
LDC’s new language game
New publications:
DEFT Chinese Committed Belief Annotation
IARPA Babel Lithuanian Language Pack IARPA-babel304b-v1.0b
Multi-Language Conversational Telephone Speech 2011 -- Arabic Group
Multilingual ATIS
Only two weeks left to enjoy 2019 membership discounts
There is still time to save on 2019 membership fees. Through March 1, all organizations receive a discount on the 2019 membership fee (up to 10%) when they choose to join or renew. For more information on membership benefits, visit Join LDC.
Spring 2019 LDC Data Scholarship recipients
Congratulations to the recipients of LDC's Spring 2019 Data Scholarships:
Colin Annand: University of Cincinnati (USA); PhD, Psychology. Colin is awarded a copy of Switchboard-1 Release 2 for his research involving the relationship between speech patterns and conversation content.
Si Chen: Huazhong University of Science and Technology (China); BS, Communication Engineering. Si is awarded a copy of ACE 2005 Multilingual Training Corpus for his work on event extraction.
Noor-e-Hira: Fatima Jinnah Women University (Pakistan); MSc, Computer Sciences. Noor is awarded a copy of NIST 2008 Open Machine Translation (OpenMT) Evaluation for her research in machine translation.
Matthew Roddy: Trinity College Dublin (Ireland); PhD, Electrical Engineering. Matthew is awarded copies of 2000 HUB5 English Evaluation Speech and Transcripts for his work in spoken dialogue systems.
Ammara Zafar: Fatima Jinnah Women University (Pakistan); MSc, Computer Sciences. Ammara is awarded a copy of NIST 2009 Open Machine Translation (OpenMT) Evaluation for her research in machine translation.
For information about the program, visit the Data Scholarship page.
LDC's new language game
LDC's new language game, NameThatLanguage, tests your skill at recognizing the language spoken in short audio clips. The game includes thousands of clips to prevent memorization and offers a challenge that increases as you progress. In addition to being fun, the game provides useful data on language confusability and linguistic diversity, and game results will be shared freely for research. New clips and more languages continue to be added, providing ongoing challenges and new research data. Help support language research by playing! https://namethatlanguage.org
New publications:
(1) DEFT Chinese Committed Belief Annotation was developed by LDC and consists of approximately 83,000 tokens of Chinese discussion forum text annotated for 'committed belief,' which marks the level of commitment displayed by the author to the truth of the propositions expressed in the text.
DARPA's Deep Exploration and Filtering of Text (DEFT) program aimed to address remaining capability gaps in state-of-the-art natural language processing technologies related to inference, causal relationships, and anomaly detection. LDC supported the DEFT program by collecting, creating, and annotating a variety of data sources.
DEFT Chinese Committed Belief Annotation is distributed via web download.
2019 Subscription Members will automatically receive copies of this corpus. 2019 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for $1000.
*
(2) IARPA Babel Lithuanian Language Pack IARPA-babel304b-v1.0b was developed by Appen for the IARPA (Intelligence Advanced Research Projects Activity) Babel program. It contains approximately 210 hours of Lithuanian conversational and scripted telephone speech collected in 2013 and 2014 along with corresponding transcripts.
The Lithuanian speech in this release represents that spoken in the Aukštaitian and Samogitian dialect regions of Lithuania. The gender distribution among speakers is approximately equal; speakers' ages range from 16 years to 71 years. Calls were made using different telephones (e.g., mobile, landline) from a variety of environments including the street, a home or office, a public place, and inside a vehicle.
IARPA Babel Lithuanian Language Pack IARPA-babel304b-v1.0b is distributed via web download.
2019 Subscription Members will receive copies of this corpus provided they have submitted a completed copy of the special license agreement. 2019 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for $25.
*
(3) Multi-Language Conversational Telephone Speech 2011 -- Arabic Group was developed by LDC and is comprised of approximately 117 hours of telephone speech in distinct dialects of colloquial Arabic: Iraqi, Levantine and Maghrebi.
The data were collected primarily to support research and technology evaluation in automatic language identification, and portions of these telephone calls were used in the NIST 2011 Language Recognition Evaluation (LRE). LRE 2011 focused on language pair discrimination for 24 languages/dialects, some of which could be considered mutually intelligible or closely related.
Multi-Language Conversational Telephone Speech 2011 -- Arabic Group is distributed via web download.
2019 Subscription Members will automatically receive copies of this corpus. 2019 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for $2500.
*
(4) Multilingual ATIS was developed by Google Inc. and consists of 5,871 utterances from ATIS2 (LDC93S5), ATIS3 Training Data (LDC94S19), and ATIS3 Test Data (LDC95S26) annotated and translated into Hindi and Turkish.
The ATIS (Air Travel Information Services) collection was developed to support the research and development of speech understanding systems. Participants were presented with various hypothetical travel planning scenarios and asked to solve them by interacting with partially or completely automated ATIS systems. The resulting utterances were recorded and transcribed. Data were collected in the early 1990s at five US sites: Raytheon BBN, Carnegie Mellon University, MIT Laboratory for Computer Science, the National Institute of Standards and Technology, and SRI International.
The original English utterances were manually translated into Hindi and Turkish. For each utterance, this release also includes the original English text and a machine translation back into English of the manual target-language translation. Each utterance is annotated with named entities via table lookup; markers include city, airline, and airport names, and dates.
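Named-entity annotation via table lookup of this kind can be approximated with a simple gazetteer scan. The Python sketch below is purely illustrative: the gazetteer entries and tag names are invented, not the lookup tables actually used for the corpus.

# Illustrative sketch of named-entity annotation via table lookup.
# The gazetteer and tag names are toy examples, not those shipped
# with Multilingual ATIS.
GAZETTEER = {
    "boston": "CITY",
    "denver": "CITY",
    "american airlines": "AIRLINE",
    "logan": "AIRPORT",
}

def annotate(utterance):
    """Tag longest-matching gazetteer phrases; 'O' marks plain tokens."""
    tokens = utterance.lower().split()
    tagged, i = [], 0
    while i < len(tokens):
        # Try the longest span first so multi-word names win.
        for span in range(min(3, len(tokens) - i), 0, -1):
            phrase = " ".join(tokens[i:i + span])
            if phrase in GAZETTEER:
                tagged.append((phrase, GAZETTEER[phrase]))
                i += span
                break
        else:
            tagged.append((tokens[i], "O"))
            i += 1
    return tagged

print(annotate("show me flights from Boston to Denver on American Airlines"))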
Multilingual ATIS is distributed via web download.
2019 Subscription Members will automatically receive copies of this corpus. 2019 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data at no cost.
Membership Office
University of Pennsylvania
T: +1-215-573-1275
E: ldc@ldc.upenn.edu
M: 3600 Market St., Suite 810, Philadelphia, PA 19104
5-2-2 | ELRA - Language Resources Catalogue - Update (October 2018)
-------------------------------------------------------
We are happy to announce that 2 new Written Corpora and 4 new Speech resources are now available in our catalogue.
ELRA-W0126 Training and test data for Arabizi detection and transliteration
ISLRN: 986-364-744-303-9
The dataset is composed of a collection of mixed English and Arabizi text, intended to train and test a system for the automatic detection of code-switching in mixed English and Arabizi texts, and a set of 3,452 Arabizi tokens manually transliterated into Arabic, intended to train and test a system that performs Arabizi-to-Arabic transliteration.
For more information, see: http://catalog.elra.info/en-us/repository/browse/ELRA-W0126/
ELRA-W0127 Normalized Arabic Fragments for Inestimable Stemming (NAFIS)
ISLRN: 305-450-745-774-1
This is an Arabic stemming gold-standard corpus composed of a collection of 37 sentences, selected to be representative of Arabic stemming tasks and manually annotated. The compiled sentences come from various sources (poems, the holy Quran, books, and periodicals) of diverse kinds (proverbs and dicta, article commentary, religious texts, literature, historical fiction). NAFIS is represented according to the TEI standard.
For more information, see: http://catalog.elra.info/en-us/repository/browse/ELRA-W0127/
ELRA-S0396 Mbochi speech corpus
ISLRN: 747-055-093-447-8
This corpus consists of 5,131 sentences recorded in Mbochi, together with their transcription and French translation, as well as the results of work done during the JSALT workshop: alignments at the phonetic level and various results of unsupervised word segmentation from audio. The audio corpus amounts to 4.5 hours, downsampled to 16 kHz, 16 bits, with Linear PCM encoding. The data are distributed in 2 parts: one for training, consisting of 4,617 sentences, and one for development, consisting of 514 sentences.
For more information, see: http://catalog.elra.info/en-us/repository/browse/ELRA-S0396/
ELRA-S0397 Chinese Mandarin (South) database
ISLRN: 503-886-852-083-2
This database contains the recordings of 1,000 Chinese Mandarin speakers from Southern China (500 males and 500 females), from 18 to 60 years old, recorded in quiet studios. Recordings were made through microphone headsets and consist of 341 hours of audio data (about 30 minutes per speaker), stored in .WAV files as sequences of 48 kHz mono, 16-bit Linear PCM.
For more information, see: http://catalog.elra.info/en-us/repository/browse/ELRA-S0397/
ELRA-S0398 Chinese Mandarin (North) database
ISLRN: 353-548-770-894-7
This database contains the recordings of 500 Chinese Mandarin speakers from Northern China (250 males and 250 females), from 18 to 60 years old, recorded in quiet studios. Recordings were made through microphone headsets and consist of 172 hours of audio data (about 30 minutes per speaker), stored in .WAV files as sequences of 48 kHz mono, 16-bit Linear PCM.
For more information, see: http://catalog.elra.info/en-us/repository/browse/ELRA-S0398/
ELRA-S0401 Persian Audio Dictionary
ISLRN: 133-181-128-420-9
This dictionary consists of more than 50,000 entries (along with almost all word forms and proper names) with corresponding audio files in MP3 and English transliterations. The words have been recorded with standard Persian (Farsi) pronunciation, all by a single speaker. This dictionary is provided with its software.
For more information, see: http://catalog.elra.info/en-us/repository/browse/ELRA-S0401/
For more information on the catalogue, please contact Valérie Mapelli: mapelli@elda.org
If you would like to enquire about having your resources distributed by ELRA, please do not hesitate to contact us.
Visit the Universal Catalogue: http://universal.elra.info
Archives of ELRA Language Resources Catalogue Updates: http://www.elra.info/en/catalogues/language-resources-announcements/
5-2-3 | Speechocean – update (March 2019)
Accented English Speech Recognition Corpus --- Speechocean
Speechocean: The World’s Leading A.I. Data Resource & Service Supplier
At present, we are able to provide data services in more than 110 languages and dialects across the world. For more detailed information, please visit our website: http://kingline.speechocean.com
Contact Us: Email: contact@speechocean.com
5-2-4 | Google's Language Model benchmark. An LM benchmark is available at: https://github.com/ciprian-chelba/1-billion-word-language-modeling-benchmark
Here is a brief description of the project.
'The purpose of the project is to make available a standard training and test setup for language modeling experiments. The training/held-out data was produced from a download at statmt.org using a combination of Bash shell and Perl scripts distributed here. This also means that your results on this data set are reproducible by the research community at large. Besides the scripts needed to rebuild the training/held-out data, it also makes available log-probability values for each word in each of ten held-out data sets, for several baseline models.
ArXiv paper: http://arxiv.org/abs/1312.3005
Happy benchmarking!'
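As a quick reminder of how such per-word log-probabilities are usually turned into the standard evaluation number, the minimal sketch below computes perplexity from a list of per-word log10 probabilities. The base-10 assumption is ours; check the benchmark's output format before relying on it.

def perplexity(log10_probs):
    # Perplexity = 10 ** (negative mean of per-word log10 probabilities).
    return 10 ** (-sum(log10_probs) / len(log10_probs))

# Toy example with three words:
print(perplexity([-2.1, -3.4, -1.7]))  # about 251.2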
5-2-5 | Forensic database of voice recordings of 500+ Australian English speakers
5-2-6 | Audio and Electroglottographic speech recordings
We are happy to announce the public availability of speech recordings made as part of the UCLA project 'Production and Perception of Linguistic Voice Quality': http://www.phonetics.ucla.edu/voiceproject/voice.html
Audio and EGG recordings are available for Bo, Gujarati, Hmong, Mandarin, Black Miao, Southern Yi, and Santiago Matatlan/San Juan Guelavia Zapotec; audio recordings (no EGG) are available for English and Mandarin. Recordings of Jalapa Mazatec extracted from the UCLA Phonetic Archive are also posted. All recordings are accompanied by explanatory notes and wordlists, and most are accompanied by Praat textgrids that locate target segments of interest to our project. Analysis software developed as part of the project (VoiceSauce for audio analysis and EggWorks for EGG analysis) and all project publications are also available from this site. All preliminary analyses of the recordings using these tools (i.e. acoustic and EGG parameter values extracted from the recordings) are posted on the site in large data spreadsheets. All of these materials are made freely available under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. This project was funded by NSF grant BCS-0720304 to Pat Keating, Abeer Alwan and Jody Kreiman of UCLA, and Christina Esposito of Macalester College.
Pat Keating (UCLA)
5-2-7 | EEG, face tracking, and audio: the 24 GB Kara One data set (Toronto, Canada). We are making a new 24 GB dataset, called Kara One, freely available. This database combines 3 modalities (EEG, face tracking, and audio) during imagined and articulated speech using phonologically relevant phonemic and single-word prompts. It is the result of a collaboration between the Toronto Rehabilitation Institute (in the University Health Network) and the Department of Computer Science at the University of Toronto.
In the associated paper (abstract below), we show how to accurately classify imagined phonological categories solely from EEG data. Specifically, we obtain up to 90% accuracy in classifying imagined consonants from imagined vowels and up to 95% accuracy in classifying stimulus from active imagination states using advanced deep-belief networks.
Data from 14 participants are available here: http://www.cs.toronto.edu/~complingweb/data/karaOne/karaOne.html.
If you have any questions, please contact Frank Rudzicz at frank@cs.toronto.edu.
Best regards, Frank
PAPER: Shunan Zhao and Frank Rudzicz (2015) Classifying phonological categories in imagined and articulated speech. In Proceedings of ICASSP 2015, Brisbane, Australia.
ABSTRACT: This paper presents a new dataset combining 3 modalities (EEG, facial, and audio) during imagined and vocalized phonemic and single-word prompts. We pre-process the EEG data, compute features for all 3 modalities, and perform binary classification of phonological categories using a combination of these modalities. For example, a deep-belief network obtains accuracies over 90% on identifying consonants, which is significantly more accurate than two baseline support vector machines. We also classify between the different states (resting, stimuli, active thinking) of the recording, achieving accuracies of 95%. These data may be used to learn multimodal relationships, and to develop silent-speech and brain-computer interfaces.
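As a rough illustration of the kind of binary classification reported above, here is a generic SVM baseline sketch in Python. The feature matrix is random placeholder data, not actual Kara One features, and the SVM stands in for the paper's baseline support vector machines; the deep-belief networks are not reproduced here.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data: rows are EEG trials, columns are precomputed features.
# Replace with features extracted from the Kara One recordings.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))          # 200 trials, 64 features
y = rng.integers(0, 2, size=200)        # 0 = consonant, 1 = vowel (toy labels)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"5-fold accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")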
5-2-8 | TORGO database free for academic use. In the spirit of the season, I would like to announce the immediate availability of the TORGO database, free in perpetuity for academic use. This database combines acoustics and electromagnetic articulography from 8 individuals with speech disorders and 7 without, and totals over 18 GB. These data can be used for multimodal models (e.g., for acoustic-articulatory inversion), models of pathology, and augmented speech recognition, for example. More information (and the database itself) can be found here: http://www.cs.toronto.edu/~complingweb/data/TORGO/torgo.html.
5-2-9 | Datatang. Datatang is a leading global data provider specializing in customized data solutions, focusing on a variety of speech, image, and text data collection, annotation, and crowdsourcing services.
Summary of the new datasets (2018) and a brief plan for 2019.
Speech data (with annotation) completed in 2018
Ongoing speech projects for 2019
On top of the above, more speech data collections are planned, such as Japanese speech data, children's speech data, dialect speech data, and so on.
Moreover, we will continue to provide these data at competitive prices while maintaining a high accuracy rate.
If you have any questions or need more details, do not hesitate to contact us at jessy@datatang.com. We would be happy to send you a sample or a specification of the data.
5-2-10 | Fearless Steps Corpus (University of Texas at Dallas). John H.L. Hansen, Abhijeet Sangwan, Lakshmish Kaushik, Chengzhu Yu, Center for Robust Speech Systems (CRSS), Erik Jonsson School of Engineering, The University of Texas at Dallas (UTD), Richardson, Texas, U.S.A.
5-2-11 | SIWIS French Speech Synthesis Database. The SIWIS French Speech Synthesis Database includes high-quality French speech recordings and associated text files, aimed at building TTS systems and investigating multiple speaking styles and emphasis. A total of 9,750 utterances from various sources, such as parliament debates and novels, were uttered by a professional French voice talent. A subset of the database contains emphasised words in many different contexts. The database includes more than ten hours of speech data and is freely available.
5-2-12 | JLCorpus - Emotional Speech corpus with primary and secondary emotions
To further the understanding of the wide array of emotions embedded in human speech, we are introducing an emotional speech corpus. In contrast to existing speech corpora, this corpus was constructed by maintaining an equal distribution of 4 long vowels in New Zealand English. This balance is intended to facilitate emotion-related formant and glottal source feature comparison studies. The corpus also has 5 secondary emotions along with 5 primary emotions. Secondary emotions are important in Human-Robot Interaction (HRI), where the aim is to model natural conversations among humans and robots, but there are very few existing speech resources for studying them; this work adds a speech corpus containing some secondary emotions.
Please use the corpus for emotional speech related studies. When you use it, please include the citation: Jesin James, Li Tian, Catherine Watson, 'An Open Source Emotional Speech Corpus for Human Robot Interaction Applications', in Proc. Interspeech, 2018.
To access the whole corpus, including the recording supporting files, click the following link: https://www.kaggle.com/tli725/jl-corpus (if you have already installed the Kaggle API, you can type the following command to download: kaggle datasets download -d tli725/jl-corpus)
Or, if you simply want the raw audio and txt files, click the following link: https://www.kaggle.com/tli725/jl-corpus/downloads/Raw%20JL%20corpus%20(unchecked%20and%20unannotated).rar/4
The corpus was evaluated by a large-scale human perception test with 120 participants. The links to the surveys are here:
For the primary emotion corpus: https://auckland.au1.qualtrics.com/jfe/form/SV_8ewmOCgOFCHpAj3
For the secondary emotion corpus: https://auckland.au1.qualtrics.com/jfe/form/SV_eVDINp8WkKpsPsh
These surveys will give an overall idea of the type of recordings in the corpus. The perceptually verified and annotated JL corpus will be given public access soon.
5-3-1 | Release of version 2 of FASST (Flexible Audio Source Separation Toolbox). http://bass-db.gforge.inria.fr/fasst/
This toolbox is intended to speed up the conception and automate the implementation of new model-based audio source separation algorithms. It has the following additions compared to version 1:
* Core in C++
* User scripts in MATLAB or Python
* Speedup
* Multichannel audio input
We provide 2 examples:
1. two-channel instantaneous NMF
2. real-world speech enhancement (2nd CHiME Challenge, Track 1)
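For readers unfamiliar with the model family, the sketch below shows a generic multiplicative-update NMF on a magnitude spectrogram, with a Wiener-style mask for reconstructing one source. It is an illustration of the approach, not FASST's actual API, and all data are toy values.

import numpy as np

# Generic Euclidean-distance NMF: V (freq x time) is approximated by W @ H.
rng = np.random.default_rng(0)
V = np.abs(rng.normal(size=(257, 100)))      # toy magnitude spectrogram
K = 8                                        # number of components
W = np.abs(rng.normal(size=(257, K)))        # spectral templates
H = np.abs(rng.normal(size=(K, 100)))        # temporal activations

for _ in range(100):                         # multiplicative updates
    H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
    W *= (V @ H.T) / (W @ H @ H.T + 1e-9)

# Wiener-style mask isolating the first component's contribution.
mask = (W[:, :1] @ H[:1, :]) / (W @ H + 1e-9)
source1 = mask * V
print(source1.shape)                         # (257, 100)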
5-3-2 | Cantor Digitalis, an open-source real-time singing synthesizer controlled by hand gestures. We are glad to announce the public release of Cantor Digitalis, an open-source real-time singing synthesizer controlled by hand gestures. It can be used, for example, for making music or for singing voice pedagogy. A wide variety of voices are available, from the classic vocal quartet (soprano, alto, tenor, bass) to the extreme colors of childish, breathy, roaring, etc. voices. All the features of vocal sounds are entirely under control, as the synthesis method is based on a mathematical model of voice production, without prerecorded segments. The instrument is controlled using chironomy, i.e. hand gestures, with the help of interfaces like a stylus or fingers on a graphic tablet, or a computer mouse. Vocal dimensions such as melody, vocal effort, vowel, voice tension, vocal tract size, breathiness, etc. can easily and continuously be controlled during performance, and special voices can be prepared in advance or using presets. Check out the capabilities of Cantor Digitalis through performance extracts from the ensemble Chorus Digitalis: http://youtu.be/_LTjM3Lihis?t=13s. In practice, this release provides:
Regards,
The Cantor Digitalis team (feedback welcome: cantordigitalis@limsi.fr)
Christophe d'Alessandro, Lionel Feugère, Olivier Perrotin
http://cantordigitalis.limsi.fr/
5-3-3 | MultiVec: a Multilingual and MultiLevel Representation Learning Toolkit for NLP
We are happy to announce the release of our new toolkit 'MultiVec' for computing continuous representations for text at different granularity levels (word-level or sequences of words). MultiVec includes Mikolov et al. [2013b]'s word2vec features, Le and Mikolov [2014]'s paragraph vector (batch and online), and Luong et al. [2015]'s model for bilingual distributed representations. MultiVec also includes different distance measures between words and sequences of words. The toolkit is written in C++ and is aimed at being fast (in the same order of magnitude as word2vec), easy to use, and easy to extend. It has been evaluated on several NLP tasks: the analogical reasoning task, sentiment analysis, and crosslingual document classification. The toolkit also includes C++ and Python libraries that you can use to query bilingual and monolingual models.
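As a reminder of what the simplest of these distance measures computes, the cosine distance between two embedding vectors can be written in a few lines of plain numpy. This is independent of MultiVec's own bindings, and the vectors below are toy values.

import numpy as np

def cosine_distance(u, v):
    """1 - cosine similarity; 0 means identical direction."""
    return 1.0 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Toy 4-dimensional 'word vectors'.
king = np.array([0.5, 1.2, -0.3, 0.7])
queen = np.array([0.4, 1.1, -0.2, 0.8])
print(cosine_distance(king, queen))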
The project is fully open to future contributions. The code is provided on the project webpage (https://github.com/eske/multivec) with installation instructions and command-line usage examples.
When you use this toolkit, please cite:
@InProceedings{MultiVecLREC2016, Title = {{MultiVec: a Multilingual and MultiLevel Representation Learning Toolkit for NLP}}, Author = {Alexandre Bérard and Christophe Servan and Olivier Pietquin and Laurent Besacier}, Booktitle = {The 10th edition of the Language Resources and Evaluation Conference (LREC 2016)}, Year = {2016}, Month = {May} }
The paper is available here: https://github.com/eske/multivec/raw/master/docs/Berard_and_al-MultiVec_a_Multilingual_and_Multilevel_Representation_Learning_Toolkit_for_NLP-LREC2016.pdf
Best regards,
Alexandre Bérard, Christophe Servan, Olivier Pietquin and Laurent Besacier
5-3-4 | An android application for speech data collection LIG_AIKUMA We are pleased to announce the release of LIG_AIKUMA, an android application for speech data collection, specially dedicated to language documentation. LIG_AIKUMA is an improved version of the Android application (AIKUMA) initially developed by Steven Bird and colleagues. Features were added to the app in order to facilitate the collection of parallel speech data in line with the requirements of a French-German project (ANR/DFG BULB - Breaking the Unwritten Language Barrier).
The resulting app, called LIG-AIKUMA, runs on various mobile phones and tablets and proposes a range of different speech collection modes (recording, respeaking, translation and elicitation). It was used for field data collections in Congo-Brazzaville resulting in a total of over 80 hours of speech.
Users who just want to use the app without access to the code can download it directly from the forge: https://forge.imag.fr/frs/download.php/706/MainActivity.apk
Code is also available on demand (contact elodie.gauthier@imag.fr and laurent.besacier@imag.fr).
More details on LIG_AIKUMA can be found on the following paper: http://www.sciencedirect.com/science/article/pii/S1877050916300448
5-3-5 | Web services via A||GO from IRISA-CNRS. It is our pleasure to introduce A||GO (https://allgo.inria.fr/ or http://allgo.irisa.fr/), a platform providing a collection of web services for the automatic analysis of various data, including multimedia content across modalities. The platform builds on the back-end web service deployment infrastructure developed and maintained by Inria's Service for Experimentation and Development (SED). Originally dedicated to multimedia content, A||GO has progressively broadened to other fields such as computational biology, networks and telecommunications, computational graphics, and computational physics.
5-3-6 | Clickable map - Illustrations of the IPA
5-3-7 | LIG-Aikuma running on mobile phones and tablets
5-3-8 | Python Library. We are pleased to announce the public availability of the first Python library for converting numbers written out in French into their digit representation.
The parser is robust and can segment and substitute number expressions in a stream of words, such as a conversation transcript. It recognizes the different variants of the language (quatre-vingt-dix / nonante...) and handles ordinals as well as integers, decimal numbers, and formal sequences (telephone numbers, credit card numbers...).
We hope this tool will be useful to those who, like us, work on natural language processing for French.
This library is released under the MIT license, which allows very liberal use.
Sources: https://github.com/allo-media/text2num
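To illustrate the task the library addresses, here is a small standalone sketch of French numeral parsing. It is our toy example, not the library's API (see the README at the link above for actual usage), and it covers only simple cases.

# Minimal illustration of French number-word parsing (not the library's API).
UNITS = {"zéro": 0, "un": 1, "deux": 2, "trois": 3, "quatre": 4, "cinq": 5,
         "six": 6, "sept": 7, "huit": 8, "neuf": 9, "dix": 10, "onze": 11,
         "douze": 12, "treize": 13, "quatorze": 14, "quinze": 15, "seize": 16,
         "vingt": 20, "trente": 30, "quarante": 40, "cinquante": 50,
         "soixante": 60, "quatre-vingt": 80, "quatre-vingts": 80}
MULTIPLIERS = {"cent": 100, "mille": 1000}

def french_to_int(text):
    """Parse a simple French numeral like 'quatre-vingt-dix-sept' -> 97."""
    # Protect the hyphen inside 'quatre-vingt' before splitting on hyphens.
    words = text.replace("quatre-vingt", "quatre_vingt").replace("-", " ").split()
    total, current = 0, 0
    for word in words:
        word = word.replace("quatre_vingt", "quatre-vingt")
        if word in MULTIPLIERS:
            current = max(current, 1) * MULTIPLIERS[word]
            if MULTIPLIERS[word] == 1000:
                total, current = total + current, 0
        elif word == "et":
            continue
        else:
            current += UNITS[word]
    return total + current

print(french_to_int("quatre-vingt-dix-sept"))   # 97
print(french_to_int("deux cent trente et un"))  # 231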