ISCApad #218
Wednesday, August 10, 2016, by Chris Wellekens
5-1-1 | Pierre-Yves Oudeyer, 'Aux sources de la parole: auto-organisation et évolution', Odile Jacob. Pierre-Yves Oudeyer, Research Director at Inria, has just published 'Aux sources de la parole: auto-organisation et évolution' with Odile Jacob (September 2013). [In French]
The book discusses the evolution and acquisition of speech, in children and in robots.
Bringing biology, linguistics, neuroscience and robotic experiments into dialogue, it examines in particular the self-organization phenomena that allow new languages to form spontaneously within a population of individuals.
It presents experiments in which a population of digital robots invents, shapes and negotiates its own speech system, and explains how such robotic experiments can help us better understand humans.
It also presents recent robotic experiments, drawing on new perspectives in artificial intelligence, in which curiosity mechanisms allow a robot to discover by itself its own body, the objects around it, and finally vocal interactions with its peers. In this way the robot's cognitive development self-organizes, and new hypotheses emerge for understanding child development.
Book website: http://goo.gl/A6EwTJ
Pierre-Yves Oudeyer,
Research Director, Inria
Head of the Flowers team
Inria Bordeaux Sud-Ouest and Ensta-ParisTech, France
Twitter: https://twitter.com/pyoudeyer
5-1-2 | Björn Schuller, Anton Batliner, Computational Paralinguistics: Emotion, Affect and Personality in Speech and Language Processing, Wiley, ISBN: 978-1-119-97136-8, 344 pages, November 2013
5-1-3 | Li Deng and Dong Yu, Deep Learning: Methods and Applications, Foundations and Trends in Signal Processing. Foundations and Trends in Signal Processing (www.nowpublishers.com/sig) has published the following issue: Volume 7, Issue 3-4, Deep Learning: Methods and Applications, by Li Deng and Dong Yu (Microsoft Research, USA). http://dx.doi.org/10.1561/2000000039
5-1-4 | O. Niebuhr, R. Skarnitzl, 'Tackling the Complexity in Speech', Prague University Press
The present volume is meant to give the reader an impression of the range of questions and topics that are currently the subject of international research in the discovery of complexity, the organization of complexity, and the modelling of complexity. These are the main sections of our volume. Each section includes four carefully selected chapters. They deal with facets of speech production, speech acoustics, and/or speech perception or recognition, place them in an integrated phonetic-phonological perspective, and relate them in more or less explicit ways to aspects of speech technology. Therefore, we hope that this volume can help speech scientists with traditional training in phonetics and phonology to keep up with the latest developments in speech technology. In the opposite direction, speech researchers starting from a technological perspective will hopefully be inspired by reading about the questions, phenomena, and communicative functions currently addressed in phonetics and phonology. Either way, the future of speech research lies in international, interdisciplinary collaborations, and our volume is meant to reflect and facilitate such collaborations. https://e-shop.ff.cuni.cz/books/monographs_eng/opera_facultatis_philosophicae_eng/tackling_the_complexity_in_spee_-1153
5-1-5 | J. Li, L. Deng, R. Haeb-Umbach and Y. Gong, 'Robust Automatic Speech Recognition', Academic Press
5-1-6 | Barbosa, P. A. and Madureira, S. Manual de Fonética Acústica Experimental. Aplicações a dados do português. 591 p. São Paulo: Cortez, 2015. [In Portuguese]
5-1-7 | Damien Nouvel (Inalco), Maud Ehrmann (EPFL), Sophie Rosset (CNRS). Les entités nommées pour le traitement automatique des langues [Named Entities for Natural Language Processing; in French]
5-1-8 | R. Fuchs, 'Speech Rhythm in Varieties of English', Springer. The book has appeared in Springer's 'Prosody, Phonology and Phonetics' series: https://www.springer.com/gp/book/9783662478172
5-2-1 | ELRA - Language Resources Catalogue - Update (July 2016). We are happy to announce that two new Terminological Resources and two related Speech Resources are now available in our catalogue.
This multilingual lexicon covers Real Life Daily vocabulary in 32 languages. It contains 10,000 words for each language, XML-annotated for part-of-speech, gender, irregular forms and disambiguating information for homographs, and 10,000 additional headwords for 12 languages. For more information, see: http://catalog.elra.info/product_info.php?products_id=1272
This multilingual dataset covers Real Life Daily vocabulary in 28 languages. It contains 2,000 phrases for each language, organised under 12 topics and 67 subtopics. Romanization is provided for Arabic, Farsi and Hindi. For more information, see: http://catalog.elra.info/product_info.php?products_id=1273
This multilingual lexicon covers Real Life Daily vocabulary in 26 languages. It contains 10,000 words for each language, XML-annotated for part-of-speech, gender, irregular forms and disambiguating information for homographs, with the corresponding audio files recorded by a native speaker and 10,000 additional headwords with audio for 12 languages. For more information, see: http://catalog.elra.info/product_info.php?products_id=1274
ELRA-S0383 Collins Multilingual database (MLD) - PhraseBank with audio files: this multilingual dataset covers Real Life Daily vocabulary in 28 languages. It contains 2,000 phrases for each language, organised under 12 topics and 67 subtopics, and the corresponding audio files recorded by a native speaker. ISLRN: 398-655-047-044-5. For more information, see: http://catalog.elra.info/product_info.php?products_id=1275
For more information on the catalogue, please contact Valérie Mapelli (mapelli@elda.org). If you would like to enquire about having your resources distributed by ELRA, please do not hesitate to contact us. Visit our online catalogue: http://catalog.elra.info
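Lexica annotated this way lend themselves to simple programmatic filtering. Below is a minimal sketch using only Python's standard library; the element and attribute names are purely hypothetical, since the announcement does not specify the Collins MLD schema, and would need adapting after inspecting one of the distributed files:

```python
import xml.etree.ElementTree as ET

# Hypothetical schema: <entry pos="..." gender="..."><headword>...</headword></entry>.
# The real Collins MLD element/attribute names may differ; adjust after
# inspecting one of the distributed XML files.
tree = ET.parse("mld_sample.xml")  # hypothetical file name
for entry in tree.getroot().iter("entry"):
    headword = entry.findtext("headword", default="")
    pos = entry.get("pos", "")
    gender = entry.get("gender", "")
    if pos == "noun":  # e.g. keep only nouns
        print(f"{headword}\t{pos}\t{gender}")
```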
5-2-2 | LDC Newsletter (July 2016) In this Newsletter: Fall 2016 Data Scholarship Program 2015 User Survey Results New Publications: English Speed Networking Conversational Transcripts
Digital Archive of Southern Speech - NLP Version (DASS)
GALE Phase 3 and 4 Chinese Broadcast News Parallel Text
IARPA Babel Cantonese Language Pack IARPA-babel101b-v0.4c
Fall 2016 Data Scholarship Program
2015 User Survey Results
LDC conducted its fourth user survey in December 2015. This survey built on the previous surveys conducted in 2006, 2007 and 2012 to assess user sentiment, and also asked for evaluation of key LDC-related topics, including:
· opinions on the new website and usability of the Catalog
· use of and satisfaction with the enhanced user services and e-commerce system
· LDC's Data Management Plan capabilities
· suggestions for future publications and preferred data delivery methods
· use of web services for data access and processing
Overall, survey respondents were satisfied with LDC's data, membership options, website, Catalog and enhanced user services. Participants cited the top five most useful corpora received between 2012 and 2015 as OntoNotes Release 5.0, TIMIT, TAC KBP Reference Knowledge Base, Penn Discourse Treebank V 2.0, and Multi-Channel WSJ Audio. Three fourths of respondents prefer digital delivery of data, and the top three languages for current research demands were identified as English, Chinese and Spanish. We thank everyone who participated in this survey. Responses will benefit the future of the Consortium and will help LDC better meet the needs of our members and data licensees.
New Publications
(1) English Speed Networking Conversational Transcripts was developed at the University of the West of England and contains 388 transcripts of English face-to-face and instant messaging conversations about business ideas collected in 2014 and 2015 from participants (undergraduate students) playing different power roles.
This corpus was created to examine communication accommodation, specifically, the ways in which an individual's linguistic style is affected by social power and personality. The data were collected in two studies. In the first study, 40 participants had a series of paired five-minute face-to-face conversations, playing either a high, low or neutral power role. The same procedure was followed in the second study, except that participants discussed business ideas via instant messaging.
The face-to-face conversations were audio-recorded and transcribed verbatim.
All transcripts are presented as UTF-8 plain text files.
English Speed Networking Conversational Transcripts is distributed via web download. 2016 Subscription Members will automatically receive two copies of this corpus. 2016 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US $400.00
(2) Digital Archive of Southern Speech - NLP Version (DASS-NLP) was developed by LDC as an alternate version of Digital Archive of Southern Speech (DASS) (LDC2012S03) suitable for natural language processing and human language technology applications. Specifically, the original audio files have been converted to 16kHz 16-bit flac compressed wav and file names have been normalized to facilitate automatic processing.
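Once downloaded, the files can be checked against the stated audio spec programmatically. A minimal sketch, assuming the third-party Python soundfile package and a hypothetical local directory name:

```python
import glob

import soundfile as sf  # pip install soundfile

# Verify each FLAC file against the 16 kHz / 16-bit spec stated above.
for path in sorted(glob.glob("DASS-NLP/**/*.flac", recursive=True)):
    info = sf.info(path)
    ok = info.samplerate == 16000 and info.subtype == "PCM_16"
    print(f"{path}: {info.samplerate} Hz, {info.subtype}, {info.channels} ch"
          f"{'' if ok else '  <-- does not match the stated spec'}")
```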
DASS was developed by the University of Georgia. It is a subset of the Linguistic Atlas of the Gulf States (LAGS), which is in turn part of the Linguistic Atlas Project (LAP). DASS-NLP contains approximately 366 hours of English speech data from 30 female speakers and 34 male speakers, along with associated metadata about the speakers and the recordings, and maps in .jpeg format relating to the recording locations.
LAP consists of a set of survey research projects about the words and pronunciation of everyday American English, the largest project of its kind in the United States. Interviews with thousands of native speakers across the country have been carried out since 1929. LAGS surveyed the everyday speech of Georgia, Tennessee, Florida, Alabama, Mississippi, Arkansas, Louisiana, and Texas in a series of 914 audio-taped interviews conducted from 1968-1983.
The speakers' average age is 61 years; there are 30 women and 34 men from the Gulf States region represented in this release. The interviews cover common topics such as family, the weather, household articles and activities, agriculture and social conditions.
Digital Archive of Southern Speech - NLP Version is distributed via web download.
2016 Not-for-Profit Subscription Members will automatically receive two copies of this corpus. 2016 For-Profit Subscription Members will receive two copies provided they have submitted a completed copy of the For-Profit Member User License Agreement for Digital Archive of Southern Speech - NLP Version (LDC2016S05). 2016 Standard Members may request a copy as part of their 16 free membership corpora. This data is being made available at no cost for non-member organizations under a research license.
(3) GALE Phase 3 and 4 Chinese Broadcast News Parallel Text was developed by LDC. Along with other corpora, the parallel text in this release comprised training data for Phases 3 and 4 of the DARPA GALE (Global Autonomous Language Exploitation) Program. This corpus contains Chinese source text and corresponding English translations selected from broadcast news data collected by LDC between 2006 and 2008 and transcribed and translated by LDC or under its direction.
GALE Phase 3 and 4 Chinese Broadcast News Parallel Text includes 76 source-translation document pairs, comprising 614,608 tokens of Chinese source text and its English translation. Data is drawn from 16 distinct Chinese programs broadcast between 2006 and 2008 by China Central TV, a national and international broadcaster in Mainland China, and Phoenix TV, a Hong Kong-based satellite television station. The programs in this release feature news coverage of current events.
The files in this release were transcribed by LDC staff and/or transcription vendors under contract to LDC in accordance with the Quick Rich Transcription guidelines developed by LDC.
Source data and translations are distributed in TDF format. All data are encoded in UTF-8.
GALE Phase 3 and 4 Chinese Broadcast News Parallel Text is distributed via web download.
2016 Subscription Members will automatically receive two copies of this corpus. 2016 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US $1750.00
(4) IARPA Babel Cantonese Language Pack IARPA-babel101b-v0.4c was developed by Appen for the IARPA (Intelligence Advanced Research Projects Activity) Babel program. It contains approximately 215 hours of Cantonese conversational and scripted telephone speech collected in 2011 along with corresponding transcripts.
The Babel program focuses on underserved languages and seeks to develop speech recognition technology that can be rapidly applied to any human language to support keyword search performance over large amounts of recorded speech.
The Cantonese speech in this release represents that spoken in the Chinese provinces of Guangdong and Guangxi, and within those provinces, among five dialect groups. The gender distribution among speakers is approximately even; speakers' ages range from 16 years to 67 years. Calls were made using different telephones (e.g., mobile, landline) from a variety of environments including the street, a home or office, a public place, and inside a vehicle.
All audio data is presented as 8kHz 8-bit a-law encoded audio in sphere format. Transcripts are available in two versions: simplified Chinese characters and a romanization scheme based on the Yale system, both encoded in UTF-8.
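For tools that cannot read NIST SPHERE directly, the 8-bit A-law audio can be converted to 16-bit PCM WAV with the Python standard library alone. A minimal sketch, assuming an uncompressed single-channel file with the usual 1024-byte SPHERE header and a hypothetical file name (note that the audioop module was removed in Python 3.13, so this targets older interpreters; sox or sph2pipe are alternatives):

```python
import audioop  # stdlib; removed in Python 3.13
import wave

HEADER_BYTES = 1024  # standard NIST SPHERE header size; verify against the
                     # header's own header-size field if unsure

with open("babel_cantonese_sample.sph", "rb") as f:  # hypothetical name
    f.seek(HEADER_BYTES)   # skip the ASCII SPHERE header
    alaw_bytes = f.read()

pcm16 = audioop.alaw2lin(alaw_bytes, 2)  # decode A-law to 16-bit linear PCM

with wave.open("babel_cantonese_sample.wav", "wb") as w:
    w.setnchannels(1)     # assuming single-channel; check channel_count in the header
    w.setsampwidth(2)     # 16-bit samples
    w.setframerate(8000)  # 8 kHz, as stated above
    w.writeframes(pcm16)
```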
IARPA Babel Cantonese Language Pack IARPA-babel101b-v0.4c is distributed via web download.
2016 Subscription Members will receive two copies of this corpus provided they have submitted a completed copy of the IARPA User Agreement for Not-for-Profit Members or the IARPA User Agreement for For-Profit Members. 2016 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US $25.00 under a research license.
5-2-3 | Appen ButlerHill
Appen Butler Hill: a global leader in linguistic technology solutions
RECENT CATALOG ADDITIONS - MARCH 2012
1. Speech Databases
1.1 Telephony
2. Pronunciation Lexica
Appen Butler Hill has considerable experience in providing a variety of lexicon types. These include: pronunciation lexica providing phonemic representation, syllabification, and stress (primary and secondary as appropriate); part-of-speech tagged lexica providing grammatical and semantic labels; other reference text materials, including spelling/mis-spelling lists, spell-check dictionaries, mappings of colloquial language to standard forms, and orthographic normalization lists. Over a period of 15 years, Appen Butler Hill has generated a significant volume of licensable material for a wide range of languages. For holdings information in a given language or to discuss any customized development efforts, please contact: sales@appenbutlerhill.com
4. Other Language Resources
- Morphological Analyzers - Farsi/Persian & Urdu
- Arabic Thesaurus
- Language Analysis Documentation - multiple languages
For additional information on these resources, please contact: sales@appenbutlerhill.com
5. Customized Requests and Package Configurations
Appen Butler Hill is committed to providing a low-risk, high-quality, reliable solution and has worked in 130+ languages to date, supporting both large global corporations and government organizations. We would be glad to discuss any customized requests or package configurations and prepare a customized proposal to meet your needs.
5-2-4 | OFROM, the first corpus of French from French-speaking Switzerland. We would like to announce the online release of OFROM, the first corpus of French spoken in French-speaking Switzerland. In its current version, the archive contains about 15 hours of speech. It is transcribed in standard orthography in the Praat software. A concordancer allows searching the corpus and downloading the sound excerpts associated with the transcriptions.
To access the data and consult a fuller description of the corpus, please visit: http://www.unine.ch/ofrom.
5-2-5 | Real-world 16-channel noise recordings. We are happy to announce the release of DEMAND, a set of real-world 16-channel noise recordings.
5-2-6 | Support for finalizing oral or multimodal corpora for distribution, promotion and long-term archiving
The IRCOM consortium of the TGIR Corpus and the EquipEx ORTOLANG are joining forces to offer technical and financial support for finalizing oral or multimodal corpora, with a view to distribution and long-term preservation through the EquipEx ORTOLANG. This call does not concern the creation of new corpora, but the finalization of existing corpora that are not available in electronic form. By finalization we mean deposit in a public digital repository and entry into a long-term archiving pipeline. In this way, the speech data enriched by your research can be reused, cited and cumulatively enriched in turn, enabling the development of new knowledge, under the conditions of use that you choose (a selection of licenses for each deposited corpus).
This call is subject to several conditions (see below), and financial support is limited to 3,000 euros per project. Requests will be processed in the order in which they are received by IRCOM. Requests from EA units or small teams without dedicated 'corpus' technical support will be given priority. Requests may be submitted from 1 September 2013 to 31 October 2013. Funding decisions rest with the IRCOM steering committee. Requests not processed in 2013 may be processed in 2014. If you have doubts about the eligibility of your project, do not hesitate to contact us so that we can examine your request and adapt our future offers.
To compensate for the wide disparity in computing skills among the people and working groups producing corpora, IRCOM offers personalized support for corpus finalization. This support will be provided by an IRCOM engineer according to the requests made and adapted to the type of need, whether technical or financial.
The conditions for proposing a corpus for finalization and obtaining IRCOM support are:
Requests may concern any type of processing: processing of nearly finalized corpora (conversion, anonymization), alignment of already transcribed corpora, conversion from word-processing formats, digitization of older media. For any request requiring substantial manual work, applicants must commit human or financial resources commensurate with those provided by IRCOM and ORTOLANG.
IRCOM is aware of the exceptional and exploratory nature of this initiative. It should also be recalled that this funding is reserved for corpora that are already largely complete and cannot support creation ex nihilo. Given these resource constraints, the proposals most advanced in their realization may be processed first, in agreement with the IRCOM steering committee. There is, however, no 'theoretical' limit on the requests that can be made, since IRCOM can redirect requests outside its competence to other contacts.
Responses to this call should be sent to ircom.appel.corpus@gmail.com, using the two-page form below. In all cases, a personalized reply will be sent by IRCOM.
Proposals should describe the corpora concerned, the usage and ownership rights, and the nature of the formats or media used.
This call is organized under the responsibility of IRCOM, with joint financial participation from IRCOM and the EquipEx ORTOLANG.
For further information, note that the IRCOM website (http://ircom.corpus-ir.fr) is open and offers resources to the community: a glossary, an inventory of units and corpora, software resources (tutorials, comparisons, conversion tools), working group activities, training news, etc. IRCOM invites units to inventory their oral and multimodal corpora - 70 projects are already listed - to give better visibility to the resources already available, even if they are not all finalized.
The IRCOM steering committee
Please use this form to respond to the call. Thank you.
Response to the call for finalization of an oral or multimodal corpus
Corpus name:
Contact person: Email address: Telephone number:
Nature of the corpus data:
Are there recordings? What media (audio, video, other)? What is the total length of the recordings (number of tapes, number of hours, etc.)? What type of medium? What format (if known)?
Are there transcriptions? In what format (paper, word processor, transcription software)? What quantity (in hours, number of words, or number of transcriptions)?
Do you have metadata (statement of copyright and usage rights)?
Do you have a precise description of the people recorded?
Do you have informed-consent attestations for the people recorded? In (approximately) what year were the recordings made?
What is the language of the recordings?
Does the corpus include recordings of children or of people with a language disorder or pathology? If so, which population?
For efficiency and to advise you as quickly as possible, we need examples of the transcriptions or recordings in your possession. We will contact you about this, but you can already send us by email a sample of the data you have (transcriptions, metadata, address of a web page containing the recordings).
Thank you in advance for your interest in this offer. For any further information, please contact Martine Toda (martine.toda@ling.cnrs.fr) or ircom.appel.corpus@gmail.com.
5-2-7 | Rhapsodie: a Prosodic and Syntactic Treebank for Spoken French
We are pleased to announce that Rhapsodie, a syntactic and prosodic treebank of spoken French created with the aim of modeling the interface between prosody, syntax and discourse in spoken French, is now available at http://www.projet-rhapsodie.fr/
The Rhapsodie treebank is made up of 57 short samples of spoken French (5 minutes long on average, amounting to 3 hours of speech and a 33,000-word corpus) endowed with an orthographical phoneme-aligned transcription.
The corpus is representative of different genres (private and public speech; monologues and dialogues; face-to-face interviews and broadcasts; more or less interactive discourse; descriptive, argumentative and procedural samples; variations in planning type). The corpus samples have mainly been drawn from existing corpora of spoken French, and partially created within the frame of the Rhapsodie project. We would especially like to thank the coordinators of the CFPP2000, PFC, ESLO and C-Prom projects, as well as Piet Mertens, Mathieu Avanzi, Anne Lacheret and Nicolas Obin.
The sound samples (waves, MP3, cleaned and stylized pitch), the orthographic transcriptions (txt), the macrosyntactic annotations (txt), the prosodic annotations (xml, textgrid) and the metadata (xml and html) can be freely downloaded under the terms of the Creative Commons licence Attribution - Noncommercial - Share Alike 3.0 France. Microsyntactic annotations will be available soon. The metadata are searchable online through a browser. The prosodic annotation can be explored online through the Rhapsodie Query Language. Tutorials for transcription, annotation and the Rhapsodie Query Language are available on the site.
The Rhapsodie team (Modyco, Université Paris Ouest Nanterre): Sylvain Kahane, Anne Lacheret, Paola Pietrandrea, Atanas Tchobanov, Arthur Truong. Partners: IRCAM (Paris), LATTICE (Paris), LPL (Aix-en-Provence), CLLE-ERSS (Toulouse).
5-2-8 | Annotation of “Hannah and her sisters” by Woody Allen. We have created and made publicly available a dense audio-visual person-oriented ground-truth annotation of a feature movie (100 minutes long): “Hannah and her sisters” by Woody Allen. Jean-Ronan Vigouroux, Louis Chevallier Patrick Pérez Technicolor Research & Innovation
5-2-9 | French TTS: Text-to-Speech Synthesis
5-2-10 | Google's Language Model benchmark. An LM benchmark is available at: https://github.com/ciprian-chelba/1-billion-word-language-modeling-benchmark
Here is a brief description of the project.
'The purpose of the project is to make available a standard training and test setup for language modeling experiments. The training/held-out data was produced from a download at statmt.org using a combination of Bash shell and Perl scripts distributed here. This also means that your results on this data set are reproducible by the research community at large. Besides the scripts needed to rebuild the training/held-out data, it also makes available log-probability values for each word in each of ten held-out data sets, for each of the following baseline models:
ArXiv paper: http://arxiv.org/abs/1312.3005
Happy benchmarking!'
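The released per-word log-probability files make it easy to sanity-check a reimplementation against the baselines, since corpus perplexity follows directly from them. A minimal sketch, assuming a hypothetical one-value-per-line file layout and base-10 logs (the actual layout and log base should be checked against the distributed files):

```python
import math

def perplexity(path, log_base=10.0):
    """Corpus perplexity PPL = exp(-(1/N) * sum_i ln p(w_i)),
    computed from one log-probability per line (layout assumed)."""
    total_nats, n = 0.0, 0
    with open(path) as f:
        for line in f:
            total_nats += float(line) * math.log(log_base)  # convert to nats
            n += 1
    return math.exp(-total_nats / n)

print(perplexity("heldout-000.logprobs"))  # hypothetical file name
```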
5-2-11 | International Standard Language Resource Number (ISLRN) (ELRA press release). Press Release - Immediate - Paris, France, December 13, 2013. Establishing the International Standard Language Resource Number (ISLRN): 12 major NLP organisations announce the establishment of the ISLRN, a Persistent Unique Identifier to be assigned to each Language Resource. On November 18, 2013, 12 NLP organisations agreed to announce the establishment of the International Standard Language Resource Number (ISLRN), a Persistent Unique Identifier to be assigned to each Language Resource. Experiment replicability, an essential feature of scientific work, would be enhanced by such a unique identifier. Set up by ELRA, LDC and AFNLP/Oriental-COCOSDA, the ISLRN Portal will provide unique identifiers using a standardised nomenclature, as a service free of charge for all Language Resource providers. It will be supervised by a steering committee composed of representatives of participating organisations, enlarged whenever necessary. For more information on ELRA and the ISLRN, please contact Khalid Choukri: choukri@elda.org. For more information on ELDA, please contact Hélène Mazo: mazo@elda.org. ELRA, 55-57 rue Brillat Savarin, 75013 Paris (France). Tel.: +33 1 43 13 33 33, Fax: +33 1 43 13 33 30
5-2-12 | ISLRN new portal Opening of the ISLRN Portal
ELRA, LDC, and AFNLP/Oriental-COCOSDA announce the opening of the ISLRN Portal @ www.islrn.org.
5-2-13 | Speechocean – update (May 2016)
Speechocean: A global language resources and data services supplier
About Speechocean
Speechocean is a well-known provider of language-related resources and services in the fields of Human Computer Interaction and Human Language Technology. At present, we can provide data services covering 110+ languages and dialects across the world.
KingLine Data Center - Data Sharing Platform
Kingline Data Center is operated and supervised by Speechocean, and focuses mainly on creating and providing language resources for research and development in human language technology.
These diversified corpora are widely used for research and development in the fields of Speech Recognition, Speech Synthesis, Natural Language Processing, Machine Translation, Web Search, etc. All corpora are openly accessible to users all over the world, including scientific research institutions, enterprises and individuals.
For more detailed information, please visit our website: http://kingline.speechocean.com
New released corpora:
ID: King-ASR-360
This is a 1-channel Taiwanese Mandarin and English mixed-language mobile phone speech database, collected over a Samsung mobile phone. This database is owned by Beijing Haitian Ruisheng Science Technology Ltd (SpeechOcean, www.speechocean.com). 1,026 speakers were recorded in total, and each speaker recorded one session in one of three different environments: office, restaurant or street. After discarding some unqualified utterances, the corpus contains 321,890 utterances of mixed Taiwanese Mandarin and English speech from all speakers. The pure recording time is about 514 hours, including the leading silence (about 500 ms) and the trailing silence (about 500 ms). The total size of this database is 55.2 GB.
ID: King-ASR-148
This is a 3-channel Italian mobile phone speech database, collected over 3 different mobile operating systems simultaneously: iOS, Android and Windows Phone. This database is owned by Beijing Haitian Ruisheng Science Technology Ltd (SpeechOcean, www.speechocean.com). Recordings were made in three different environments: office, restaurant and street. The corpus contains 377,535 utterances of Italian speech from 300 speakers. The pure recording time is about 499.7 hours (3-channel), including the leading silence (about 500 ms) and the trailing silence (about 500 ms). The total size of this database is 53.7 GB. A pronunciation lexicon with phonemic transcription is also included. All the data was transcribed and labeled.
Contact Information
Xianfeng Cheng
VP
Tel: +86-10-62660928; +86-10-62660053 ext.8080
Mobile: +86 13681432590
Skype: xianfeng.cheng1
Email: chengxianfeng@speechocean.com; cxfxy0cxfxy0@gmail.com
Website: www.speechocean.com
5-2-14 | kidLUCID: London UCL Children's Clear Speech in Interaction Database. We are delighted to announce the availability of a new corpus of spontaneous speech for children aged 9 to 14 years inclusive, produced as part of the ESRC-funded project 'Speaker-controlled Variability in Children's Speech in Interaction' (PI: Valerie Hazan). Speech recordings (a total of 288 conversations) are available for 96 child participants (46M, 50F, range 9;0 to 15;0 years), all native southern British English speakers. Participants were recorded in pairs while completing the diapix spot-the-difference picture task, in which the pair verbally compared two scenes, only one of which was visible to each talker. High-quality digital recordings were made in sound-treated rooms. For each conversation, a stereo audio recording is provided with each speaker on a separate channel, together with a Praat TextGrid containing separate word- and phoneme-level segmentations for each speaker. There are six recordings per speaker pair, each made under a different condition.
The kidLUCID corpus is available online within the OSCAAR (Online Speech/Corpora Archive and Analysis Resource) archive (https://oscaar.ci.northwestern.edu/). Free access can be requested for research purposes. Further information about the project can be found at: http://www.ucl.ac.uk/pals/research/shaps/research/shaps/research/clear-speech-strategies This work was supported by Economic and Social Research Council Grant No. RES-062-23-3106.
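The word- and phoneme-level TextGrids described above can be read with any Praat-aware parser. A minimal sketch, assuming the third-party Python textgrid package; the file and tier names are hypothetical and should be taken from the corpus documentation:

```python
import textgrid  # pip install textgrid

# Load one conversation's annotation and dump one speaker's word tier.
tg = textgrid.TextGrid.fromFile("pair01_cond1.TextGrid")  # hypothetical name
word_tier = tg.getFirst("words_speakerA")                 # hypothetical tier name
for interval in word_tier:
    if interval.mark.strip():  # skip empty (silence) intervals
        print(f"{interval.minTime:.3f}\t{interval.maxTime:.3f}\t{interval.mark}")
```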
5-2-15 | Robust speech datasets and ASR software tools
5-2-16 | International Standard Language Resource Number (ISLRN) implemented by ELRA and LDC ELRA and LDC partner to implement ISLRN process and assign identifiers to all the Language Resources in their catalogues.
Following the meeting of the largest NLP organizations, the NLP12, and their endorsement of the International Standard Language Resource Number (ISLRN), ELRA and LDC partnered to implement the ISLRN process and to assign identifiers to all the Language Resources (LRs) in their catalogues. The ISLRN web portal was designed to enable the assignment of unique identifiers as a service free of charge for all Language Resource providers. To enhance the use of ISLRN, ELRA and LDC have collaborated to provide the ISLRN 13-digit ID to all the Language Resources distributed in their respective catalogues. Anyone who is searching the ELRA and LDC catalogues can see that each Language Resource is now identified by both the data centre ID and the ISLRN number. All providers and users of such LRs should refer to the latter in their own publications and whenever referring to the LR.
ELRA and LDC will continue their joint involvement in ISLRN through active participation in this web service.
Visit the ELRA and LDC catalogues, respectively at http://catalogue.elra.info and https://catalog.ldc.upenn.edu
Background The International Standard Language Resource Number (ISLRN) aims to provide unique identifiers using a standardised nomenclature, thus ensuring that LRs are correctly identified, and consequently, recognised with proper references for their usage in applications within R&D projects, product evaluation and benchmarking, as well as in documents and scientific papers. Moreover, this is a major step in the networked and shared world that Human Language Technologies (HLT) has become: unique resources must be identified as such and meta-catalogues need a common identification format to manage data correctly.
***About NLP12*** Representatives of the major Natural Language Processing and Computational Linguistics organizations met in Paris on 18 November 2013 to harmonize and coordinate their activities within the field.
*** About LDC *** The Linguistic Data Consortium (LDC) is an open consortium of universities, libraries, corporations and research laboratories that creates and distributes linguistic resources for language-related education, research and technology development. To find out more about LDC, please visit our web site: https://www.ldc.upenn.edu
5-2-17 | FREE and OPEN database for speaker recognition. I would like to ask for your help in building
a FREE and OPEN database
for speaker recognition.
More details and instructions below.
Many thanks,
Anthony Larcher
Recently, a number of laboratories specializing in text-dependent speaker recognition launched the RedDots project.
It is a voluntary initiative funded by the laboratories themselves.
Through a Google Group, the project fosters discussion on speaker recognition,
corpus collection and the use cases specific to this technology.
As part of the RedDots project, the Institute for Infocomm Research (Singapore) has developed an Android application
for recording data on a mobile phone.
The resulting database aims to address several shortcomings of existing corpora:
- cost (some standard databases are sold for several thousand euros)
- limited size (the small number of speakers no longer allows recognition systems to be evaluated in a meaningful way)
- limited variability (data are currently being recorded in more than 5 countries around the world)
To distribute a database that can freely benefit the whole research community, we are asking for your help.
How, and how long does it take?
- sign up in 2 minutes at the following address
- install the Android application on your phone in 2 minutes, and enter the ID and password sent to you by email
- record a 3-minute session on your phone
Everything takes less than 10 minutes.
One of the main limitations of existing corpora is the small number of sessions
recorded per speaker and the short time span over which those sessions are recorded.
To fill this gap, we hope that each participant will agree to record
several sessions over the coming months.
Ideally, each participant will record 3 or 4 minutes per week for a year.
Where do my data go, and what are they used for?
The data are currently sent to a server at the Institute for Infocomm Research
in Singapore, a public research institute.
By registering, you agree that these data will be used for research purposes
only. The data will be made available online, free of charge, throughout the project.
Thank you for your contribution; please feel free to circulate this email.
More details will be given shortly in a paper submitted to INTERSPEECH 2015.
Anthony Larcher
5-2-18 | ISLRN adopted by Joint Research Center (JRC) of the European Commission JRC, the EC's Joint Research Centre, an important LR player: First to adopt the ISLRN initiative
The Joint Research Centre (JRC), the European Commission's in house science service, is the first organisation to use the International Standard Language Resource Number (ISLRN) initiative and has requested ISLRN 13-digit unique identifiers to its Language Resources (LR).
The current JRC LRs (downloadable from https://ec.europa.eu/jrc/en/language-technologies) with an ISLRN ID are:
Background The International Standard Language Resource Number (ISLRN) aims to provide unique identifiers using a standardised nomenclature, thus ensuring that LRs are correctly identified, and consequently, recognised with proper references for their usage in applications within R&D projects, product evaluation and benchmarking, as well as in documents and scientific papers. Moreover, this is a major step in the networked and shared world that Human Language Technologies (HLT) has become: unique resources must be identified as such and meta-catalogues need a common identification format to manage data correctly.
*** About the JRC *** As the Commission's in-house science service, the Joint Research Centre's mission is to provide EU policies with independent, evidence-based scientific and technical support throughout the whole policy cycle.
*** About ELRA ***
5-2-19 | Forensic database of voice recordings of 500+ Australian English speakers
5-2-20 | Audio and Electroglottographic speech recordings
Audio and Electroglottographic speech recordings from several languages We are happy to announce the public availability of speech recordings made as part of the UCLA project 'Production and Perception of Linguistic Voice Quality'. http://www.phonetics.ucla.edu/voiceproject/voice.html Audio and EGG recordings are available for Bo, Gujarati, Hmong, Mandarin, Black Miao, Southern Yi, Santiago Matatlan/ San Juan Guelavia Zapotec; audio recordings (no EGG) are available for English and Mandarin. Recordings of Jalapa Mazatec extracted from the UCLA Phonetic Archive are also posted. All recordings are accompanied by explanatory notes and wordlists, and most are accompanied by Praat textgrids that locate target segments of interest to our project. Analysis software developed as part of the project – VoiceSauce for audio analysis and EggWorks for EGG analysis – and all project publications are also available from this site. All preliminary analyses of the recordings using these tools (i.e. acoustic and EGG parameter values extracted from the recordings) are posted on the site in large data spreadsheets. All of these materials are made freely available under a Creative Commons Attribution-NonCommercial-ShareAlike-3.0 Unported License. This project was funded by NSF grant BCS-0720304 to Pat Keating, Abeer Alwan and Jody Kreiman of UCLA, and Christina Esposito of Macalester College. Pat Keating (UCLA)
5-2-21 | Press release: Opening of the ELRA License Wizard Press Release - Immediate - Paris, France, April 2, 2015
Currently, the License Wizard allows the user to choose among several licenses that exist for the use of Language Resources: ELRA, Creative Commons and META-SHARE.
5-2-22 | EEG, face tracking and audio: 24 GB dataset Kara One, Toronto, Canada. We are making a new 24 GB dataset, called Kara One, freely available. This database combines 3 modalities (EEG, face tracking, and audio) during imagined and articulated speech using phonologically-relevant phonemic and single-word prompts. It is the result of a collaboration between the Toronto Rehabilitation Institute (in the University Health Network) and the Department of Computer Science at the University of Toronto.
In the associated paper (abstract below), we show how to accurately classify imagined phonological categories solely from EEG data. Specifically, we obtain up to 90% accuracy in classifying imagined consonants from imagined vowels and up to 95% accuracy in classifying stimulus from active imagination states using advanced deep-belief networks.
Data from 14 participants are available here: http://www.cs.toronto.edu/~complingweb/data/karaOne/karaOne.html.
If you have any questions, please contact Frank Rudzicz at frank@cs.toronto.edu.
Best regards, Frank
PAPER: Shunan Zhao and Frank Rudzicz (2015) Classifying phonological categories in imagined and articulated speech. In Proceedings of ICASSP 2015, Brisbane, Australia. ABSTRACT: This paper presents a new dataset combining 3 modalities (EEG, facial, and audio) during imagined and vocalized phonemic and single-word prompts. We pre-process the EEG data, compute features for all 3 modalities, and perform binary classification of phonological categories using a combination of these modalities. For example, a deep-belief network obtains accuracies over 90% on identifying consonants, which is significantly more accurate than two baseline support vector machines. We also classify between the different states (resting, stimuli, active thinking) of the recording, achieving accuracies of 95%. These data may be used to learn multimodal relationships, and to develop silent-speech and brain-computer interfaces.
5-2-23 | TORGO database free for academic use. In the spirit of the season, I would like to announce the immediate availability of the TORGO database, free in perpetuity for academic use. This database combines acoustics and electromagnetic articulography from 8 individuals with speech disorders and 7 without, and totals over 18 GB. These data can be used for multimodal models (e.g., for acoustic-articulatory inversion), models of pathology, and augmented speech recognition, for example. More information (and the database itself) can be found here: http://www.cs.toronto.edu/~complingweb/data/TORGO/torgo.html.
5-2-24 | Datatang. Datatang is a leading global data provider specializing in customized data solutions, focusing on the collection, annotation and crowdsourcing of a variety of speech, image and text data.
1. Speech data collection 2. Speech data synthesis 3. Speech data transcription. I've attached our company introduction for reference, as well as the available speech data lists as follows:
If you find any datasets of particular interest, we can also provide samples and pricing.
Regards
Runze Zhao, Overseas Sales Manager | Datatang Technology. China M: +86 185 1698 2583. 18 Zhongguancun St., Kemao Building Tower B, 18F, Beijing 100190
US M: +1 617 763 4722 640 W California Ave, Suite 210 Sunnyvale, CA 94086
5-3-1 | ROCme!: a free tool for audio corpora recording and management
5-3-2 | VocalTractLab 2.0: A tool for articulatory speech synthesis
5-3-3 | Bob signal-processing and machine learning toolbox (v1.2.0)
The toolbox is developed by the Biometrics Group at Idiap in Switzerland.
Dr. Elie Khoury, Postdoctoral Researcher, Biometric Person Recognition Group
IDIAP Research Institute (Switzerland), Tel: +41 27 721 77 23
5-3-4 | COVAREP: A Cooperative Voice Analysis Repository for Speech Technologies ======================
CALL for contributions
======================
We are pleased to announce the creation of an open-source repository of advanced speech processing algorithms called COVAREP (A Cooperative Voice Analysis Repository for Speech Technologies). COVAREP has been created as a GitHub project (https://github.com/covarep/covarep) where researchers in speech processing can store original implementations of published algorithms.
Over the past few decades a vast array of advanced speech processing algorithms have been developed, often offering significant improvements over the existing state-of-the-art. Such algorithms can have a reasonably high degree of complexity and, hence, can be difficult to accurately re-implement based on article descriptions. Another issue is the so-called 'bug magnet effect' with re-implementations frequently having significant differences from the original. The consequence of all this has been that many promising developments have been under-exploited or discarded, with researchers tending to stick to conventional analysis methods.
By developing the COVAREP repository we are hoping to address this by encouraging authors to include original implementations of their algorithms, thus resulting in a single de facto version for the speech community to refer to.
We envisage a range of benefits to the repository:
1) Reproducible research: COVAREP will allow fairer comparison of algorithms in published articles.
2) Encouraged usage: the free availability of these algorithms will encourage researchers from a wide range of speech-related disciplines (both in academia and industry) to exploit them for their own applications.
3) Feedback: as a GitHub project users will be able to offer comments on algorithms, report bugs, suggest improvements etc.
SCOPE
We welcome contributions from a wide range of speech processing areas, including (but not limited to): Speech analysis, synthesis, conversion, transformation, enhancement, speech quality, glottal source/voice quality analysis, etc.
REQUIREMENTS
In order to achieve a reasonable standard of consistency and homogeneity across algorithms we have compiled a list of requirements for prospective contributors to the repository. However, we intend the list of the requirements not to be so strict as to discourage contributions.
LICENCE
Getting contributing institutions to agree to a homogenous IP policy would be close to impossible. As a result COVAREP is a repository and not a toolbox, and each algorithm will have its own licence associated with it. Though flexible to different licence types, contributions will need to have a licence which is compatible with the repository, i.e. {GPL, LGPL, X11, Apache, MIT} or similar. We would encourage contributors to try to obtain LGPL licences from their institutions in order to be more industry friendly.
CONTRIBUTE!
We believe that the COVAREP repository has a great potential benefit to the speech research community and we hope that you will consider contributing your published algorithms to it. If you have any questions, comments issues etc regarding COVAREP please contact us on one of the email addresses below. Please forward this email to others who may be interested.
Existing contributions include: algorithms for spectral envelope modelling, adaptive sinusoidal modelling, fundamental frequency/voicing decision/glottal closure instant detection algorithms, methods for detecting non-modal phonation types, etc.
Gilles Degottex <degottex@csd.uoc.gr>, John Kane <kanejo@tcd.ie>, Thomas Drugman <thomas.drugman@umons.ac.be>, Tuomo Raitio <tuomo.raitio@aalto.fi>, Stefan Scherer <scherer@ict.usc.edu>
Website - http://covarep.github.io/covarep
GitHub - https://github.com/covarep/covarep
5-3-5 | Release of version 2 of FASST (Flexible Audio Source Separation Toolbox). http://bass-db.gforge.inria.fr/fasst/
This toolbox is intended to speed up the conception and to automate the implementation of new model-based audio source separation algorithms. It has the following additions compared to version 1:
* Core in C++
* User scripts in MATLAB or python
* Speedup
* Multichannel audio input
We provide 2 examples:
1. two-channel instantaneous NMF
2. real-world speech enhancement (2nd CHiME Challenge, Track 1)
5-3-6 | Cantor Digitalis, an open-source real-time singing synthesizer controlled by hand gestures. We are glad to announce the public release of Cantor Digitalis, an open-source real-time singing synthesizer controlled by hand gestures. It can be used e.g. for making music or for singing voice pedagogy. A wide variety of voices are available, from the classic vocal quartet (soprano, alto, tenor, bass) to the extreme colors of childish, breathy, roaring, etc. voices. All the features of vocal sounds are entirely under control, as the synthesis method is based on a mathematical model of voice production, without prerecorded segments. The instrument is controlled using chironomy, i.e. hand gestures, with the help of interfaces like a stylus or fingers on a graphic tablet, or a computer mouse. Vocal dimensions such as the melody, vocal effort, vowel, voice tension, vocal tract size, breathiness, etc. can easily and continuously be controlled during performance, and special voices can be prepared in advance or using presets. Check out the capabilities of Cantor Digitalis through performance extracts from the ensemble Chorus Digitalis: http://youtu.be/_LTjM3Lihis?t=13s. In practice, this release provides:
Regards,
The Cantor Digitalis team (who loves feedback — cantordigitalis@limsi.fr) Christophe d'Alessandro, Lionel Feugère, Olivier Perrotin http://cantordigitalis.limsi.fr/
5-3-7 | MultiVec: a Multilingual and MultiLevel Representation Learning Toolkit for NLP
We are happy to announce the release of our new toolkit “MultiVec” for computing continuous representations for text at different granularity levels (word-level or sequences of words). MultiVec includes Mikolov et al. [2013b]’s word2vec features, Le and Mikolov [2014]’s paragraph vector (batch and online) and Luong et al. [2015]’s model for bilingual distributed representations. MultiVec also includes different distance measures between words and sequences of words. The toolkit is written in C++ and is aimed at being fast (in the same order of magnitude as word2vec), easy to use, and easy to extend. It has been evaluated on several NLP tasks: the analogical reasoning task, sentiment analysis, and crosslingual document classification. The toolkit also includes C++ and Python libraries, that you can use to query bilingual and monolingual models.
The project is fully open to future contributions. The code is provided on the project webpage (https://github.com/eske/multivec) with installation instructions and command-line usage examples.
When you use this toolkit, please cite:
@InProceedings{MultiVecLREC2016,
  Title = {{MultiVec: a Multilingual and MultiLevel Representation Learning Toolkit for NLP}},
  Author = {Alexandre Bérard and Christophe Servan and Olivier Pietquin and Laurent Besacier},
  Booktitle = {The 10th edition of the Language Resources and Evaluation Conference (LREC 2016)},
  Year = {2016},
  Month = {May}
}
The paper is available here: https://github.com/eske/multivec/raw/master/docs/Berard_and_al-MultiVec_a_Multilingual_and_Multilevel_Representation_Learning_Toolkit_for_NLP-LREC2016.pdf
Best regards,
Alexandre Bérard, Christophe Servan, Olivier Pietquin and Laurent Besacier
5-3-8 | An android application for speech data collection LIG_AIKUMA We are pleased to announce the release of LIG_AIKUMA, an android application for speech data collection, specially dedicated to language documentation. LIG_AIKUMA is an improved version of the Android application (AIKUMA) initially developed by Steven Bird and colleagues. Features were added to the app in order to facilitate the collection of parallel speech data in line with the requirements of a French-German project (ANR/DFG BULB - Breaking the Unwritten Language Barrier).
The resulting app, called LIG-AIKUMA, runs on various mobile phones and tablets and proposes a range of different speech collection modes (recording, respeaking, translation and elicitation). It was used for field data collections in Congo-Brazzaville resulting in a total of over 80 hours of speech.
Users who just want to use the app without access to the code can download it directly from the forge direct link: https://forge.imag.fr/frs/download.php/706/MainActivity.apk
Code is also available on demand (contact elodie.gauthier@imag.fr and laurent.besacier@imag.fr).
More details on LIG_AIKUMA can be found on the following paper: http://www.sciencedirect.com/science/article/pii/S1877050916300448