ISCApad #183 |
Wednesday, September 11, 2013 by Chris Wellekens |
5-1-1 | G. Bailly, P. Perrier & E. Vatikiotis-Bateson (eds.): Audiovisual Speech Processing
5-1-2 | Fuchs, Susanne / Weirich, Melanie / Pape, Daniel / Perrier, Pascal (eds.): Speech Planning and Dynamics. Speech Production and Perception, Vol. 1 (series edited by Susanne Fuchs and Pascal Perrier). Peter Lang: Frankfurt am Main, Berlin, Bern, Bruxelles, New York, Oxford, Wien, 2012. 277 pp., 50 fig., 8 tables. Print: ISBN 978-3-631-61479-2 hb. SFR 60.00 / €* 52.95 / €** 54.50 / € 49.50 / £ 39.60 / US$ 64.95. eBook: ISBN 978-3-653-01438-9. SFR 63.20 / €* 58.91 / €** 59.40 / € 49.50 / £ 39.60 / US$ 64.95. Order online: www.peterlang.com
5-1-3 | Video archive of the Odyssey Speaker and Language Recognition Workshop, Singapore 2012. Odyssey 2012, the workshop of the ISCA SIG on Speaker and Language Characterization, was held in Singapore on 25-28 June 2012. The organizers are glad to announce that its video recordings have been included in the ISCA Video Archive: http://www.isca-speech.org/iscaweb/index.php/archive/video-archive
5-1-4 | Tuomas Virtanen, Rita Singh, Bhiksha Raj (eds.): Techniques for Noise Robustness in Automatic Speech Recognition. Wiley.
5-1-5 | Niebuhr, Oliver (ed.): Understanding Prosody: The Role of Context, Function and Communication. Series: Language, Context and Cognition 13, De Gruyter. http://www.degruyter.com/view/product/186201?format=G or http://linguistlist.org/pubs/books/get-book.cfm?BookID=63238
The volume represents a state-of-the-art snapshot of the research on prosody for phoneticians, linguists and speech technologists. It covers well-known models and languages. How are prosodies linked to speech sounds? What are the relations between prosody and grammar? What does speech perception tell us about prosody, particularly about the constituting elements of intonation and rhythm? The papers of the volume address questions like these with a special focus on how the notion of context-based coding, the knowledge of prosodic functions and the communicative embedding of prosodic elements can advance our understanding of prosody.
5-1-6 | Albert Di Cristo: « La Prosodie de la Parole : Une Introduction », Editions de Boeck-Solal (296 pp.).
Contents:
Foreword; Introduction;
Ch. 1: Elements of definition;
Ch. 2: The place of prosody in the language sciences and in the study of communication;
Ch. 3: Prosody on the two sides of interindividual oral communication (production and comprehension);
Ch. 4: Prosody and the brain;
Ch. 5: The material substance of prosody;
Ch. 6: Levels of analysis and representation of prosody;
Ch. 7: Theories and models of prosody and their formal apparatus;
Ch. 8: The plural functionality of prosody;
Ch. 9: The relations between prosody and meaning;
Epilogue;
Suggestions for further reading;
Index of terms;
Index of proper names.
5-2-1 | ELRA - Language Resources Catalogue - Update (2013-05)
5-2-2 | ELRA releases free Language Resources
5-2-3 | LDC Newsletter (August 2013)
In this newsletter:
- Mixer 6 now available!
- Fall 2013 LDC Data Scholarship Program - deadline approaching!
- LDC at Interspeech 2013, Lyon France

New publications:
- GALE Phase 2 Chinese Broadcast Conversation Parallel Text Part 2
- MADCAT Phase 3
- Mixer 6 Speech

Mixer 6 now available!
The release of Mixer 6 Speech this month marks the first time in close to a decade that LDC has made available a large-scale speech training data collection. Representing more than 15,000 hours of speech from over 500 speakers, Mixer 6 follows in the footsteps of the Switchboard and Fisher studies by providing a large database of rich telephone conversations, with the addition of subject interviews and transcript readings. Participants were native speakers of American English local to the Philadelphia area, providing further scope for a variety of research tasks. Mixer 6 Speech is a members-only release and a great reason to join the consortium. In addition to this substantial resource, members enjoy rights to other data released in 2013 and can license older publications at reduced fees. Please see the full announcement on the LDC website for details.
Fall 2013 LDC Data Scholarship Program - deadline approaching!
The deadline for the Fall 2013 LDC Data Scholarship Program is one month away! Student applications are being accepted now through the September deadline. Students can email their applications to the LDC Data Scholarship program; decisions will be sent by email from the same address.
LDC at Interspeech 2013, Lyon France
LDC will once again be exhibiting at Interspeech, held this year August 25-29 in Lyon. Please stop by LDC’s booth to learn about recent developments at the Consortium, including new publications.
Also, be on the lookout for the following presentations:
· Speech Activity Detection on YouTube Using Deep Neural Networks
· The Spectral Dynamics of Vowels in Mandarin Chinese
· Automatic Phonetic Segmentation using Boundary Models
LDC will continue to post conference updates via our Facebook page. We hope to see you there!
New publications:
(1) GALE Phase 2 Chinese Broadcast Conversation Parallel Text Part 2
This release includes 20 source-translation document pairs, comprising 152,894 characters of Chinese source text and its English translation. Data is drawn from six distinct Chinese programs broadcast in 2005-2007 from Phoenix TV, a Hong Kong-based satellite television station. Broadcast conversation programming is generally more interactive than traditional news broadcasts and includes talk shows, interviews, call-in programs and roundtable discussions. The programs in this release focus on current events topics.
The data was transcribed by LDC staff and/or transcription vendors under contract to LDC in accordance with Quick Rich Transcription guidelines developed by LDC. Transcribers indicated sentence boundaries in addition to transcribing the text. Data was manually selected for translation according to several criteria, including linguistic features, transcription features and topic features. The transcribed and segmented files were then reformatted into a human-readable translation format and assigned to translation vendors. Translators followed LDC's Chinese to English translation guidelines. Bilingual LDC staff performed quality control procedures on the completed translations.
GALE Phase 2 Chinese Broadcast Conversation Parallel Text Part 2 is distributed via web download.
2013 Subscription Members will automatically receive two copies of this data on disc. 2013 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$1750.
*
(2) MADCAT (Multilingual Automatic Document Classification Analysis and Translation) Phase 3
The goal of the MADCAT program is to automatically convert foreign text images into English transcripts. MADCAT Phase 3 data was collected from Arabic source documents in three genres: newswire, weblog and newsgroup text. Arabic speaking scribes copied documents by hand, following specific instructions on writing style (fast, normal, careful), writing implement (pen, pencil) and paper (lined, unlined). Prior to assignment, source documents were processed to optimize their appearance for the handwriting task, which resulted in some original source documents being broken into multiple pages for handwriting. Each resulting handwritten page was assigned to up to five independent scribes, using different writing conditions.
The handwritten, transcribed documents were next checked for quality and completeness, then each page was scanned at a high resolution (600 dpi, greyscale) to create a digital version of the handwritten document. The scanned images were then annotated to indicate the physical coordinates of each line and token. Explicit reading order was also labeled, along with any errors produced by the scribes when copying the text.
The final step was to produce a unified data format that takes multiple data streams and generates a single MADCAT XML output file which contains all required information. The resulting madcat.xml file contains distinct components: a text layer that consists of the source text, tokenization and sentence segmentation; an image layer that consists of bounding boxes; a scribe demographic layer that consists of scribe ID and partition (train/test); and a document metadata layer.
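As a rough illustration of that layered structure, here is a minimal Python sketch that walks a MADCAT-style XML file and counts elements from the text and image layers. The tag names used (segment, token, zone) are illustrative assumptions, not the official MADCAT schema; the corpus documentation defines the actual element inventory.

    import xml.etree.ElementTree as ET

    def summarize_madcat_xml(path):
        """Count elements from the layers of a MADCAT-style XML file.
        Tag names below are assumptions, not the real schema."""
        root = ET.parse(path).getroot()
        segments = root.findall(".//segment")  # text layer: sentence segmentation
        tokens = root.findall(".//token")      # text layer: tokenization
        zones = root.findall(".//zone")        # image layer: bounding boxes
        print(f"{path}: {len(segments)} segments, {len(tokens)} tokens, "
              f"{len(zones)} image zones")

    summarize_madcat_xml("example.madcat.xml")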
This release includes 4,540 annotation files in both GEDI XML and MADCAT XML formats (gedi.xml and madcat.xml) along with their corresponding scanned image files in TIFF format.
*
(3) Mixer 6 Speech
The speech data in this release was collected by LDC at its Human Subjects Collection facilities in Philadelphia. The telephone collection protocol was similar to other LDC telephone studies (e.g., Switchboard-2 Phase III Audio - LDC2002S06): recruited participants made calls over the public telephone network.
The multi-microphone portion of the collection utilized 14 distinct microphones installed identically in two multi-channel audio recording rooms at LDC. Each session was guided by collection staff using prompting and recording software to conduct the following activities: (1) repeat questions (less than one minute), (2) informal conversation (typically 15 minutes), (3) transcript reading (approximately 15 minutes) and (4) telephone call (generally 10 minutes). Speakers recorded up to three 45-minute sessions on distinct days. The 14 channels were recorded synchronously into separate single-channel files, using 16-bit PCM sample encoding at 16,000 samples/second.
The recordings in this corpus were used in NIST Speaker Recognition Evaluation (SRE) test sets for 2010 and 2012. Researchers interested in applying those benchmark test sets should consult the respective NIST Evaluation Plans for guidelines on allowable training data for those tests.
The collection contains 4,410 recordings made via the public telephone network and 1,425 sessions of multiple microphone recordings in office-room settings. The telephone recordings are presented as 8-kHz 2-channel NIST SPHERE files, and the microphone recordings are 16-kHz 1-channel flac/ms-wav files.
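A minimal loading sketch, assuming the Python soundfile package (libsndfile underneath, which can read both NIST SPHERE and FLAC containers); the file names are placeholders, and whether a particular SPHERE file decodes depends on its sample encoding, so verify against the corpus documentation:

    import soundfile as sf

    # Telephone side: 8-kHz, 2-channel NIST SPHERE (placeholder file name).
    tel, tel_rate = sf.read("mixer6_call.sph")
    print(f"telephone: {tel.shape} at {tel_rate} Hz")

    # Microphone side: 16-kHz, 1-channel FLAC, one file per channel. The 14
    # channels of a session were recorded synchronously, so equal-length
    # channel files can be stacked for array processing.
    mic, mic_rate = sf.read("mixer6_session_ch01.flac")
    print(f"microphone: {len(mic)} samples at {mic_rate} Hz")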
Mixer 6 Speech is distributed on one hard drive.
5-2-4 | Appen Butler Hill
A global leader in linguistic technology solutions. RECENT CATALOG ADDITIONS - MARCH 2012
1. Speech Databases
1.1 Telephony
2. Pronunciation Lexica
Appen Butler Hill has considerable experience in providing a variety of lexicon types. These include:
- Pronunciation lexica providing phonemic representation, syllabification, and stress (primary and secondary as appropriate); a hypothetical entry is sketched below
- Part-of-speech tagged lexica providing grammatical and semantic labels
- Other reference text-based materials, including spelling/mis-spelling lists, spell-check dictionaries, mappings of colloquial language to standard forms, and orthographic normalization lists
Over a period of 15 years, Appen Butler Hill has generated a significant volume of licensable material for a wide range of languages. For holdings information in a given language or to discuss any customized development efforts, please contact: sales@appenbutlerhill.com
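As a concrete, purely hypothetical illustration of such an entry, the Python sketch below parses a tab-separated lexicon line with ARPAbet-like phonemes, "." as syllable separator, and 1/2 digits as primary/secondary stress marks; none of this reflects Appen Butler Hill's actual delivery format.

    def parse_lexicon_line(line):
        """Split a hypothetical 'word<TAB>pronunciation' entry into syllables
        and per-syllable stress (1 = primary, 2 = secondary, 0 = unstressed)."""
        word, pron = line.rstrip("\n").split("\t")
        syllables = pron.split(" . ")
        stress = [1 if "1" in s else 2 if "2" in s else 0 for s in syllables]
        return word, syllables, stress

    entry = "dictionary\tD IH1 K . SH AH0 . N EH2 . R IY0"
    print(parse_lexicon_line(entry))
    # ('dictionary', ['D IH1 K', 'SH AH0', 'N EH2', 'R IY0'], [1, 0, 2, 0])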
4. Other Language Resources
- Morphological Analyzers: Farsi/Persian & Urdu
- Arabic Thesaurus
- Language Analysis Documentation: multiple languages
For additional information on these resources, please contact: sales@appenbutlerhill.com
5. Customized Requests and Package Configurations
Appen Butler Hill is committed to providing a low-risk, high-quality, reliable solution and has worked in 130+ languages to date, supporting both large global corporations and government organizations. We would be glad to discuss any customized requests or package configurations and prepare a customized proposal to meet your needs.
5-2-5 | OFROM: the first corpus of French spoken in French-speaking Switzerland
We would like to announce the online release of OFROM, the first corpus of French spoken in Suisse romande (French-speaking Switzerland). In its current version, the archive comprises roughly 15 hours of speech, transcribed in standard orthography with the Praat software. A concordancer allows users to search the corpus and to download the sound extracts associated with the transcriptions.
To access the data and consult a fuller description of the corpus, please visit: http://www.unine.ch/ofrom.
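For readers who want to work with the transcriptions outside the concordancer, here is a minimal Python sketch that pulls time-aligned interval texts out of a Praat TextGrid. It assumes the standard long TextGrid text format and a UTF-8 file; these are our assumptions, not statements about OFROM's actual delivery format (Praat can also write UTF-16).

    import re

    # Matches one interval block of a long-format TextGrid:
    #   intervals [i]: xmin = ... xmax = ... text = "..."
    INTERVAL_RE = re.compile(
        r'intervals \[\d+\]:\s*'
        r'xmin = ([\d.]+)\s*'
        r'xmax = ([\d.]+)\s*'
        r'text = "(.*?)"', re.DOTALL)

    def read_intervals(path):
        with open(path, encoding="utf-8") as f:  # adjust if the file is UTF-16
            grid = f.read()
        for xmin, xmax, text in INTERVAL_RE.findall(grid):
            if text.strip():                     # skip empty (pause) intervals
                yield float(xmin), float(xmax), text

    for start, end, text in read_intervals("example.TextGrid"):
        print(f"{start:8.3f} {end:8.3f} {text}")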
5-2-6 | Real-world 16-channel noise recordings. We are happy to announce the release of DEMAND, a set of real-world 16-channel noise recordings.
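One common use of such recordings is additive-noise simulation for robustness experiments. The Python sketch below mixes a noise excerpt into a clean utterance at a chosen signal-to-noise ratio; the file names are placeholders, and the matching sample rates and a noise file longer than the utterance are our assumptions.

    import numpy as np
    import soundfile as sf

    def mix_at_snr(speech, noise, snr_db):
        """Scale noise so that speech power / noise power equals 10^(snr_db/10)."""
        gain = np.sqrt(np.mean(speech**2) / (np.mean(noise**2) * 10**(snr_db / 10.0)))
        return speech + gain * noise

    speech, rate = sf.read("clean_utterance.wav")    # placeholder file names
    noise, noise_rate = sf.read("noise_channel01.wav")
    assert rate == noise_rate, "resample one signal first"
    # Noise recordings are long; cut a random excerpt to the utterance length.
    start = np.random.randint(0, len(noise) - len(speech))
    noisy = mix_at_snr(speech, noise[start:start + len(speech)], snr_db=5.0)
    sf.write("noisy_utterance.wav", noisy, rate)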
5-3-1 | ROCme!: a free tool for recording and managing audio corpora.
5-3-2 | VocalTractLab 2.0: a tool for articulatory speech synthesis.
5-3-3 | Voice Analysis Toolkit
Having just completed my PhD, I have made the algorithms I developed during it available online: https://github.com/jckane/Voice_Analysis_Toolkit
The Voice Analysis Toolkit contains algorithms for glottal source and voice quality analysis. In making the code available online, I hope that people in the speech processing community can benefit from it.
John Kane
Centre for Language and Communication Studies,
School of Linguistics, Speech and Communication Sciences, Trinity College Dublin, College Green, Dublin 2
Phone: (+353) 1 896 1348
Website: http://www.tcd.ie/slscs/postgraduate/phd-masters-research/student-pages/johnkane.php
Check out our workshop! http://muster.ucd.ie/workshops/iast/
5-3-4 | Bob signal-processing and machine learning toolbox (v1.2.0)
Bob is developed by the Biometrics Group at Idiap in Switzerland.
--
Dr. Elie Khoury
Post-doctoral researcher, Biometric Person Recognition Group
Idiap Research Institute (Switzerland)
Tel: +41 27 721 77 23
5-3-5 | COVAREP: an open-source repository of advanced speech processing algorithms
CALL FOR CONTRIBUTIONS
======================
We are pleased to announce the creation of an open-source repository of advanced speech processing algorithms called COVAREP (A Cooperative Voice Analysis Repository for Speech Technologies). COVAREP has been created as a GitHub project (https://github.com/covarep/covarep) where researchers in speech processing can store original implementations of published algorithms.
Over the past few decades a vast array of advanced speech processing algorithms have been developed, often offering significant improvements over the existing state-of-the-art. Such algorithms can have a reasonably high degree of complexity and, hence, can be difficult to accurately re-implement based on article descriptions. Another issue is the so-called 'bug magnet effect' with re-implementations frequently having significant differences from the original. The consequence of all this has been that many promising developments have been under-exploited or discarded, with researchers tending to stick to conventional analysis methods.
By developing the COVAREP repository we are hoping to address this by encouraging authors to include original implementations of their algorithms, thus resulting in a single de facto version for the speech community to refer to.
We envisage a range of benefits to the repository:
1) Reproducible research: COVAREP will allow fairer comparison of algorithms in published articles.
2) Encouraged usage: the free availability of these algorithms will encourage researchers from a wide range of speech-related disciplines (both in academia and industry) to exploit them for their own applications.
3) Feedback: as a GitHub project users will be able to offer comments on algorithms, report bugs, suggest improvements etc.
SCOPE
We welcome contributions from a wide range of speech processing areas, including (but not limited to): Speech analysis, synthesis, conversion, transformation, enhancement, speech quality, glottal source/voice quality analysis, etc.
REQUIREMENTS
In order to achieve a reasonable standard of consistency and homogeneity across algorithms, we have compiled a list of requirements for prospective contributors to the repository. However, the list of requirements is not intended to be so strict as to discourage contributions.
LICENCE
Getting contributing institutions to agree to a homogeneous IP policy would be close to impossible. As a result, COVAREP is a repository and not a toolbox, and each algorithm will have its own licence associated with it. Though flexible to different licence types, contributions will need to have a licence which is compatible with the repository, i.e. {GPL, LGPL, X11, Apache, MIT} or similar. We would encourage contributors to try to obtain LGPL licences from their institutions in order to be more industry friendly.
CONTRIBUTE!
We believe that the COVAREP repository has great potential benefit to the speech research community, and we hope that you will consider contributing your published algorithms to it. If you have any questions, comments, or issues regarding COVAREP, please contact us at one of the email addresses below. Please forward this email to others who may be interested.
Existing contributions include: algorithms for spectral envelope modelling, adaptive sinusoidal modelling, fundamental frequency / voicing decision / glottal closure instant detection, and methods for detecting non-modal phonation types.
Gilles Degottex <degottex@csd.uoc.gr>, John Kane <kanejo@tcd.ie>, Thomas Drugman <thomas.drugman@umons.ac.be>, Tuomo Raitio <tuomo.raitio@aalto.fi>, Stefan Scherer <scherer@ict.usc.edu>
Website - http://covarep.github.io/covarep
GitHub - https://github.com/covarep/covarep