ISCApad #168
Sunday, June 10, 2012, by Chris Wellekens
5-1-1 | Gokhan Tur and Renato De Mori (eds): Spoken Language Understanding: Systems for Extracting Semantic Information from Speech
Editors: Gokhan Tur and Renato De Mori
Web: http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470688246.html
Brief description: Spoken language understanding (SLU) is an emerging field between speech and language processing, investigating human/machine and human/human communication by leveraging technologies from signal processing, pattern recognition, machine learning and artificial intelligence. SLU systems are designed to extract the meaning of speech utterances, and their applications are vast, from voice search on mobile devices to meeting summarization, attracting interest from both commercial and academic sectors. Both human/machine and human/human communications can benefit from the application of SLU, using differing tasks and approaches to better understand and utilize such communications. This book covers state-of-the-art approaches for the most popular SLU tasks, with chapters written by well-known researchers in the respective fields. Key features:
- Presents a fully integrated view of the two distinct disciplines of speech processing and language processing for SLU tasks.
- Defines what is possible today for SLU as an enabling technology for enterprise (e.g., customer care centers or company meetings) and consumer (e.g., entertainment, mobile, car, robot, or smart environments) applications, and outlines the key research areas.
- Provides a unique source of distilled information on methods for computer modeling of semantic information in human/machine and human/human conversations.
The book is suitable for graduate courses in electronics engineering, computer science or computational linguistics. Moreover, technologists interested in processing spoken communications will find it a useful source of collated information on the topic, drawn from the two distinct disciplines of speech processing and language processing under the new area of SLU.
5-1-2 | Jody Kreiman and Diana Van Lancker Sidtis: Foundations of Voice Studies: An Interdisciplinary Approach to Voice Production and Perception
5-1-3 | G. Nick Clements and Rachid Ridouane (eds): Where Do Phonological Features Come From?
Edited by G. Nick Clements and Rachid Ridouane (CNRS & Sorbonne-Nouvelle).
This volume offers a timely reconsideration of the function, content, and origin of phonological features, in a set of papers that is theoretically diverse yet thematically strongly coherent. Most of the papers were originally presented at the International Conference 'Where Do Features Come From?' held at the Sorbonne University, Paris, October 4-5, 2007. Several invited papers are included as well. The articles discuss issues concerning the mental status of distinctive features, their role in speech production and perception, the relation they bear to measurable physical properties in the articulatory and acoustic/auditory domains, and their role in language development. Multiple disciplinary perspectives are explored, including those of general linguistics, phonetic and speech sciences, and language acquisition. The larger goal was to address current issues in feature theory and to take a step towards synthesizing recent advances in order to present a current 'state of the art' of the field.
5-1-4 | Dorothea Kolossa and Reinhold Haeb-Umbach: Robust Speech Recognition of Uncertain or Missing Data
Editors: Dorothea Kolossa and Reinhold Haeb-Umbach
Publisher: Springer
Year: 2011
ISBN: 978-3-642-21316-8
Link: http://www.springer.com/engineering/signals/book/978-3-642-21316-8?detailsPage=authorsAndEditors
Automatic speech recognition suffers from a lack of robustness with respect to noise, reverberation and interfering speech. The growing field of speech recognition in the presence of missing or uncertain input data seeks to ameliorate those problems by using not only a preprocessed speech signal but also an estimate of its reliability to selectively focus on those segments and features that are most reliable for recognition. This book presents the state of the art in recognition in the presence of uncertainty, offering examples that utilize uncertainty information for noise robustness, reverberation robustness, simultaneous recognition of multiple speech signals, and audiovisual speech recognition.
The book is appropriate for scientists and researchers in the field of speech recognition, who will find an overview of the state of the art in robust speech recognition; professionals working in speech recognition, who will find strategies for improving recognition results in various conditions of mismatch; and lecturers of advanced courses on speech processing or speech recognition, who will find a reference and a comprehensive introduction to the field. The book assumes an understanding of the fundamentals of speech recognition using Hidden Markov Models.
5-1-5 | Mohamed Embarki et Christelle Dodane: La coarticulation
LA COARTICULATION
Mohamed Embarki and Christelle Dodane
Speech is made of complex articulatory gestures that overlap in space and time. These overlaps, conceptualized under the term coarticulation, spare no articulator: they can be traced in the movements of the jaw, the lips, the tongue, the velum and the vocal folds. Coarticulation is also expected by the listener; coarticulated segments are better perceived. It plays a part in the cognitive and linguistic processes of speech encoding and decoding. Far more than a mere process, coarticulation is a structured research field with its own concepts and models. This collective volume gathers original contributions from international researchers addressing coarticulation from the motor, acoustic, perceptual and linguistic points of view. It is the first book on this question published in French, and the first to explore it across different languages.
Collection : Langue & Parole, L'Harmattan ISBN : 978-2-296-55503-7 • 25 € • 260 pages
5-1-6 | Ben Gold, Nelson Morgan, Dan Ellis: Speech and Audio Signal Processing: Processing and Perception of Speech and Music [2nd edition]
Digital copy: http://www.amazon.com/Speech-Audio-Signal-Processing-Perception/dp/product-description/1118142888
Hardcopy: http://www.amazon.com/Speech-Audio-Signal-Processing-Perception/dp/0470195363/ref=sr_1_1?s=books&ie=UTF8&qid=1319142964&sr=1-1
5-1-7 | Video proceedings of ERMITES 2011
The video proceedings of ERMITES 2011, 'Décomposition Parcimonieuse, Contraction et Structuration pour l'Analyse de Scènes', are online at http://glotin.univ-tln.fr/ERMITES11
They include some twenty hours of talks (in .mp4):
Y. Bengio, Montréal, «Apprentissage Non-Supervisé de Représentations Profondes» http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Y_Bengio_1sur4.mp4 ...
S. Mallat, Paris, «Scattering & Matching Pursuit for Acoustic Sources Separation» http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Mallat_1sur3.mp4 ...
J.-P. Haton, Nancy, «Analyse de Scène et Reconnaissance Stochastique de la Parole» http://lsis.univ-tln.fr/~glotin/ERMITES_2011_JP_Haton_1sur4.mp4 ...
M. Kowalski, Paris, «Sparsity and structure for audio signal: a *-lasso therapy» http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Kowalski_1sur5.mp4 ...
O. Adam, Paris, «Estimation de Densité de Population de Baleines par Analyse de leurs Chants» http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Adam.mp4
X. Halkias, New York, «Detection and Tracking of Dolphin Vocalizations» http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Halkias.mp4
J. Razik, Toulon, «Sparse coding: from speech to whales» http://lsis.univ-tln.fr/~glotin/ERMITES_2011_Razik.mp4
H. Glotin, Toulon, «Suivi & reconstruction du comportement de cétacés par acoustique passive»
P.S.: ERMITES 2012 will focus on vision (Y. Lecun, Y. Thorpe, P. Courrieu, M. Perreira, M. Van Gerven, ...)
5-1-8 | Zeki Majeed Hassan and Barry Heselwood (eds): Instrumental Studies in Arabic Phonetics
5-1-9 | G. Bailly, P. Perrier & E. Vatikiotis-Bateson (eds): Audiovisual Speech Processing
5-2-1 | ELRA - Language Resources Catalogue - Update (2012-03)
5-2-2 | LDC Newsletter (May 2012)
In this newsletter:
- LDC Timeline – Two Decades of Milestones
- New publications: LDC2012V01
LDC Timeline – Two Decades of Milestones
April 15 marks the 'official' 20th anniversary of LDC's founding. We'll be featuring highlights from the last two decades in upcoming newsletters, on the web and elsewhere. For a start, here's a brief timeline of significant milestones.
1992: The University of Pennsylvania is chosen as the host site for LDC in response to a call for proposals issued by DARPA; the mission of the new consortium is to operate as a specialized data publisher and archive guaranteeing widespread, long-term availability of language resources. DARPA provides seed money with the stipulation that LDC become self-sustaining within five years. Mark Liberman assumes duties as LDC's Director with a staff that grows to four, including Jack Godfrey, the Consortium's first Executive Director.
1993: LDC's catalog debuts. Early releases include benchmark data sets such as TIMIT, TIPSTER, CSR and Switchboard, shortly followed by the Penn Treebank.
1994: LDC and NIST (the National Institute of Standards and Technology) enter into a Cooperative R&D Agreement that provides the framework for the continued collaboration between the two organizations.
1995: Collection of conversational telephone speech and broadcast programming and transcription commences. LDC begins its long and continued support for NIST common task evaluations by providing custom data sets for participants. Membership and data license fees prove sufficient to support LDC operations, satisfying the requirement that the Consortium be self-sustaining.
1996: The Lexicon Development project, under the direction of Dr. Cynthia McLemore, begins releasing pronouncing lexicons in Mandarin, German, Egyptian Colloquial Arabic, Spanish, Japanese, and American English. By 1997 all six are published.
1997: LDC announces LDC Online, a searchable index of newswire and speech data with associated tools to compute n-gram models, mutual information and other analyses.
1998: LDC adds annotation to its task portfolio. Christopher Cieri joins LDC as Executive Director and develops the annotation operation.
1999: Steven Bird joins LDC; the organization begins to develop tools and best practices for general use. The Annotation Graph Toolkit results from this effort.
2000: LDC expands its support of common task evaluations from providing corpora to coordinating language resources across the program. Early examples include the DARPA TIDES, EARS and GALE programs.
2001: The Arabic treebank project begins.
2002: LDC moves to its current facilities at 3600 Market Street, Philadelphia, with a full-time staff of approximately 40 persons.
2004: LDC introduces the Standard and Subscription membership options, allowing members to choose whether to receive all or a subset of the data sets released in a membership year.
2005: LDC makes task specifications and guidelines available through its projects web pages.
2008: LDC introduces programs that provide discounts for continuing members and those who renew early in the year.
2010: LDC inaugurates the Data Scholarship program for students with a demonstrable need for data.
2012: LDC's full-time staff of 50 and 196 part-time staff support ongoing projects and operations which include collecting, developing and archiving data, data annotation, tool development, sponsored-project support and multiple collaborations with various partners. The general catalog contains over 500 holdings in more than 50 languages. Over 85,000 copies of more than 1300 titles have been distributed to 3200 organizations in 70 countries.
New Publications
(1) 2005 NIST/USF Evaluation Resources for the VACE Program - Broadcast News was developed by researchers at the Department of Computer Science and Engineering, University of South Florida (USF), Tampa, Florida and the Multimodal Information Group at the National Institute of Standards and Technology (NIST). It contains approximately 60 hours of English broadcast news video data collected by LDC in 1998 and annotated for the 2005 VACE (Video Analysis and Content Extraction) tasks. The tasks covered by the broadcast news domain were human face tracking (FDT), text strings (TDT) (glyphs rendered within the video image for the text object detection and tracking task) and word-level text strings (TDT_Word_Level) (videotext OCR task).
The VACE program was established to develop novel algorithms for automatic video content extraction, multi-modal fusion, and event understanding. During VACE Phases I and II, the program made significant progress in the automated detection and tracking of moving objects including faces, hands, people, vehicles and text in four primary video domains: broadcast news, meetings, street surveillance, and unmanned aerial vehicle motion imagery. Initial results were also obtained on automatic analysis of human activities and understanding of video sequences. Three performance evaluations were conducted under the auspices of the VACE program between 2004 and 2007. The 2005 evaluation was administered by USF in collaboration with NIST and guided by an advisory forum including the evaluation participants.
The broadcast news recordings were collected by LDC in 1998 from CNN Headline News (CNN-HDL) and ABC World News Tonight (ABC-WNT). CNN-HDL is a 24-hour/day cable-TV broadcast which presents top news stories continuously throughout the day. ABC-WNT is a daily 30-minute news broadcast that typically covers about a dozen different news items.
One ABC-WNT broadcast and up to four 30-minute sections of CNN-HDL were recorded each day. The CNN segments were drawn from the portion of the daily schedule that included closed captioning.
2005 NIST/USF Evaluation Resources for the VACE Program - Broadcast News is distributed on one hard drive. 2012 Subscription Members will automatically receive one copy of this data. 2012 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$6000.
(2) 2009 CoNLL Shared Task Part 1 contains the Catalan, Czech, German and Spanish trial corpora, training corpora, development and test data for the 2009 CoNLL (Conference on Computational Natural Language Learning) Shared Task Evaluation. The 2009 Shared Task developed syntactic dependency annotations, including semantic dependencies that model the roles of both verbal and nominal predicates. The Conference on Computational Natural Language Learning (CoNLL) is accompanied every year by a shared task intended to promote natural language processing applications and evaluate them in a standard setting. In 2008, the shared task focused on English and employed a unified dependency-based formalism, merging the task of syntactic dependency parsing with the task of identifying semantic arguments and labeling them with semantic roles; that data has been released by LDC as 2008 CoNLL Shared Task Data (LDC2009T12). The 2009 task extended the 2008 task to several languages (English plus Catalan, Chinese, Czech, German, Japanese and Spanish).
Among the new features were comparison of time and space complexity based on participants' input, and learning curve comparison for languages with large datasets. The 2009 shared task was divided into two subtasks: (1) parsing syntactic dependencies and (2) identification of arguments and assignment of semantic roles for each predicate.
The materials in this release consist of excerpts from the following corpora:
- AnCora (Spanish + Catalan): 500,000 words each of annotated news text developed by the University of Barcelona, the Polytechnic University of Catalonia, the University of Alacant and the University of the Basque Country
- Prague Dependency Treebank 2.0 (Czech): approximately 2 million words of annotated news, journal and magazine text developed by Charles University; also available through LDC (LDC2006T01)
- TIGER Treebank + SALSA Corpus (German): approximately 900,000 words of annotated news text and FrameNet annotation developed by the University of Potsdam, Saarland University and the University of Stuttgart
2009 CoNLL Shared Task Part 1 is distributed on one DVD-ROM. 2012 Subscription Members will automatically receive two copies of this data. 2012 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$200.
(3) 2009 CoNLL Shared Task Part 2 contains the Chinese and English trial corpora, training corpora, development and test data for the 2009 CoNLL (Conference on Computational Natural Language Learning) Shared Task Evaluation. The 2009 Shared Task developed syntactic dependency annotations, including semantic dependencies that model the roles of both verbal and nominal predicates.
The materials in this release consist of excerpts from the following corpora:
- Penn Treebank II (LDC95T7) (English): over one million words of annotated English newswire and other text developed by the University of Pennsylvania
- PropBank (LDC2004T14) (English): semantic annotation of newswire text from Treebank-2 developed by the University of Pennsylvania
- NomBank (LDC2008T23) (English): argument structure for instances of common nouns in Treebank-2 and Treebank-3 (LDC99T42) texts developed by New York University
- Chinese Treebank 6.0 (LDC2007T36) (Chinese): 780,000 words (over 1.28 million characters) of annotated Chinese newswire, magazine and administrative texts and transcripts from various broadcast news programs developed by the University of Pennsylvania and the University of Colorado
- Chinese Proposition Bank 2.0 (LDC2008T07) (Chinese): predicate-argument annotation on 500,000 words from Chinese Treebank 6.0 developed by the University of Pennsylvania and the University of Colorado
2009 CoNLL Shared Task Part 2 is distributed on one CD-ROM. 2012 Subscription Members will automatically receive two copies of this data. 2012 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$850.
(4) USC-SFI MALACH Interviews and Transcripts English was developed by the University of Southern California's Shoah Foundation Institute (USC-SFI), the University of Maryland, IBM and Johns Hopkins University as part of the MALACH (Multilingual Access to Large Spoken ArCHives) Project. It contains approximately 375 hours of interviews from 784 interviewees along with transcripts and other documentation. Inspired by his experience making Schindler's List, Steven Spielberg established the Survivors of the Shoah Visual History Foundation in 1994 to gather video testimonies from survivors and other witnesses of the Holocaust.
While most of those who gave testimony were Jewish survivors, the Foundation also interviewed homosexual survivors, Jehovah's Witness survivors, liberators and liberation witnesses, political prisoners, rescuers and aid providers, Roma and Sinti (Gypsy) survivors, survivors of eugenics policies, and war crimes trials participants. In 2006, the Foundation became part of the Dana and David Dornsife College of Letters, Arts and Sciences at the University of Southern California in Los Angeles and was renamed the USC Shoah Foundation Institute for Visual History and Education.
The goal of the MALACH project was to develop methods for improved access to large multinational spoken archives; the focus was advancing the state of the art of automatic speech recognition (ASR) and information retrieval. The characteristics of the USC-SFI collection -- unconstrained, natural speech filled with disfluencies, heavy accents, age-related co-articulations, un-cued speaker and language switching and emotional speech -- were considered well-suited for that task. The work centered on five languages: English, Czech, Russian, Polish and Slovak. USC-SFI MALACH Interviews and Transcripts English was developed for the English speech recognition experiments.
The speech data in this release was collected beginning in 1994 under a wide variety of conditions ranging from quiet to noisy (e.g., airplane over-flights, wind noise, background conversations and highway noise). Approximately 25,000 of the USC-SFI collected interviews are in English and average approximately 2.5 hours each. The 784 interviews included in this release are each a 30-minute section of the corresponding larger interview. The interviews include accented speech over a wide range (e.g., Hungarian, Italian, Yiddish, German and Polish). This release includes transcripts of the first 15 minutes of each interview. The transcripts were created using Transcriber 1.5.1 and later modified.
USC-SFI MALACH Interviews and Transcripts English is distributed on five DVD-ROMs. 2012 Subscription Members will automatically receive two copies of this data provided that they have submitted a completed copy of the User License Agreement for USC-SFI MALACH Interviews and Transcripts English (LDC2012S05). 2012 Standard Members may request a copy as part of their 16 free membership corpora. Non-members may license this data for US$2000.
5-2-3 | Speechocean January 2012 update
Speechocean - Language Resource Catalogue - New Releases (01-2012)
Speechocean, a global provider of language resources and data services, has more than 200 large-scale databases available in 80+ languages and accents, covering the fields of text-to-speech, automatic speech recognition, text, machine translation, web search, videos, images, etc.
Speechocean is glad to announce that more speech resources have been released:
Chinese and English Mixing Speech Synthesis Database (Female)
The Chinese Mandarin TTS speech corpus contains the read speech of a native Chinese female professional broadcaster, recorded in a studio with high SNR (>35 dB) over two channels (AKG C4000B microphone and electroglottography (EGG) sensor). All speech data are segmented and labeled at the phone level. A pronunciation lexicon and pitch extracted from the EGG signal can also be provided on request.
France French Speech Recognition Corpus (desktop) – 50 speakers
This France French desktop speech recognition database was collected by SpeechOcean in France. It is part of our Speech Data – Desktop Project (SDD), which presently contains database collections for 30 languages. It contains the voices of 50 different native speakers, balanced by age (mainly 16–30, 31–45, 46–60), gender (28 males, 22 females) and regional accent. The script was specially designed to provide material for both training and testing of many classes of speech recognition applications. Each speaker recorded 500 utterances in a quiet office environment through two professional microphones. Each utterance is stored as 44.1 kHz, 16-bit uncompressed PCM and accompanied by an ASCII SAM label file containing the relevant descriptive information. A pronunciation lexicon with a phonemic transcription in SAMPA is also included.
UK English Speech Recognition Corpus (desktop) – 50 speakers
This UK English desktop speech recognition database was collected by SpeechOcean in England. It is part of our Speech Data – Desktop Project (SDD), which presently contains database collections for 30 languages. It contains the voices of 50 different native speakers, balanced by age (mainly 16–30, 31–45, 46–60), gender (28 males, 22 females) and regional accent. The script was specially designed to provide material for both training and testing of many classes of speech recognition applications. Each speaker recorded 500 utterances in a quiet office environment through two professional microphones. Each utterance is stored as 44.1 kHz, 16-bit uncompressed PCM and accompanied by an ASCII SAM label file containing the relevant descriptive information. A pronunciation lexicon with a phonemic transcription in SAMPA is also included.
US English Speech Recognition Corpus (desktop) – 50 speakers
This US English desktop speech recognition database was collected by SpeechOcean in the United States. It is part of our Speech Data – Desktop Project (SDD), which presently contains database collections for 30 languages. It contains the voices of 50 different native speakers, balanced by age (mainly 16–30, 31–45, 46–60), gender (25 males, 25 females) and regional accent. The script was specially designed to provide material for both training and testing of many classes of speech recognition applications. Each speaker recorded 500 utterances in a quiet office environment through two professional microphones. Each utterance is stored as 44.1 kHz, 16-bit uncompressed PCM and accompanied by an ASCII SAM label file containing the relevant descriptive information. A pronunciation lexicon with a phonemic transcription in SAMPA is also included.
Italian Speech Recognition Corpus (desktop) – 50 speakers
This Italian desktop speech recognition database was collected by SpeechOcean in Italy. It is part of our Speech Data – Desktop Project (SDD), which presently contains database collections for 30 languages. It contains the voices of 50 different native speakers, balanced by age (mainly 16–30, 31–45, 46–60), gender (23 males, 27 females) and regional accent. The script was specially designed to provide material for both training and testing of many classes of speech recognition applications. Each speaker recorded 500 utterances in a quiet office environment through two professional microphones. Each utterance is stored as 44.1 kHz, 16-bit uncompressed PCM and accompanied by an ASCII SAM label file containing the relevant descriptive information. A pronunciation lexicon with a phonemic transcription in SAMPA is also included.
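Each of the corpora above stores utterances as headerless 44.1 kHz, 16-bit uncompressed PCM alongside an ASCII SAM label file. As a minimal sketch of reading such raw audio (the function names are our own, and little-endian byte order and mono recording are assumptions, since the catalogue entries do not specify them):

```python
import numpy as np

def read_pcm(path, dtype="<i2"):
    """Load headerless 16-bit little-endian PCM samples, scaled to [-1, 1]."""
    samples = np.fromfile(path, dtype=dtype)
    return samples.astype(np.float32) / 32768.0

def duration_seconds(samples, sample_rate=44100):
    """Duration of a mono sample array at the corpora's 44.1 kHz rate."""
    return samples.shape[0] / sample_rate
```

Multi-channel data would need de-interleaving, and a real SAM label parser would read the descriptive fields; both are omitted here.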
For more information about our databases and services, please visit our website www.speechocean.com or our on-line catalogue at http://www.speechocean.com/en-Product-Catalogue/Index.html
If you have any inquiry regarding our databases and services, please feel free to contact us:
Xianfeng Cheng: Chengxianfeng@speechocean.com
Marta Gherardi: Marta@speechocean.com
5-2-4 | Appen Butler Hill
Appen Butler Hill
A global leader in linguistic technology solutions
RECENT CATALOG ADDITIONS – MARCH 2012
1. Speech Databases
1.1 Telephony
2. Pronunciation Lexica
Appen Butler Hill has considerable experience in providing a variety of lexicon types. These include:
- Pronunciation lexica providing phonemic representation, syllabification, and stress (primary and secondary as appropriate)
- Part-of-speech tagged lexica providing grammatical and semantic labels
- Other reference text-based materials, including spelling/mis-spelling lists, spell-check dictionaries, mappings of colloquial language to standard forms, and orthographic normalization lists
Over a period of 15 years, Appen Butler Hill has generated a significant volume of licensable material for a wide range of languages. For holdings information in a given language or to discuss any customized development efforts, please contact: sales@appenbutlerhill.com
4. Other Language Resources
- Morphological Analyzers – Farsi/Persian & Urdu
- Arabic Thesaurus
- Language Analysis Documentation – multiple languages
For additional information on these resources, please contact: sales@appenbutlerhill.com
5. Customized Requests and Package Configurations
Appen Butler Hill is committed to providing a low-risk, high-quality, reliable solution and has worked in 130+ languages to date, supporting both large global corporations and government organizations. We would be glad to discuss any customized requests or package configurations and prepare a customized proposal to meet your needs.
5-3-1 | Matlab toolbox for glottal analysis
I am pleased to announce that our Matlab toolbox for glottal analysis is now available on the web at:
http://tcts.fpms.ac.be/~drugman/Toolbox/
This toolbox includes the following modules:
- Pitch and voiced/unvoiced decision estimation
- Speech polarity detection
- Glottal Closure Instant determination
- Glottal flow estimation
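To give a feel for what the first module addresses, here is a crude voiced/unvoiced decision in Python. This is a generic textbook heuristic (frame energy plus zero-crossing rate, with arbitrarily chosen thresholds), not the algorithm the toolbox actually implements:

```python
import numpy as np

def is_voiced(frame, energy_thresh=0.01, zcr_thresh=0.25):
    """Crude voiced/unvoiced decision for one analysis frame.
    Voiced speech tends to show high energy and a low zero-crossing rate."""
    frame = frame - np.mean(frame)          # remove DC offset
    energy = np.mean(frame ** 2)            # mean power of the frame
    signs = np.signbit(frame).astype(np.int8)
    zcr = np.mean(np.abs(np.diff(signs)))   # sign changes per sample
    return bool(energy > energy_thresh and zcr < zcr_thresh)
```

On a 20 ms frame of a strong 100 Hz tone this returns True; on low-level noise it returns False. Real estimators, such as those in the toolbox, are considerably more robust.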
I am also glad to share my PhD thesis, entitled 'Glottal Analysis and its Applications': http://tcts.fpms.ac.be/~drugman/files/DrugmanPhDThesis.pdf
where you will find applications in speech synthesis, speaker recognition, voice pathology detection, and expressive speech analysis.
Hoping that this might be useful to you, and to see you soon,
Thomas Drugman
5-3-2 | ROCme!: a free tool for recording and managing audio corpora
5-3-3 | VocalTractLab 2.0: A tool for articulatory speech synthesis