
ISCApad #182

Saturday, August 10, 2013 by Chris Wellekens

5-3 Software
5-3-1 Matlab toolbox for glottal analysis

I am pleased to announce that our Matlab toolbox for glottal analysis is now available on the web at:

 

http://tcts.fpms.ac.be/~drugman/Toolbox/

 

This toolbox includes the following modules:

 

- Pitch and voiced-unvoiced decision estimation

- Speech polarity detection

- Glottal Closure Instant determination

- Glottal flow estimation
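
As a rough illustration of the first module listed above, here is a minimal Octave/Matlab sketch of an autocorrelation-based F0 estimate with a crude voiced-unvoiced decision for a single frame. It is not the toolbox's actual interface, only an indication of the kind of processing involved; the test signal, search range and threshold are arbitrary choices made for the example.

  % Minimal sketch (not the toolbox's API): autocorrelation-based F0 estimate
  % with a crude voiced/unvoiced decision for one 30 ms analysis frame.
  fs = 16000;                                   % sampling rate (Hz)
  t  = (0:round(0.03*fs)-1) / fs;               % 30 ms time axis
  x  = sin(2*pi*120*t) + 0.05*randn(size(t));   % synthetic voiced frame at 120 Hz
  x  = x - mean(x);                             % remove DC

  lags = round(fs/400):round(fs/60);            % candidate lags for 60-400 Hz
  r = zeros(size(lags));
  for k = 1:numel(lags)
      L = lags(k);
      r(k) = sum(x(1:end-L) .* x(1+L:end)) / sum(x.^2);   % normalized autocorrelation
  end

  [peak, idx] = max(r);                         % strongest periodicity peak
  f0 = fs / lags(idx);                          % lag -> fundamental frequency
  voiced = peak > 0.3;                          % ad-hoc periodicity threshold
  fprintf('F0 = %.1f Hz, voiced = %d\n', f0, voiced);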

 

I am also glad to share my PhD thesis, entitled “Glottal Analysis and its Applications”:

http://tcts.fpms.ac.be/~drugman/files/DrugmanPhDThesis.pdf

 

In it you will find applications in speech synthesis, speaker recognition, voice pathology detection, and expressive speech analysis.

 

Hoping this might be useful to you, and looking forward to seeing you soon,

 

Thomas Drugman


5-3-2 ROCme!: a free tool for audio corpora recording and management

ROCme!: a new free tool for recording and managing audio corpora.

The ROCme! software enables streamlined, autonomous, and paperless management of read-speech corpus recording.

Key features:
- free
- compatible with Windows and Mac
- configurable interface for collecting speaker metadata
- speakers scroll through the sentences on screen and record themselves autonomously
- configurable audio format

Download it at:
www.ddl.ish-lyon.cnrs.fr/rocme

 

5-3-3 VocalTractLab 2.0: A tool for articulatory speech synthesis


It is my pleasure to announce the release of the new major version 2.0 of VocalTractLab. VocalTractLab is an articulatory speech synthesizer and a tool to visualize and explore the mechanism of speech production with regard to articulation, acoustics, and control. It is available from http://www.vocaltractlab.de/index.php?page=vocaltractlab-download .
Compared to version 1.0, the new version brings many improvements to the implemented models of the vocal tract and vocal folds, to the acoustic simulation, and to articulatory control, as well as to the user interface. Most importantly, the new version now comes with a manual.

If you like, give it a try. Reports on bugs and any other feedback are welcome.

Peter Birkholz


5-3-4 Voice analysis toolkit
Having just completed my PhD, I have made the algorithms I developed during it available online: https://github.com/jckane/Voice_Analysis_Toolkit
The Voice Analysis Toolkit contains algorithms for glottal source and voice quality analysis. In making the code available online I hope that people in the speech processing community can benefit from it. I would really appreciate it if you could include a link to this in the software section of the next ISCApad (section 5-3).
 
Thanks for this.
John
 
--
Researcher
 
Phonetics and Speech Laboratory (Room 4074) Arts Block,
Centre for Language and Communication Studies,
School of Linguistics, Speech and Communication Sciences, Trinity College Dublin, College Green, Dublin 2
Phone: (+353) 1 896 1348
Website: http://www.tcd.ie/slscs/postgraduate/phd-masters-research/student-pages/johnkane.php
Check out our workshop!! http://muster.ucd.ie/workshops/iast/

5-3-5 Bob signal-processing and machine learning toolbox (v1.2.0)


The release 1.2.0 of the Bob signal-processing and machine learning toolbox is available.

Bob provides both efficient implementations of several machine learning algorithms and a framework to help researchers publish reproducible research.
   
   

It is developed by the Biometrics Group at Idiap in Switzerland.

   
The previous release of Bob provided:
* image, video and audio I/O interfaces (e.g. jpg, avi, wav),
* database accessors (e.g. FRGC, Labeled Faces in the Wild, and many others),
* image processing: Local Binary Patterns (LBPs), Gabor Jets, SIFT (see the sketch after this list),
* machines and trainers such as Support Vector Machines (SVMs), k-Means, Gaussian Mixture Models (GMMs), Inter-Session Variability modeling (ISV), Joint Factor Analysis (JFA), Probabilistic Linear Discriminant Analysis (PLDA), and the Bayesian intra/extra (personal) classifier.
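
To give an idea of what one of these building blocks computes, the following self-contained Octave/Matlab sketch implements the basic 3x3 Local Binary Pattern operator on a random stand-in image. It is written independently of Bob's Python/C++ API and is only meant as an illustration of the technique.

  % Plain Octave/Matlab sketch (not Bob's API): basic 3x3 Local Binary Patterns.
  I = rand(64, 64);                              % stand-in grayscale image in [0,1]
  [h, w] = size(I);
  lbp = zeros(h-2, w-2);                         % one code per interior pixel
  offs = [-1 -1; -1 0; -1 1; 0 1; 1 1; 1 0; 1 -1; 0 -1];   % 8 neighbours, clockwise
  centre = I(2:end-1, 2:end-1);
  for b = 1:8
      nb  = I(2+offs(b,1):end-1+offs(b,1), 2+offs(b,2):end-1+offs(b,2));
      lbp = lbp + double(nb >= centre) * 2^(b-1);          % set bit b where neighbour >= centre
  end
  feature = histc(lbp(:), 0:255);                % 256-bin LBP histogram used as a feature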

   
The new release of Bob brings the following features and improvements:
* Unified implementation of Local Binary Patterns (LBPs),
* Histograms of Oriented Gradients (HOG) implementation,
* Total variability (i-vector) implementation,
* Conjugate-gradient-based implementation of logistic regression,
* Improved multi-layer perceptron implementation (back-propagation can now easily be used in combination with any optimizer, e.g. L-BFGS),
* Pseudo-inverse-based method for Linear Discriminant Analysis,
* Covariance-based method for Principal Component Analysis,
* Whitening and within-class covariance normalization techniques (see the sketch after this list),
* Module for object detection and keypoint localization (bob.visioner),
* Module for audio processing, including feature extraction such as LFCC and MFCC,
* Improved extensions (satellite packages), which now support both Python and C++ code within an easy-to-use framework,
* Improved documentation and new tutorials,
* Support for Intel's MKL (in addition to ATLAS),
* Extended platform support (Arch Linux).
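
As a pointer to what the whitening item refers to, here is a plain Octave/Matlab sketch (again independent of Bob's API) of a covariance-based whitening transform; after the transform the data have approximately identity covariance.

  % Plain Octave/Matlab sketch (not Bob's API): covariance-based data whitening.
  X  = randn(500, 4) * [2 0 0 0; 1 1 0 0; 0 0 3 0; 0 0 1 0.5];  % correlated data, rows = samples
  mu = mean(X, 1);
  Xc = bsxfun(@minus, X, mu);            % centre the data
  C  = (Xc' * Xc) / (size(X, 1) - 1);    % sample covariance matrix
  [V, D] = eig(C);                       % eigendecomposition of the covariance
  W  = V * diag(1 ./ sqrt(diag(D)));     % whitening matrix: cov(Xc * W) ~ I
  Xw = Xc * W;                           % whitened data
  disp(cov(Xw));                         % approximately the 4x4 identity matrix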
   
This release represents a major milestone in Bob, with plenty of functionality improvements (more than 640 commits in total) and plenty of bug fixes.

• Sources and documentation
• Binary packages:
  • Ubuntu: 10.04, 12.04, 12.10 and 13.04
  • Mac OS X: works with 10.6 (Snow Leopard), 10.7 (Lion) and 10.8 (Mountain Lion)

For instructions on how to install the pre-packaged version on Ubuntu or OS X, consult our quick installation instructions (N.B. the OS X MacPorts package has not yet been upgraded; this will be done very soon, cf. https://trac.macports.org/ticket/39831).
   
   
Best regards,
Elie Khoury (on behalf of the Biometrics Group at Idiap, led by Sebastien Marcel)

   
     
--
Dr. Elie Khoury
Postdoctoral Researcher
Biometric Person Recognition Group
Idiap Research Institute (Switzerland)
Tel: +41 27 721 77 23

5-3-6 An open-source repository of advanced speech processing algorithms called COVAREP
CALL for contributions
======================
 
We are pleased to announce the creation of an open-source repository of advanced speech processing algorithms called COVAREP (A Cooperative Voice Analysis Repository for Speech Technologies). COVAREP has been created as a GitHub project (https://github.com/covarep/covarep) where researchers in speech processing can store original implementations of published algorithms.
 
Over the past few decades, a vast array of advanced speech processing algorithms has been developed, often offering significant improvements over the existing state of the art. Such algorithms can have a reasonably high degree of complexity and can therefore be difficult to re-implement accurately from article descriptions alone. Another issue is the so-called 'bug magnet effect', with re-implementations frequently differing significantly from the original. The consequence of all this has been that many promising developments have been under-exploited or discarded, with researchers tending to stick to conventional analysis methods.
 
With the COVAREP repository we hope to address this by encouraging authors to include original implementations of their algorithms, thus providing a single de facto version for the speech community to refer to.
 
We envisage a range of benefits to the repository:
1) Reproducible research: COVAREP will allow fairer comparison of algorithms in published articles.
2) Encouraged usage: the free availability of these algorithms will encourage researchers from a wide range of speech-related disciplines (both in academia and industry) to exploit them for their own applications.
3) Feedback: as a GitHub project users will be able to offer comments on algorithms, report bugs, suggest improvements etc.
 
SCOPE
We welcome contributions from a wide range of speech processing areas, including (but not limited to): Speech analysis, synthesis, conversion, transformation, enhancement, speech quality, glottal source/voice quality analysis, etc.
 
REQUIREMENTS
In order to achieve a reasonable standard of consistency and homogeneity across algorithms, we have compiled a list of requirements for prospective contributors to the repository. However, the list is not intended to be so strict as to discourage contributions.
  • Only published work can be added to the repository
  • The code must be available as open source
  • Algorithms should be coded in Matlab; however, we strongly encourage authors to make the code compatible with Octave in order to maximize usability
  • Contributions have to comply with a coding convention (see the GitHub site for the convention and a template). The convention only normalizes the inputs/outputs and the documentation; there is no restriction on the content of the functions (though comments are obviously encouraged). A hypothetical illustration is given after this list.
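
Purely as a hypothetical illustration (the actual convention and template are defined on the GitHub site), a contributed function might look like the sketch below: documented, Octave-compatible, and with normalized inputs (waveform plus sampling rate) and outputs (feature values plus a time axis). The function name and the feature it computes are invented for this example.

  % Hypothetical example only -- not the real COVAREP template.
  function [E, times] = example_frame_energy(wave, fs)
  % EXAMPLE_FRAME_ENERGY  Frame-wise log energy of a speech signal.
  %   [E, TIMES] = EXAMPLE_FRAME_ENERGY(WAVE, FS) returns the log energy (dB)
  %   and frame-centre time (s) of 25 ms frames hopped every 10 ms.
      frame = round(0.025 * fs);                % frame length in samples
      hop   = round(0.010 * fs);                % hop size in samples
      nfrm  = max(0, floor((numel(wave) - frame) / hop) + 1);
      E = zeros(nfrm, 1); times = zeros(nfrm, 1);
      for i = 1:nfrm
          seg      = wave((i-1)*hop + (1:frame));
          E(i)     = 10 * log10(sum(seg(:).^2) + eps);     % log energy in dB
          times(i) = ((i-1)*hop + frame/2) / fs;           % frame centre (s)
      end
  end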
 
LICENCE
Getting contributing institutions to agree to a homogeneous IP policy would be close to impossible. As a result, COVAREP is a repository and not a toolbox, and each algorithm will have its own licence associated with it. Though we are flexible about licence types, contributions will need to have a licence which is compatible with the repository, i.e. {GPL, LGPL, X11, Apache, MIT} or similar. We encourage contributors to try to obtain LGPL licences from their institutions in order to be more industry friendly.
 
CONTRIBUTE!
We believe that the COVAREP repository has great potential benefit for the speech research community, and we hope that you will consider contributing your published algorithms to it. If you have any questions, comments, issues, etc. regarding COVAREP, please contact us at one of the email addresses below. Please forward this email to others who may be interested.
 
Existing contributions include: algorithms for spectral envelope modelling, adaptive sinusoidal modelling, fundamental frequency/voicing decision/glottal closure instant detection, methods for detecting non-modal phonation types, etc.
 
Gilles Degottex <degottex@csd.uoc.gr>, John Kane <kanejo@tcd.ie>, Thomas Drugman <thomas.drugman@umons.ac.be>, Tuomo Raitio <tuomo.raitio@aalto.fi>, Stefan Scherer <scherer@ict.usc.edu>
 


