Compare Products

eSpeak App vs. RWTH ASR App

Features (eSpeak App)

* Includes different voices, whose characteristics can be altered.
* Can produce speech output as a WAV file.
* SSML (Speech Synthesis Markup Language) is supported (not completely), and also HTML.
* Compact size. The program and its data, including many languages, total about 2 MB.
* Can be used as a front-end to MBROLA diphone voices. eSpeak converts text to phonemes with pitch and length information.
* Can translate text into phoneme codes, so it could be adapted as a front end for another speech synthesis engine.
* Potential for other languages. Several are included in varying stages of progress. Help from native speakers for these or other languages is welcome.
* Development tools are available for producing and tuning phoneme data.
* Written in C (a minimal library-call sketch follows this list).
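The first items above correspond to ordinary calls into the eSpeak C library. The following is only a minimal sketch, not code from the eSpeak documentation: it initializes the engine, selects a bundled voice and speaks one sentence. The header path and the -lespeak link flag are assumptions that vary by distribution.

```c
#include <string.h>
#include <stdio.h>
#include <espeak/speak_lib.h>   /* header path is an assumption; some packages install espeak-ng/speak_lib.h */

int main(void)
{
    /* Initialize eSpeak for synchronous playback; returns the sample rate on success. */
    if (espeak_Initialize(AUDIO_OUTPUT_SYNCH_PLAYBACK, 0, NULL, 0) == EE_INTERNAL_ERROR) {
        fprintf(stderr, "eSpeak initialization failed\n");
        return 1;
    }

    /* Select one of the bundled voices by name. */
    espeak_SetVoiceByName("en");

    const char *text = "Hello from eSpeak.";
    /* Queue the whole string for synthesis; espeakCHARS_AUTO lets the library detect the encoding. */
    espeak_Synth(text, strlen(text) + 1, 0, POS_CHARACTER, 0,
                 espeakCHARS_AUTO, NULL, NULL);

    espeak_Synchronize();   /* wait until playback has finished */
    espeak_Terminate();
    return 0;
}
```

In common builds the command-line front end covers the other items directly, for example writing a WAV file (-w) or printing phoneme codes (-x) instead of playing audio.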

Features (RWTH ASR App)

• Decoder for large vocabulary continuous speech recognition
  ◦ word-conditioned tree search (supporting across-word models)
  ◦ optimized HMM emission probability calculation using SIMD instructions
  ◦ refined acoustic pruning using language model look-ahead
  ◦ word lattice generation
• Feature extraction
  ◦ a flexible framework for data processing: Flow
  ◦ MFCC features
  ◦ PLP features
  ◦ Gammatone features
  ◦ voicedness feature
  ◦ vocal tract length normalization (VTLN)
  ◦ support for several feature dimension reduction methods (e.g. LDA, PCA)
  ◦ easy implementation of new features as well as easy integration of external features using Flow networks
• Acoustic modeling
  ◦ Gaussian mixture distributions for HMM emission probabilities (a small numerical sketch follows this list)
  ◦ phonemes in triphone context (or shorter context)
  ◦ across-word context dependency of phonemes
  ◦ allophone parameter tying using phonetic decision trees (classification and regression trees, CART)
  ◦ globally pooled diagonal covariance matrix (other types of covariance modelling are possible, but not fully tested)
  ◦ maximum likelihood training
  ◦ discriminative training (minimum phone error (MPE) criterion)
  ◦ linear algebra support using LAPACK, BLAS
• Language modeling
  ◦ support for language models in ARPA format
  ◦ weighted grammars (weighted finite state automata)
• Neural networks (new in v0.6)
  ◦ training of arbitrarily deep feed-forward networks
  ◦ CUDA support for running on GPUs
  ◦ OpenMP support for running on CPUs
  ◦ variety of activation functions, training criteria and optimization algorithms
  ◦ sequence-discriminative training, e.g. MMI or MPE (new in v0.7)
  ◦ integration in the feature extraction pipeline ("tandem approach")
  ◦ integration in the search and lattice processing pipeline ("hybrid NN/HMM approach")
• Speaker adaptation
  ◦ constrained MLLR (CMLLR, "feature space MLLR", fMLLR)
  ◦ unsupervised maximum likelihood linear regression mean adaptation (MLLR)
  ◦ speaker / segment clustering using the Bayesian Information Criterion (BIC) as stop criterion
• Lattice processing
  ◦ n-best list generation
  ◦ confusion network generation and decoding
  ◦ lattice rescoring
  ◦ lattice-based system combination
• Input / output formats
  ◦ nearly all input and output data is in easily processable XML or plain text formats
  ◦ converter tools for the generation of NIST file formats are included
  ◦ HTK lattice format
  ◦ converter tools for HTK models
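The acoustic-modeling items above describe HMM emission probabilities as Gaussian mixtures with a globally pooled diagonal covariance. The following self-contained C sketch only illustrates that model and is not RWTH ASR code; the dimensions, weights, means and variances are made-up toy values, and the log-sum-exp form is used for numerical stability.

```c
#include <math.h>
#include <stdio.h>
#include <float.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Toy dimensions for illustration only. */
#define DIM  2   /* feature vector dimension        */
#define COMP 2   /* number of mixture components    */

/* Log-likelihood of one feature vector x under a diagonal-covariance
 * Gaussian mixture: weights w[c], means mu[c][d], and a single pooled
 * diagonal variance var[d] shared by all components. */
static double gmm_log_likelihood(const double x[DIM],
                                 const double w[COMP],
                                 const double mu[COMP][DIM],
                                 const double var[DIM])
{
    double log_term[COMP];
    double max_term = -DBL_MAX;

    for (int c = 0; c < COMP; ++c) {
        double log_gauss = 0.0;
        for (int d = 0; d < DIM; ++d) {
            double diff = x[d] - mu[c][d];
            log_gauss -= 0.5 * (log(2.0 * M_PI * var[d]) + diff * diff / var[d]);
        }
        log_term[c] = log(w[c]) + log_gauss;
        if (log_term[c] > max_term)
            max_term = log_term[c];
    }

    /* log-sum-exp over the mixture components for numerical stability */
    double sum = 0.0;
    for (int c = 0; c < COMP; ++c)
        sum += exp(log_term[c] - max_term);
    return max_term + log(sum);
}

int main(void)
{
    const double x[DIM]        = { 0.3, -1.1 };        /* one feature frame */
    const double w[COMP]       = { 0.6, 0.4 };         /* mixture weights   */
    const double mu[COMP][DIM] = { { 0.0, -1.0 },
                                   { 1.5,  0.5 } };    /* component means   */
    const double var[DIM]      = { 1.0, 0.8 };         /* pooled variances  */

    printf("log p(x | state) = %f\n", gmm_log_likelihood(x, w, mu, var));
    return 0;
}
```

A decoder evaluates such log-likelihoods for many HMM states at every frame, which is why the SIMD-optimized emission probability calculation listed above matters in practice.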

Languages

eSpeak App: C, C++
RWTH ASR App: C++

Source Type

eSpeak App: Open
RWTH ASR App: Closed

License Type

eSpeak App: GPL
RWTH ASR App: Proprietary

Pricing

eSpeak App: Free
RWTH ASR App: Requires registration on the site